paper_id
stringlengths 12
48
| title
stringlengths 12
155
| url
stringlengths 39
46
| abstract
stringlengths 389
2.11k
| ocr_markdown
stringlengths 18.1k
576k
|
---|---|---|---|---|
qiu-etal-2023-sccs | {SCCS}: Semantics-Consistent Cross-domain Summarization via Optimal Transport Alignment | https://aclanthology.org/2023.findings-acl.101 | Multimedia summarization with multimodal output (MSMO) is a recently explored application in language grounding. It plays an essential role in real-world applications, i.e., automatically generating cover images and titles for news articles or providing introductions to online videos. However, existing methods extract features from the whole video and article and use fusion methods to select the representative one, thus usually ignoring the critical structure and varying semantics with video/document. In this work, we propose a Semantics-Consistent Cross-domain Summarization (SCCS) model based on optimal transport alignment with visual and textual segmentation. Our method first decomposes both videos and articles into segments in order to capture the structural semantics, and then follows a cross-domain alignment objective with optimal transport distance, which leverages multimodal interaction to match and select the visual and textual summary. We evaluated our method on three MSMO datasets, and achieved performance improvement by 8{\%} {\&} 6{\%} of textual and 6.6{\%} {\&}5.7{\%} of video summarization, respectively, which demonstrated the effectiveness of our method in producing high-quality multimodal summaries. | # Sccs: Semantics-Consistent Cross-Domain Summarization Via Optimal Transport Alignment
Jielin Qiu1, Jiacheng Zhu1, Mengdi Xu1**, Franck Dernoncourt**2, Zhaowen Wang2, Trung Bui2, Bo Li3, Ding Zhao1**, Hailin Jin**2 1Carnegie Mellon University, 2Adobe Research, 3University of Illinois Urbana-Champaign
{jielinq,jzhu4,mengdixu,dingzhao}@andrew.cmu.edu, {dernonco,zhawang,bui,hljin}@adobe.com, [email protected]
## Abstract
![0_Image_0.Png](0_Image_0.Png)
Multimedia summarization with multimodal output (MSMO) is a recently explored application in language grounding. It plays an essential role in real-world applications, i.e., automatically generating cover images and titles for news articles or providing introductions to online videos. However, existing methods extract features from the whole video and article and use fusion methods to select the representative one, thus usually ignoring the critical structure and varying semantics with video/document. In this work, we propose a Semantics-Consistent Cross-domain Summarization (SCCS) model based on optimal transport alignment with visual and textual segmentation. Our method first decomposes both videos and articles into segments in order to capture the structural semantics, and then follows a cross-domain alignment objective with optimal transport distance, which leverages multimodal interaction to match and select the visual and textual summary. We evaluated our method on three MSMO datasets, and achieved performance improvement by 8% & 6% of textual and 6.6% &5.7% of video summarization, respectively, which demonstrated the effectiveness of our method in producing high-quality multimodal summaries.
## 1 Introduction
New multimedia content in the form of short videos and corresponding text articles has become a significant trend in influential digital media. This popular media type has been shown to be successful in drawing users' attention and delivering essential information in an efficient manner. Multimedia summarization with multimodal output (MSMO) has recently drawn increasing attention. Different from traditional video or textual summarization (Gygli et al., 2014; Jadon and Jasim, 2020), where the generated summary is either a keyframe or textual description, MSMO aims at producing both visual and textual summaries simultaneously, making this task more complicated. Previous works addressed the MSMO task by processing the whole video and the whole article together which overlooked the structure and semantics of different domains (Duan et al., 2022; Haopeng et al., 2022; Sah et al., 2017; Zhu et al., 2018; Mingzhe et al., 2020; Fu et al.,
2021, 2020).
The video and article can be regarded as being composed of several topics related to the main idea, while each topic specifically corresponds to one sub-idea. Thus, treating the whole video or article uniformly and learning a general representation ignores these structural semantics and easily leads to biased summarization. To address this problem, instead of learning averaged representations for the whole video & article, we focus on exploiting the original underlying structure. The comparison of our approach and previous works is illustrated in Figure 1. Our model first decomposes the video &
article into segments to discover the content structure, then explores the cross-domain semantics relationship at the segment level. We believe this is a promising approach to exploit the *consistency* lie in the structural semantics between different domains.
Previous models applied attention or fusion
![1_image_0.png](1_image_0.png)
mechanisms to compute image-text relevance scores, finding the best match of the sentences/images within the whole document/video, regardless of the context, which used one domain as an anchor. However, an outstanding anchor has more weight in selecting the corresponding pair. To overcome this, we believe the semantics structure is a crucial characteristic that can not be ignored. Based on this hypothesis, we propose Semantics-Consistent Cross-domain Summarization (SCCS), which explores segment-level crossdomain representations through Optimal Transport
(OT) based multimodal alignment to generate both visual and textual summaries. We decompose the video/document into segments based on its semantic structure, then generate sub-summaries of each segment as candidates. We select the final summary from these candidates instead of a global search, so all candidates are in a fair competition arena.
Our contributions can be summarized as follow:
- We propose SCCS (Semantics-Consistent Cross-domain Summarization), a segmentlevel alignment model for MSMO tasks.
- Our method preserves the structural semantics and explores the cross-domain relationship through optimal transport to match and select the visual and textual summary.
- On three datasets, our method outperforms baselines in both textual and video summarization results qualitatively and quantitatively.
- Our method serves as a hierarchical MSMO
framework and provides better interpretability via OT alignment. The OT coupling shows sparse patterns and specific temporal structure for the embedding vectors of ground-truthmatched video and text segments, providing interpretable learned representations.
Since MSMO generates both visual & textual summaries, We believe the optimal summary comes
from the video and text pair that are both 1) semantically consistent, and 2) best matched globally in a cross-domain fashion. In addition, our framework is more computationally efficient as it conducts cross-domain alignment at the segment level instead of inputting whole videos/articles.
## 2 Related Work
Multimodal Alignment Aligning representations from different modalities is important in multimodal learning. Exploring the explicit relationship across vision and language has drawn significant attention (Wang et al., 2020a). Xu et al. (2015);
Torabi et al. (2016); Yu et al. (2017) adopted attention mechanisms, Dong et al. (2021) composed pairwise joint representation, Chen et al. (2020b);
Wray et al. (2019); Zhang et al. (2018) learned fine-grained or hierarchical alignment, Lee et al.
(2018); Wu et al. (2019) decomposed the inputs into sub-tokens, Velickovic et al. (2018); Yao et al.
(2018) adopted graph attention for reasoning, and Yang et al. (2021); Gutmann and Hyvärinen (2010);
van den Oord et al. (2018); Radford et al. (2021)
applied contrastive learning algorithms.
Multimodal Summarization Multimodal summarization explored multiple modalities, i.e., audio signals, video captions, transcripts, video titles, etc, for a summary generation. Otani et al. (2016);
Yuan et al. (2019); Wei et al. (2018); Fu et al. (2020)
learned the relevance or mapping in the latent space between different modalities. In addition to only generating visual summaries, Li et al. (2017); Atri et al. (2021); Zhu et al. (2018) generated textual summaries by taking audio, transcripts, or documents as input along with videos or images, using seq2seq model (Sutskever et al., 2014) or attention mechanism (Bahdanau et al., 2015). Recent trending on the MSMO task has also drawn much attention (Zhu et al., 2018; Mingzhe et al., 2020;
![2_image_0.png](2_image_0.png)
Fu et al., 2021, 2020; Zhang et al., 2022). More related works are shown in Appendix B.
## 3 Methods
SCCS is a segment-level cross-domain semantics alignment model for the MSMO task, where MSMO aims at generating both visual and language summaries. We follow the problem setting in Mingzhe et al. (2020), for a multimedia source with documents and videos, the document XD =
{x1, x2*, ..., x*d} has d words, and the ground truth textual summary YD = {y1, y2*, ..., y*g} has g words. A corresponding video XV is associated with the document in pair, and there exists a ground truth cover picture YV that can represent the most important information to describe the video. Our SCCS model generates both textual summaries Y′D
and video keyframes Y′V
.
SCCS consists of five modules, as shown in Figure 3(a): video temporal segmentation (Section 3.1), visual summarization (Section 3.3), textual segmentation (Section 3.2), textual summarization (Section 3.4), and cross-domain alignment
(Section 3.5). Each module will be introduced in the following subsections.
## 3.1 Video Temporal Segmentation
Video temporal segmentation splits the original video into small segments, which summarization tasks build upon. The segmentation is formulated as a binary classification problem on the segment boundaries, similar to Rao et al. (2020). For a video XV , the video segmentation encoder separates the video sequence into segments [Xv1, Xv2*, ..., X*vm],
where n is the number of segments.
As shown in Figure 3(b), the video segmentation encoder contains a VTS module and a Bi-LSTM
(Graves and Schmidhuber, 2005). Video XV is first split into shots [Sv1, Sv2*, ..., S*vn] (Castellano, 2021), then the VTS module takes a clip of the video with 2ωb shots as input and outputs a boundary representation bi. The boundary representation captures both differences and relations between the shots before and after. VTS consists of two branches, VTSd and VTSr, as shown in Equation 1.
$$b_{i}=\text{VTS}\left(\left[S_{vi-}(\omega_{b-1}),\cdots,S_{vi+\omega_{b}}\right]\right)$$ $$=\left[\begin{array}{c}\text{VTS}_{d}\left(\left[S_{vi-}(\omega_{b-1}),\cdots,\text{P}_{vi}\right],\left[S_{v(i+1)},\cdots,S_{vi+\omega_{b}}\right]\right)\\ \text{VTS}_{r}\left(\left[S_{vi-}(\omega_{b-1}),\cdots,P_{vi},S_{v(i+1)},\cdots,S_{vi+\omega_{b}}\right]\right)\end{array}\right)\right]\tag{1}$$
VTSd is modeled by two temporal convolution
layers, each of which embeds the wb shots be-
fore and after the boundary, respectively, following an inner product operation to calculate the differences. VTSr contains a temporal convolution layer followed by a max pooling, aiming at capturing the relations of the shots. It predicts a sequence binary labels [pv1, pv2*, ..., p*vn] based on the sequence of representatives [b1, b2*, ..., b*n].
A Bi-LSTM (Graves and Schmidhuber, 2005) is used with stride ωt/2 shots to predict a sequence of coarse score [s1, s2*, ..., s*n], as shown in Equation 2,
[s1, s2*, ..., s*n] = Bi-LSTM ([b1, b2, · · · , bn]) (2)
where si ∈ [0, 1] is the probability of a shot boundary being a scene boundary. The coarse prediction pˆvi ∈ {0, 1} indicates whether the i-th shot boundary is a scene boundary by binarizing si with a threshold τ , pˆvi =
1 if si > τ 0 otherwise . The results with pˆvi = 1 result in the learned video segments
[Xv1, Xv2*, ..., X*vm].
## 3.2 Textual Segmentation
The textual segmentation module takes the whole document or articles as input and splits the original input into segments based on context understanding. We used a hierarchical BERT as the textual segmentation module (Lukasik et al., 2020), which is the current state-of-the-art method. As shown in Figure 3(c), the textual segmentation module contains two-level transformer encoders, where the first-level encoder is for sentence-level encoding, and the second-level encoder is for article-level encoding. The hierarchical BERT starts by encoding each sentence with BERTLARGE independently, then the tensors produced for each sentence are fed into another transformer encoder to capture the representation of the sequence of sentences. All the sequences start with a [CLS] token to encode each sentence with BERT at the first level. If the segmentation decision is made at the sentence level, we use the [CLS] token as input for the secondlevel encoder. The [CLS] token representations from sentences are passed into the article encoder, which can relate the different sentences through cross-attention.
## 3.3 Visual Summarization
The visual summarization module generates visual keyframes from each video segment as its corresponding summary. We use an encoder-decoder architecture with attention as the visual summarization module (Ji et al., 2020), taking each video segment as input and outputting a sequence of keyframes. The encoder is a Bi-LSTM (Graves and Schmidhuber, 2005) to model the temporal relationship of video frames, where the input is X = [x1, x2*, ..., x*T ] and the encoded representation is E = [e1, e2*, ...e*T ]. The decoder is a LSTM (Hochreiter and Schmidhuber, 1997) to generate output sequences D = [d1, d2*, ..., d*m].
To exploit the temporal ordering across the en-
$$([b_{1},b_{2},\cdots,b_{n}])$$
P
tire video, an attention mechanism is used: Et =
m i=1 α itei, s.t. Pn i=1 α it = 1. Similar in Hochreiter and Schmidhuber (1997), the decoder function can be written as:
$$\left[\begin{array}{c}p\left(d_{t}\mid\left\{d_{i}\mid i<t\right\},E_{t}\right)\\ s_{t}\end{array}\right]=\psi\left(s_{t-1},d_{t-1},E_{t}\right)\tag{3}$$
where stis the hidden state, Etis the attention vector at time t, α it is the attention weight between the inputs and the encoder vector, ψ is the decoder function (LSTM). To obtain α it
, the relevance score γ i t is computed by γ i t = score(st−1, ei),
where the score function decides the relationship between the i-th visual features ei and the output scores at time t: γ i t = e T
i Wast−1, α it =
exp(γ i t)/Pm j=1 exp(γ j t
).
## 3.4 Textual Summarization
Language summarization can produce a concise and fluent summary which should preserve the critical information and overall meaning. Our textual summarization module takes BART (Lewis et al.,
2020) as the summarization model to generate abstractive textual summary candidates. BART is a denoising autoencoder that maps a corrupted document to the original document it was derived from.
As in Figure 3(a), BART is an encoder-decoder Transformer pre-trained with a denoising objective on text. We take the fine-tuned BART on CNN
and Daily Mail datasets for the summarization task
(See et al., 2017b; Nallapati et al., 2016).
## 3.5 Cross-Domain Alignment Via Ot
Our cross-domain alignment (CDA) module learns the alignment between keyframes and textual summaries to generate the final multimodal summaries.
Our alignment module is based on OT, which has been explored in several cross-domain tasks
(Chen et al., 2020a; Yuan et al., 2020; Lu et al.,
2021). More OT introductions can be found in Appendix A.
As shown in Figure 3(d), in CDA, the image features V = {vk}
K
k=1 are extracted from pre-trained 1587 ResNet-101 (He et al., 2016) concatenated to faster R-CNN (Ren et al., 2015) as Yuan et al. (2020),
where an image can be represented as a set of detected objects, each associated with a feature vector. For text features, every word is embedded as a feature vector and processed by a Bi-GRU (Cho et al., 2014) to account for context (Yuan et al.,
2020). The extracted image and text embeddings are V = {vi}
K
1
, E = {ei}
M
1
, respectively.
As in Yuan et al. (2020), we take image and text sequence embeddings as two discrete distributions supported on the same feature representation space. Solving an OT transport plan between the two naturally constitutes a matching scheme to relate cross-domain entities (Yuan et al., 2020). To evaluate the OT distance, we compute a pairwise similarity between V and E using cosine distance:
$$C_{km}=C(e_{k},v_{m})=1-\frac{\mathbf{e}_{k}^{T}\mathbf{v}_{k}}{\|\mathbf{e}_{k}\|\ \|\mathbf{v}_{m}\|}\tag{4}$$ Then the OT can be formulated as:
$$\mathcal{L}_{\text{T}}(\mathbf{V},\mathbf{E})=\min_{\mathbf{T}}\sum_{k=1}^{K}\sum_{m=1}^{M}\mathbf{T}_{km}\mathbf{C}_{km}\tag{5}$$ $$\sum_{m}\mathbf{T}_{km}=\mu_{k},\quad\quad\sum_{k}\mathbf{T}_{km}=v_{m},$$ $K^{1}$ **m $\in$ [1 $M^{1}$ **T** $\in$ **T**$\times$**M** **is the
∀k ∈ [1, K], m ∈ [1, M], T ∈ R
K×M
+ is the transport matrix, dk and dm are the weight of vk and em in a given image and text sequence, respectively. We assume the weight for different features to be uniform, i.e., µk =1K
, vm =1M .
The objective of optimal transport involves solving linear programming and may cause potential computational burdens since it has O(n 3)
efficiency. To solve this issue, we add an entropic regularization term equation (5), and the objective of our optimal transport distance becomes:
$${\cal L}_{\rm OT}({\bf V},{\bf E})=\min_{\bf T}\sum_{k=1}^{K}\sum_{m=1}^{M}{\bf T}_{km}{\bf C}_{km}+\lambda H({\bf T})\tag{6}$$
where H(T) = Pi,j Ti,j log Ti,j is the entropy, and λ is the hyperparameter that balance the effect of the entropy term. Thus, we are able to apply the celebrated Sinkhorn algorithm (Cuturi, 2013) to efficiently solve the above equation in O(*nlogn*).
The optimal transport distance computed via the Sinkhorn algorithm is differentiable and can be implemented by Flamary et al. (2021). The algorithm is shown in Algorithm 1, where β is a hyperparameter, C is the cost matrix, ⊙ is Hadamard product, < ·, · > is Frobenius dot-product, matrices are in bold, the rest are scalars.
Algorithm 1 Compute Alignment Distance
![4_image_0.png](4_image_0.png)
1: **Input**: E = {ei}
M 1
, V = {vi}
K 1
, β
2: C = C(V, E), σ ← 1m 1m, T
(1) ← 11T
3: Gij ← exp −
Cij
β
![4_image_3.png](4_image_3.png)
4: for t = 1,2,3,...,N do
5: Q ← G ⊙ T
(t)
6: δ ← 1
KQσ , σ ← 1
![4_image_1.png](4_image_1.png)
![4_image_2.png](4_image_2.png)
## 7: T
(T+1) ← Diag(Δ)Q Diag(Σ)
8: **End For**
9: Dis =< Ct*, T >* 3.6 Multimodal Summaries
During training the alignment module, the Wasserstein distance (WD) between each keyframesentence pair of all the visual & textual summary candidates is computed, where the best match is selected as the final multimodal summaries.
## 4 Datasets And Baselines 4.1 Datasets
We evaluated our models on three datasets:
VMSMO dataset, Daily Mail dataset, and CNN
dataset from Mingzhe et al. (2020); Fu et al. (2021, 2020). The VMSMO dataset contains 184,920 samples, including articles and corresponding videos.
Each sample is assigned with a textual summary and a video with a cover picture. We adopted the available data samples from Mingzhe et al. (2020).
The Daily Mail dataset contains 1,970 samples, and the CNN dataset contains 203 samples, which include video titles, images, and captions, similar to Hermann et al. (2015). For data splitting, we take the same experimental setup as Mingzhe et al.
(2020) for the VMSMO dataset. For the Daily Mail dataset and CNN dataset, we split the data by 70%,
10%, and 20% for train, validation, and test sets, respectively, same as Fu et al. (2021, 2020).
## 4.2 Baselines
We select state-of-the-art MSMO baselines and representative pure video & textual summarization baselines for comparison. For the VMSMO dataset, we compare our method with (i) multimodal summarization baselines (MSMO, MOF (Zhu et al.,
2018, 2020), and DIMS (Mingzhe et al., 2020), (ii)
video summarization baselines (Synergistic (Guo et al., 2019) and PSAC (Li et al., 2019)), and (iii) textual summarization baselines (Lead (See et al.,
2017a), TextRank (Mihalcea and Tarau, 2004), PG
(See et al., 2017b), Unified (Hsu et al., 2018), and GPG (Shen et al., 2019)). For Daily Mail and CNN datasets, we compare our method with (i)
multimodal baselines (MSMO (Zhu et al., 2018),
Img+Trans (Hori et al., 2019), TFN (Zadeh et al.,
2017), HNNattTI (Chen and Zhuge, 2018), and M2SM (Fu et al., 2021, 2020)), (ii) video summarization baselines (VSUMM (De Avila et al., 2011)
and DR-DSN (Zhou et al., 2018a)), and (iii) textual summarization baselines (Lead3 (See et al., 2017a),
NN-SE (Cheng and Lapata, 2016), BART (Lewis et al., 2020), T5 (Raffel et al., 2019), and Pegasus (Zhang et al., 2019a)). More details about the baselines are introduced in Appendix C.
## 5 Experiments 5.1 **Experimental Setting And Implementation**
For the VTS module, we used the same model setting as Rao et al. (2020); Castellano (2021) and the same data splitting setting as Mingzhe et al. (2020); Fu et al. (2021, 2020) in the training process.
The visual summarization model is pre-trained on the TVSum (Song et al., 2015) and SumMe
(Gygli et al., 2014) datasets. TVSum dataset contains 50 edited videos downloaded from YouTube in 10 categories, and SumMe dataset consists of 25 raw videos recording various events. Frame-level importance scores for each video are provided for both datasets and used as ground-truth labels. The input visual features are extracted from pre-trained GoogLeNet on ImageNet, where the output of the pool5 layer is used as visual features.
For the textual segmentation module, due to the quadratic computational cost of transformers, we reduce the BERT's inputs to 64-word pieces per sentence and 128 sentences per document as Lukasik et al. (2020). We use 12 layers for both the sentence and the article encoders, for a total of 24 layers. In order to use the BERTBASE
checkpoint, we use 12 attention heads and 768dimensional word-piece embeddings. The hierarchical BERT model is pre-trained on the Wiki727K dataset (Koshorek et al., 2018), which contains 727 thousand articles from a snapshot of the English Wikipedia. We used the same data splitting method as Koshorek et al. (2018).
For textual summarization, we adopted the pretrained BART model from Lewis et al. (2020),
which contains 1024 hidden layers and 406M parameters and has been fine-tuned using CNN and Daily Mail datasets.
In the cross-domain alignment module, the feature extraction and alignment module is pretrained by MS COCO dataset (Lin et al., 2014) on the image-text matching task. We added the OT loss as a regularization term to the original matching loss to align the image and text more explicitly.
## 5.2 Evaluation Metrics
The quality of generated textual summary is evaluated by standard Rouge F1 (Lin, 2004) following previous works (See et al., 2017b; Chen et al., 2018; Mingzhe et al., 2020). ROUGE-1 (R-1), ROUGE-2
(R-2), and ROUGE-L (R-L) refer to the overlap of unigram, bigrams, and the longest common subsequence between the decoded summary and the reference, respectively (Lin, 2004). Due to the limitation of ROUGE, we also adopt BertScore (Zhang et al., 2019b) for evaluation.
For the VMSMO dataset, the quality of the chosen cover frame is evaluated by mean average precision (MAP) and recall at position (Rn@k) (Zhou et al., 2018c; Tao et al., 2019), where (Rn@k) measures if the positive sample is ranked in the top k positions of n candidates. For the Daily Mail dataset and CNN dataset, we calculate the cosine image similarity (Cos) between image references and the extracted frames (Fu et al., 2021, 2020).
## 5.3 Results And Discussion
The comparison results on the VMSMO dataset of multimodal, video, and textual summarization are shown in Table 1. Synergistic and PSAC are pure video summarization approaches, which did not perform as well as multimodal methods, like MOF
or DIMS, which means taking additional modality into consideration actually helps to improve the quality of the generated video summaries. Table 1 also shows the absolute performance improvement or decrease compared with the MSMO baseline, where the improvements are marked in red and decreases in blue. Overall, our method shows the highest absolute performance improvement than the previous methods on both textual and video summarization results. Our method shows the ability to preserve the structural semantics and is able to learn the alignment between keyframes and textual deceptions, which shows better performance than the previous ones. If comparing the quality of generated textual summaries, our method still outperforms the other multimodal baselines, like MSMO, MOF, DIMS, and also traditional textual summarization methods, like Lead, TextRank, PG,
Unified, and GPG, showing the alignment obtained
| Category | Methods | Textual | Video | | | | | |
|--------------------|--------------|-------------|--------------|-----------------|-----------------|-----------------|------------------|-------|
| R-1 | R-2 | R-L | MAP | R10@1 | R10@2 | R10@5 | | |
| Video | Synergistic | - | - | - | 0.558 | 0.444 | 0.557 | 0.759 |
| PSAC | - | - | - | 0.524 | 0.363 | 0.481 | 0.730 | |
| Lead | 16.2 | 5.3 | 13.9 | - | - | - | - | |
| TextRank | 13.7 | 4.0 | 12.5 | - | - | - | - | |
| PG | 19.4 | 6.8 | 17.4 | - | - | - | - | |
| Unified | 23.0 | 6.0 | 20.9 | - | - | - | - | |
| GPG | 20.1 | 4.5 | 17.3 | - | - | - | - | |
| Textual Multimodal | MSMO | 20.1 | 4.6 | 17.3 | 0.554 | 0.361 | 0.551 | 0.820 |
| MOF | 21.3 (↑ 0.8) | 5.7 (↑ 1.1) | 17.9 (↑ 0.6) | 0.615 (↑ 0.061) | 0.455 (↑ 0.094) | 0.615 (↑ 0.064) | 0.817 (↓ -0.003) | |
| DIMS | 25.1 (↑ 5.0) | 9.6 (↑ 5.0) | 23.2 (↑ 5.9) | 0.654 (↑ 0.100) | 0.524 (↑ 0.163) | 0.634 (↑ 0.083) | 0.824 (↑ 0.004) | |
| Ours | Ours-textual | 26.2 | 9.6 | 24.1 | - | - | - | - |
| Ours-video | - | - | - | 0.678 | 0.561 | 0.642 | 0.863 | |
| Ours | 27.1 (↑ 7.0) | 9.8 (↑ 5.2) | 25.4 (↑ 8.1) | 0.697 (↑ 0.143) | 0.582 (↑ 0.221) | 0.688 (↑ 0.137) | 0.895 (↑ 0.075) | |
by optimal transport can help to identify the crossdomain inter-relationships.
In Table 2, we show the comparison results with multimodal baselines on the Daily Mail and CNN
datasets. We can see that for the CNN datasets, our method shows competitive results with Img+Trans, TFN, HNNattTI, and M2SM on the quality of generated textual summaries. While on the Daily Mail dataset, our approach showed better performance on both textual summaries and visual summaries.
We also compare with the traditional pure video summarization baselines and pure textual summarization baselines on the Daily Mail dataset, and the results are shown in Table 2. We can find that our approach performed competitive results compared with NN-SE and M2SM for the quality of the generated textual summary. For visual summarization comparison, we can find that the quality of generated visual summary by our approach still outperforms the other visual summarization baselines.
Still, we also provide absolute performance comparison with baseline MSMO (Zhu et al., 2018), as shown in Table 2, our model achieved the highest performance improvement in both Daily Mail and CNN datasets compared with previous baselines.
If comparing the quality of generated textual summaries with language model (LM) baselines, our method also outperforms T5, Pegasus, and BART.
## 5.4 Human Evaluation
To provide human evaluation results, we asked 5 people (recruited from the institute) to score the results generated by different approaches of CNN and DailyMail datasets. We asked the human judges to score the results of 5 models: MSMO, TFN,
HNNattTI, M2SM, and SCCS, as 1-5, where 5 represents the best results. We averaged the voting results from 5 human judges. The performances of 5 models are listed in Table 3, showing the result by SCCS is better than the baselines.
| Table 3: Human evaluation results. | | | | | |
|--------------------------------------|------|------|----------|------|------|
| Method | MSMO | TFN | HNNattTI | M2SM | SCCS |
| Score | 1.84 | 2.36 | 3.24 | 3.4 | 4.16 |
## 5.5 Factual Consistency Evaluation
Factual consistency is used as another important evaluation criterion for evaluating summarization results (Honovich et al., 2022). For factual consistency, we adopted the method in Xie et al. (2021)
and followed the same setting. The same human annotators from Sec 5.4 provided human judgments.
We report Pearson correlation coefficient CoeP
here. The results of MSMO, Img+Trans, TFN,
HNNaatTI, M2SM, and ours, are shown in Table 4.
In summary, our methods show better results than baselines on factual consistency evaluations.
| Table 4: Factual consistency evaluation results. | | | | | | |
|----------------------------------------------------|-------|-----------|-------|----------|-------|-------|
| Datasets | MSMO | Img+Trans | TFN | HNNattTI | M2SM | SCCS |
| CNN | 40.12 | 41.23 | 41.52 | 42.33 | 42.59 | 44.37 |
| DailyMail | 50.31 | 50.65 | 50.72 | 51.37 | 51.69 | 53.16 |
## 5.6 Ablation Study
To evaluate each component's performance, we performed ablation experiments on different modalities and different datasets. For the VMSMO dataset, we compare the performance of using only visual information, only textual information, and multimodal information. The comparison result is shown in Table 1. We also carried out experiments on different modalities using Daily Mail dataset to show the performance of unimodal and multimodal components, and the results are shown in Table 2.
For ablation results, when only textual data is available, we adopt BERT (Devlin et al., 2019) to generate text embeddings and K-Means clustering
Category Methods CNN dataset Daily Mail dataset
![7_image_0.png](7_image_0.png)
![7_image_1.png](7_image_1.png)
to identify sentences closest to the centroid for textual summary selection. While if only video data is available, we solve the visual summarization task in an unsupervised manner, using K-Means clustering to cluster frames using the image histogram and then select the best frame from clusters based on the variance of laplacian as the visual summary.
From Table 1 and Table 2, we can find that multimodal methods outperform unimodal approaches, showing the effectiveness of exploring the relationship and taking advantage of the cross-domain alignments of generating high-quality summaries.
## 5.7 Interpretation
To show a deeper understanding of the multimodal alignment between the visual domain and language domain, we compute and visualize the transport plan to provide an interpretation of the latent representations, which is shown in Figure 4. When we are regarding the extracted embedding from both text and image spaces as the distribution over their corresponding spaces, we expect the optimal transport coupling to reveal the underlying similarity and structure. Also, the coupling seeks sparsity, which further helps to explain the correspondence between the text and image data.
Figure 4 shows comparison results of matched image-text pairs and non-matched ones. The top two pairs are shown as matched pairs, where there is an overlap between the image and the corresponding sentence. The bottom two pairs are shown as non-matched ones, where the overlapping of meaning between the image and text is relatively small.
The correlation between the image domain and the language domain can be easily interpreted by the learned transport plan matrix. In specific, the optimal transport coupling shows the pattern of sequentially structured knowledge. However, for non-matched image-sentences pairs, the estimated
![7_image_2.png](7_image_2.png)
couplings are relatively dense and barely contain any informative structure. As shown in Figure 4, we can find that the transport plan learned in the cross-domain alignment module demonstrates a way to align the features from different modalities to represent the key components. The visualization of the transport plan contributes to the interpretability of the proposed model, which brings a clear understanding of the alignment module.
## 6 Conclusion
In this work, we proposed SCCS, a segment-level Semantics-Consistent Cross-domain Summarization model for the MSMO task. Our model decomposed the video & article into segments based on the content to preserve the structural semantics, and explored the cross-domain semantics relationship via optimal transport alignment at the segment level.
The experimental results on three MSMO datasets show that SCCS outperforms previous summarization methods. We further provide interpretation by the OT coupling. Our approach provides a new direction for the MSMO task, which can be extended to many real-world applications.
## 7 Limitations
Due to the absence of large evaluation databases, we only evaluated our method on three publicly available datasets that can be used for the MSMO
task. The popular video databases, i.e., COIN and Howto100M datasets, can not be used in our task, since they lack narrations and key-step annotation. So a large evaluation database is highly needed for evaluating the performance of MSMO approaches.
As the nature of the summarization task, human preference has an inevitable influence on the performance, since the ground-truth labels were provided by human annotators. It's somehow difficult to quantitatively specify the quality of the summarization result, and current widely used evaluation metrics may not reflect the performance of the results very well. So we are seeking some new directions to find another idea for quality evaluation.
The current setting is short videos & short documents, due to the constrain of available data. To extend the current MSMO to a more general setting, i.e., much longer videos or documents, new datasets should be collected. However, this requires huge human effort in annotating and organizing a high-value dataset, which is extremely time-consuming and labor-intensive. Nevertheless, we believe the MSMO task is promising and can provide valuable solutions to many real-world problems. So if such a dataset is collected, we believe it could significantly boost the research in this field.
## 8 Ethics Statement
Our work aims at providing a better user experience when exploring online multimedia, and there is no new dataset collected. To the best of our knowledge, this application does not involve ethical issues, and we do not foresee any harmful uses of this study.
## 9 Acknowledgements
The research is partially supported by Adobe Research and the DARPA ADAPTER program. We also sincerely appreciate the suggestions and feedback from Daniel Fried.
## References
Sathyanarayanan N. Aakur and Sudeep Sarkar. 2019. A
perceptual prediction framework for self supervised event segmentation. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),
pages 1197–1206.
Sawsan Alqahtani, Garima Lalwani, Yi Zhang, Salvatore Romeo, and Saab Mansour. 2021. Using optimal transport as alignment objective for fine-tuning multilingual contextualized embeddings. In *EMNLP*.
Evlampios E. Apostolidis, E. Adamantidou, Alexandros I. Metsai, Vasileios Mezaris, and I. Patras. 2021.
Video summarization using deep neural networks: A
survey. *Proceedings of the IEEE*, 109:1838–1863.
Yash Kumar Atri, Shraman Pramanick, Vikram Goyal, and Tanmoy Chakraborty. 2021. See, hear, read: Leveraging multimodality with guided attention for abstractive text summarization. *ArXiv*,
abs/2105.09601.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. *CoRR*, abs/1409.0473.
David M. Blei, A. Ng, and Michael I. Jordan. 2003.
Latent dirichlet allocation. *J. Mach. Learn. Res.*,
3:993–1022.
Brandon Castellano. 2021. Intelligent scene cut detection and video splitting tool. https://bcastell.
com/projects/PySceneDetect/.
Harr Chen, S. R. K. Branavan, Regina Barzilay, and David R. Karger. 2009. Global models of document structure using latent permutations. In *NAACL*.
Jingqiang Chen and Hai Zhuge. 2018. Abstractive textimage summarization using multi-modal attentional hierarchical rnn. In *EMNLP*, pages 4046–4056.
Liqun Chen, Zhe Gan, Y. Cheng, Linjie Li, L. Carin, and Jing jing Liu. 2020a. Graph optimal transport for cross-domain alignment. *ICML*.
Liqun Chen, Yizhe Zhang, Ruiyi Zhang, Chenyang Tao, Zhe Gan, Haichao Zhang, Bai Li, Dinghan Shen, Changyou Chen, and Lawrence Carin. 2019. Improving sequence-to-sequence learning via optimal transport. *ArXiv*, abs/1901.06283.
Shixing Chen, Xiaohan Nie, David D. Fan, Dongqing Zhang, Vimal Bhat, and Raffay Hamid. 2021. Shot contrastive self-supervised learning for scene boundary detection. *2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pages 9791–9800.
Shizhe Chen, Yida Zhao, Qin Jin, and Qi Wu. 2020b.
Fine-grained video-text retrieval with hierarchical graph reasoning. *2020 IEEE/CVF Conference on* Computer Vision and Pattern Recognition (CVPR),
pages 10635–10644.
Xiuying Chen, Shen Gao, Chongyang Tao, Yan Song, Dongyan Zhao, and Rui Yan. 2018. Iterative document representation learning towards summarization with polishing. In *EMNLP*.
Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In ACL, pages 484–494.
Michael Gygli, Helmut Grabner, Hayko Riemenschneider, and Luc Van Gool. 2014. Creating summaries from user videos. In *ECCV*.
Kyunghyun Cho, Bart van Merrienboer, Çaglar Gülçehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In *EMNLP*.
Li Haopeng, Ke Qiuhong, Gong Mingming, and Zhang Rui. 2022. Video summarization based on video-text modelling.
Ahmed Hassanien, Mohamed A. Elgharib, Ahmed A. S.
Seleim, Mohamed Hefeeda, and Wojciech Matusik. 2017. Large-scale, fast and accurate shot boundary detection through spatio-temporal convolutional neural networks. *ArXiv*, abs/1705.03281.
Marco Cuturi. 2013. Sinkhorn distances: Lightspeed computation of optimal transport. *Advances in neural information processing systems*, 26:2292–2300.
Eman Hato and Matheel Emaduldeen Abdulmunem.
2019. Fast algorithm for video shot boundary detection using surf features. *2019 2nd Scientific Conference of Computer Sciences (SCCS)*, pages 81–86.
Sandra Eliza Fontes De Avila, Ana Paula Brandão Lopes, Antonio da Luz Jr, and Arnaldo de Albuquerque Araújo. 2011. Vsumm: A mechanism designed to produce static video summaries and a novel evaluation method. *Pattern Recognition Letters*, 32(1):56–68.
J. Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT.
Jianfeng Dong, Xirong Li, Chaoxi Xu, Xun Yang, Gang Yang, Xun Wang, and Meng Wang. 2021. Dual encoding for video retrieval by text. *IEEE transactions* on pattern analysis and machine intelligence, PP.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural Computation*, 9:1735–
1780.
Jiali Duan, Liqun Chen, Son Thai Tran, Jinyu Yang, Yi Xu, Belinda Zeng, and Trishul M. Chilimbi. 2022.
Multi-modal alignment using representation codebook. *ArXiv*, abs/2203.00048.
Or Honovich, Roee Aharoni, Jonathan Herzig, Hagai Taitelbaum, Doron Kukliansy, Vered Cohen, Thomas Scialom, Idan Szpektor, Avinatan Hassidim, and Y. Matias. 2022. True: Re-evaluating factual consistency evaluation. In *Workshop on Documentgrounded Dialogue and Conversational Question Answering*.
Xiyan Fu, Jun Wang, and Zhenglu Yang. 2020. Multimodal summarization for video-containing documents. *ArXiv*, abs/2009.08018.
Chiori Hori et al. 2019. End-to-end audio visual sceneaware dialog using multimodal attention-based video features. In *ICASSP*, pages 2352–2356.
Xiyan Fu, Jun Wang, and Zhenglu Yang. 2021. Mm-avs:
A full-scale dataset for multi-modal summarization.
In *NAACL*.
Wan Ting Hsu, Chieh-Kai Lin, Ming-Ying Lee, Kerui Min, Jing Tang, and Min Sun. 2018. A unified model for extractive and abstractive summarization using inconsistency loss. *ArXiv*, abs/1805.06266.
Goran Glavas, Federico Nanni, and Simone Paolo Ponzetto. 2016. Unsupervised text segmentation using semantic relatedness graphs. In **SEMEVAL*.
Shruti Jadon and Mahmood Jasim. 2020. Unsupervised video summarization framework using keyframe extraction and video skimming. In *ICCCA*, pages 140–
145.
Alex Graves and Jürgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional lstm and other neural network architectures. *Neural networks*,
18 5-6:602–10.
Zhong Ji, Kailin Xiong, Yanwei Pang, and Xuelong Li. 2020. Video summarization with attention-based encoder–decoder networks. IEEE Transactions on Circuits and Systems for Video Technology, 30:1709–
1717.
Dalu Guo, Chang Xu, and Dacheng Tao. 2019. Imagequestion-answer synergistic network for visual dialog.
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10426–10435.
Johannes Klicpera, Marten Lienen, and Stephan Günnemann. 2021. Scalable optimal transport in high dimensions for graph distances, embedding alignment, and more. *ArXiv*, abs/2107.06876.
Michael U Gutmann and Aapo Hyvärinen. 2010. Noisecontrastive estimation: A new estimation principle for unnormalized statistical models. In *AISTATS*.
Freddy Y. Y. Choi. 2000. Advances in domain independent linear text segmentation. In *ANLP*.
Kaiming He, X. Zhang, Shaoqing Ren, and Jian Sun.
2016. Deep residual learning for image recognition.
2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778.
Karl Moritz Hermann, Tomás Kociský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In *NIPS*.
Rémi Flamary et al. 2021. Pot: Python optimal transport.
Omri Koshorek, Adir Cohen, Noam Mor, Michael Rotman, and Jonathan Berant. 2018. Text segmentation as a supervised learning task. In *NAACL*.
Michal Lukasik, Boris Dadachev, Kishore Papineni, and Gonçalo Simões. 2020. Text segmentation by cross segment attention. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 4707–4716, Online. Association for Computational Linguistics.
Hilde Kuehne, Alexander Richard, and Juergen Gall.
2020. A hybrid rnn-hmm approach for weakly supervised temporal action segmentation. *IEEE Transactions on Pattern Analysis and Machine Intelligence*,
42:765–779.
Colin S. Lea, Michael D. Flynn, René Vidal, Austin Reiter, and Gregory Hager. 2017. Temporal convolutional networks for action segmentation and detection.
In *CVPR*.
John Lee, Max Dabagia, Eva L. Dyer, and Christopher J. Rozell. 2019. Hierarchical optimal transport for multimodal distribution alignment. *ArXiv*,
abs/1906.11768.
Li Mingzhe, Xiuying Chen, Shen Gao, Zhangming Chan, Dongyan Zhao, and Rui Yan. 2020. Vmsmo:
Learning to generate multimodal summary for videobased news articles. In *EMNLP*.
Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017.
Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. In *AAAI*.
Kuang-Huei Lee, Xi Chen, Gang Hua, Houdong Hu, and Xiaodong He. 2018. Stacked cross attention for image-text matching. *ArXiv*, abs/1803.08024.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart:
Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In ACL.
Haoran Li, Junnan Zhu, Cong Ma, Jiajun Zhang, and Chengqing Zong. 2017. Multi-modal summarization for asynchronous collection of text, image, audio and video. In *EMNLP*.
J. Li, Aixin Sun, and Shafiq R. Joty. 2018. Segbot: A
generic neural text segmentation model with pointer network. In *IJCAI*.
Mayu Otani, Yuta Nakashima, Esa Rahtu, Janne Heikkilä, and Naokazu Yokoya. 2016. Video summarization using deep semantic features. *ArXiv*,
abs/1609.08758.
Jeffrey Pennington, Richard Socher, and Christopher D.
Manning. 2014. Glove: Global vectors for word representation. In *EMNLP*.
Yair Poleg, Chetan Arora, and Shmuel Peleg. 2014.
Temporal segmentation of egocentric videos. *2014* IEEE Conference on Computer Vision and Pattern Recognition, pages 2537–2544.
Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. 2014. Microsoft coco:
Common objects in context. In *ECCV*.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. In *ICML*.
Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. *ArXiv*, abs/1910.10683.
W. Lu, Yiqiang Chen, Jindong Wang, and Xin Qin. 2021.
Cross-domain activity recognition via substructural optimal transport. *Neurocomputing*.
Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In *EMNLP*.
Tomas Mikolov, Wen tau Yih, and Geoffrey Zweig.
2013. Linguistic regularities in continuous space word representations. In *NAACL*.
Ramesh Nallapati, Bowen Zhou, Cícero Nogueira dos Santos, Çaglar Gülçehre, and Bing Xiang. 2016.
Abstractive text summarization using sequence-tosequence rnns and beyond. In *CoNLL*.
Shashi Narayan, Shay B Cohen, and Mirella Lapata.
2018. Ranking sentences for extractive summarization with reinforcement learning. In *NAACL*, pages 1747–1759.
Ana Sofia Nicholls. 2021. A neural model for text segmentation.
Xiangpeng Li, Jingkuan Song, Lianli Gao, Xianglong Liu, Wenbing Huang, Xiangnan He, and Chuang Gan. 2019. Beyond rnns: Positional self-attention with co-attention for video question answering. In AAAI.
Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In *ACL 2004*.
Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 3730–3740, Hong Kong, China. Association for Computational Linguistics.
Anyi Rao, Linning Xu, Yu Xiong, Guodong Xu, Qingqiu Huang, Bolei Zhou, and Dahua Lin. 2020. A local-to-global approach to multi-modal movie scene segmentation. *2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pages 10143–10152.
Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. *IEEE Transactions on Pattern Analysis and Machine Intelligence*,
39:1137–1149.
Martin Riedl and Chris Biemann. 2012. Topictiling: A
text segmentation algorithm based on lda. In ACL
2012.
Shagan Sah, Sourabh Kulhare, Allison Gray, Subhashini Venugopalan, Emily Tucker Prud'hommeaux, and Raymond W. Ptucha. 2017. Semantic text summarization of long videos. *2017 IEEE Winter Conference on Applications of Computer Vision (WACV)*,
pages 989–997.
M. Saquib Sarfraz et al. 2021. Temporally-weighted hierarchical clustering for unsupervised action segmentation. *2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pages 11220–11229.
A. See, Peter J. Liu, and Christopher D. Manning.
2017a. Get to the point: Summarization with pointergenerator networks. *ArXiv*, abs/1704.04368.
Abigail See, Peter J. Liu, and Christopher D. Manning.
2017b. Get to the point: Summarization with pointergenerator networks. In ACL.
Xiaoyu Shen, Yang Zhao, Hui Su, and Dietrich Klakow.
2019. Improving latent alignment in text summarization by generalizing the pointer generator. In EMNLP.
Panagiotis Sidiropoulos, Vasileios Mezaris, Yiannis Kompatsiaris, Hugo Meinedo, Miguel M. F. Bugalho, and Isabel Trancoso. 2011. Temporal video segmentation to scenes using high-level audiovisual features.
IEEE Transactions on Circuits and Systems for Video Technology, 21:1163–1177.
Hajar Sadeghi Sokeh, Vasileios Argyriou, Dorothy Ndedi Monekosso, and Paolo Remagnino.
2018. Superframes, a temporal video segmentation.
2018 24th International Conference on Pattern Recognition (ICPR), pages 566–571.
Yale Song, Jordi Vallmitjana, Amanda Stent, and Alejandro Jaimes. 2015. Tvsum: Summarizing web videos using titles. *2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pages 5179–5187.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014.
Sequence to sequence learning with neural networks.
In *NIPS*.
Jiwei Tan, Xiaojun Wan, and Jianguo Xiao. 2017.
Abstractive document summarization with a graphbased attentional neural model. In ACL, pages 1171–
1181.
Chongyang Tao, Wei Wu, Can Xu, Wenpeng Hu, Dongyan Zhao, and Rui Yan. 2019. Multirepresentation fusion network for multi-turn response selection in retrieval-based chatbots. Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining.
Atousa Torabi, Niket Tandon, and Leonid Sigal.
2016. Learning language-visual embedding for movie understanding with natural-language. *ArXiv*,
abs/1609.08124.
Aäron van den Oord, Yazhe Li, and Oriol Vinyals. 2018.
Representation learning with contrastive predictive coding. *ArXiv*, abs/1807.03748.
Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio', and Yoshua Bengio. 2018. Graph attention networks. *ArXiv*,
abs/1710.10903.
Cédric Villani. 2003. Topics in optimal transportation.
Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, and Luc Van Gool. 2019.
Temporal segment networks for action recognition in videos. *IEEE Transactions on Pattern Analysis and* Machine Intelligence, 41:2740–2755.
Qinxin Wang, Haochen Tan, Sheng Shen, Michael W.
Mahoney, and Zhewei Yao. 2020a. An effective framework for weakly-supervised phrase grounding.
ArXiv, abs/2010.05379.
Yizhong Wang, Sujian Li, and Jingfeng Yang. 2018. Toward fast and accurate neural discourse segmentation.
In *EMNLP*.
Zhenzhi Wang, Ziteng Gao, Limin Wang, Zhifeng Li, and Gangshan Wu. 2020b. Boundary-aware cascade networks for temporal action segmentation. In ECCV.
Huawei Wei, Bingbing Ni, Yichao Yan, Huanyu Yu, Xiaokang Yang, and Chen Yao. 2018. Video summarization via semantic attended networks. In *AAAI*.
Michael Wray, Diane Larlus, Gabriela Csurka, and Dima Damen. 2019. Fine-grained action retrieval through multiple parts-of-speech embeddings. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 450–459.
Hao Wu, Jiayuan Mao, Yufeng Zhang, Yuning Jiang, Lei Li, Weiwei Sun, and Wei-Ying Ma. 2019. Unified visual-semantic embeddings: Bridging vision and language with structured meaning representations.
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6602–6611.
Yuxiang Wu and Baotian Hu. 2018. Learning to extract coherent summary via deep reinforcement learning. In *AAAI*, pages 5602–5609.
Shuwen Xiao, Zhou Zhao, Zijian Zhang, Xiaohui Yan, and Min Yang. 2020. Convolutional hierarchical attention network for query-focused video summarization. *arXiv preprint arXiv:2002.03740*.
Yuexiang Xie, Fei Sun, Yang Deng, Yaliang Li, and Bolin Ding. 2021. Factual consistency evaluation for text summarization via counterfactual estimation. In Conference on Empirical Methods in Natural Language Processing.
Ke Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S.
Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In *ICML*.
Jianwei Yang, Yonatan Bisk, and Jianfeng Gao. 2021.
Taco: Token-aware cascade contrastive learning for video-text alignment. *2021 IEEE/CVF International Conference on Computer Vision (ICCV)*, pages 11542–11552.
Ting Yao, Yingwei Pan, Yehao Li, and Tao Mei. 2018.
Exploring visual relationship for image captioning.
In *ECCV*.
Youngjae Yu, Hyungjin Ko, Jongwook Choi, and Gunhee Kim. 2017. End-to-end concept word detection for video captioning, retrieval, and question answering. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3261–3269.
S. Yuan, K. Bai, Liqun Chen, Yizhe Zhang, Chenyang Tao, C. Li, Guoyin Wang, R. Henao, and L. Carin.
2020. Weakly supervised cross-domain alignment with optimal transport. *BMVC*.
Yitian Yuan, Tao Mei, Peng Cui, and Wenwu Zhu. 2019.
Video summarization by learning deep side semantic embedding. *IEEE Transactions on Circuits and* Systems for Video Technology, 29:226–237.
Amir Zadeh, Minghai Chen, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2017. Tensor fusion network for multimodal sentiment analysis.
In *Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing*, pages 1103–1114, Copenhagen, Denmark. Association for Computational Linguistics.
Bowen Zhang, Hexiang Hu, and Fei Sha. 2018. Crossmodal and hierarchical modeling of video and text.
In *ECCV*.
Haoxin Zhang, Zhimin Li, and Qinglin Lu. 2021. Better learning shot boundary detection via multi-task. *Proceedings of the 29th ACM International Conference* on Multimedia.
Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. 2019a. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization.
ArXiv, abs/1912.08777.
Litian Zhang, Xiaoming Zhang, Junshu Pan, and Feiran Huang. 2022. Hierarchical cross-modality semantic correlation learning model for multimodal summarization. In *AAAI*.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q.
Weinberger, and Yoav Artzi. 2019b. Bertscore:
Evaluating text generation with bert. *ArXiv*,
abs/1904.09675.
Xingxing Zhang, Furu Wei, and Ming Zhou. 2019c. HIBERT: Document level pre-training of hierarchical bidirectional transformers for document summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5059–5069, Florence, Italy. Association for Computational Linguistics.
Yue Zhao, Yuanjun Xiong, Limin Wang, Zhirong Wu, Xiaoou Tang, and Dahua Lin. 2017. Temporal action detection with structured segment networks. *2017* IEEE International Conference on Computer Vision
(ICCV), pages 2933–2942.
Feng Zhou, Fernando De la Torre, and Jessica K. Hodgins. 2013. Hierarchical aligned cluster analysis for temporal clustering of human motion. *IEEE Transactions on Pattern Analysis and Machine Intelligence*,
35:582–596.
Kaiyang Zhou, Yu Qiao, and Tao Xiang. 2018a. Deep reinforcement learning for unsupervised video summarization with diversity-representativeness reward.
In *AAAI*, pages 7582–7589.
Kaiyang Zhou, T. Xiang, and A. Cavallaro. 2018b.
Video summarisation by classification with deep reinforcement learning. In *BMVC*.
Xiangyang Zhou, Lu Li, Daxiang Dong, Yi Liu, Ying Chen, Wayne Xin Zhao, Dianhai Yu, and Hua Wu.
2018c. Multi-turn response selection for chatbots with deep attention matching network. In ACL.
Jiacheng Zhu, Aritra Guha, Mengdi Xu, Yingchen Ma, Rayleigh Lei, Vincenzo Loffredo, XuanLong Nguyen, and Ding Zhao. 2021. Functional optimal transport: Mapping estimation and domain adaptation for functional data. *ArXiv*, abs/2102.03895.
Junnan Zhu, Haoran Li, Tianshan Liu, Yu Zhou, Jiajun Zhang, and Chengqing Zong. 2018. Msmo: Multimodal summarization with multimodal output. In EMNLP.
Junnan Zhu, Yu Zhou, Jiajun Zhang, Haoran Li, Chengqing Zong, and Changliang Li. 2020. Multimodal summarization with guidance of multimodal reference. In *AAAI*.
## A Optimal Transport (Ot) Basis
OT is the problem of transporting mass between two discrete distributions supported on latent feature space X . Let µ = {xi, µi}
n i=1 and v = yj, vj m j=1 be the discrete distributions of interest, where xi, yj ∈ X denotes the spatial locations and µi, vj , respectively, denoting the non-negative masses. Without loss of generality, we assume Pi µi =Pj vj = 1. π ∈ R
n×m
+ is a valid transport plan if its row and column marginals match µ and v, respectively, which is Pi P
πij = vj and j πij = µi. Intuitively, π transports πij units of mass at location xito new location yj
. Such transport plans are not unique, and one often seeks a solution π∗ ∈ Π(µ, v) that is most preferable in other ways, where Π(µ, v) denotes the set of all viable transport plans. OT finds a solution that is most cost effective w.r.t. cost function C(x, y):
$\mathcal{D}(\boldsymbol{\mu},\boldsymbol{v})=\sum_{ij}\pi_{ij}^{*}C\left(\boldsymbol{x}_{i},\boldsymbol{y}_{j}\right)=\inf_{\pi\in\Pi(\mu,v)}\sum_{ij}\pi_{ij}C\left(\boldsymbol{x}_{i},\boldsymbol{y}_{j}\right).$
where D(µ, v) is known as OT distance. D(µ, v)
minimizes the transport cost from µ to v w.r.t.
C(x, y). When C(x, y) defines a distance metric on X , and D(µ, v) induces a distance metric on the space of probability distributions supported on X , it becomes the Wasserstein Distance (WD).
## B More Related Work
Optimal Transport OT studies the geometry of probability spaces (Villani, 2003), a formalism for finding and quantifying mass movement from one probability distribution to another. OT defines the Wasserstein metric between probability distributions, revealing a canonical geometric structure with rich properties to be exploited. The earliest contribution to OT originated from Monge in the eighteenth century. Kantorovich rediscovered it under a different formalism, namely the Linear Programming formulation of OT. With the development of scalable solvers, OT is widely applied to many real-world problems and applications (Flamary et al., 2021; Chen et al., 2020a; Yuan et al.,
2020; Zhu et al., 2021; Klicpera et al., 2021; Alqahtani et al., 2021; Lee et al., 2019; Chen et al., 2019; Duan et al., 2022).
Video Summarization Video summarization aims at generating a short synopsis that summarizes the video content by selecting the most informative and vital parts. The summary usually contains a set of representative video keyframes or video key-fragments that have been stitched in chronological order to form a shorter video. The former type is known as video storyboard, and the latter one is known as video skim (Apostolidis et al.,
2021). Traditional video summarization methods only use visual information, extracting important frames to represent the video content. For instance, Gygli et al. (2014); Jadon and Jasim (2020) generated video summaries by selecting keyframes using SumMe and TVSum datasets. Some categorydriven or supervised training approaches were proposed to generate video summaries with videolevel labels (Song et al., 2015; Zhou et al., 2018a; Xiao et al., 2020; Zhou et al., 2018b).
Textual Summarization Textual summarization takes textual metadata, i.e., documents, articles, tweets, etc, as input, and generates textual summaries, in two directions: abstractive summarization and extractive summarization. Abstractive methods select words based on semantic understanding, and even the words may not appear in the source (Tan et al., 2017; See et al., 2017b). Extractive methods attempt to summarize language by selecting a subset of words that retain the most critical points, which weights the essential part of sentences to form the summary (Narayan et al.,
2018; Wu and Hu, 2018). Recently, the fine-tuning approaches have improved the quality of generated summaries based on pre-trained language models in a wide range of tasks (Liu and Lapata, 2019; Zhang et al., 2019c).
Video Temporal Segmentation Video temporal segmentation aims at generating small video segments based on the content or topics of the video, which is a fundamental step in content-based video analysis and plays a crucial role in video analysis.
Previous work mostly formed a classification problem to detect the segment boundaries in the supervised manner (Sidiropoulos et al., 2011; Zhou et al.,
2013; Poleg et al., 2014; Sokeh et al., 2018; Aakur and Sarkar, 2019). Recently, unsupervised methods have also been explored (Gygli et al., 2014; Song et al., 2015). Temporal segmentation of actions in videos has also been widely explored in previous works (Wang et al., 2019; Zhao et al., 2017; Lea et al., 2017; Kuehne et al., 2020; Sarfraz et al.,
2021; Wang et al., 2020b). Video shot boundary detection and scene detection tasks are also relevant and has been explored in many previous studies
(Hassanien et al., 2017; Hato and Abdulmunem, 2019; Rao et al., 2020; Chen et al., 2021; Zhang et al., 2021), which aim at finding the visual change or scene boundaries.
Textual Segmentation Textual segmentation aim at dividing the text into coherent, contiguous, and semantically meaningful segments (Nicholls, 2021). These segments can be composed of words, sentences, or topics, where the types of text include blogs, articles, news, video transcript, etc. Previous work focused on heuristics-based methods
(Koshorek et al., 2018; Choi, 2000), LDA-based modeling algorithms (Blei et al., 2003; Chen et al.,
2009), or Bayesian methods (Chen et al., 2009; Riedl and Biemann, 2012). Recent developments in NLP developed large models to learn huge amount of data in the supervised manner (Mikolov et al.,
2013; Pennington et al., 2014; Li et al., 2018; Wang et al., 2018). Besides, unsupervised or weaklysupervised methods has also drawn much attention
(Glavas et al., 2016; Lukasik et al., 2020).
## C Baselines C.1 Baselines For The Vmsmo Dataset
For the VMSMO dataset, we compare with multimodal summarization baselines and textual summarization baselines:
Multimodal summarization baselines:
Synergistic (Guo et al., 2019): Guo et al. (2019)
proposed a image-question-answer synergistic network to value the role of the answer for precise visual dialog, which is able to jointly learn the representation of the image, question, answer, and history in a single step.
PSAC (Li et al., 2019): The Positional SelfAttention with Coattention (PSAC) model adopted positional self-attention block to model the data dependencies and video-question co-attention to help attend to both visual and textual information.
MSMO (Zhu et al., 2018): MSMO was the first model on producing multimodal output as summarization results, which adopted the pointergenerator network, added attention to text and images when generating textual summary, and used visual coverage by the sum of visual attention distributions to select pictures.
MOF (Zhu et al., 2020): Zhu et al. (2020) proposed a multimodal objective function with the guidance of multimodal reference to use the loss from the summary generation and the image selection to solve the modality-bias problem.
DIMS (Mingzhe et al., 2020): DIMS is a dual interaction module and multimodal generator, where conditional self-attention mechanism is used to capture local semantic information within video, and the global-attention mechanism is applied to handle the semantic relationship between news text and video from a high level.
Textual summarization baselines:
Lead (Nallapati et al., 2017): The Lead method simply selects the first sentence of article/document as the textual summary.
TexkRank (Mihalcea and Tarau, 2004): TexkRank is a graph-based extractive summarization method which adds sentences as nodes and uses edges to weight similarity.
PG (See et al., 2017b): PG is a hybrid pointergenerator model with coverage, which copied words via pointing, and generated words from a fixed vocabulary with attention.
Unified (Hsu et al., 2018): The Unified model combined the strength of extractive and abstractive summarization, where a sentence-level attention is used to modulate the word-level attention and an inconsistency loss function is introduced to penalize the inconsistency between two levels of attentions.
GPG (Shen et al., 2019): Generalized Pointer Generator (GPG) replaced the hard copy component with a more general soft "editing" function, which learns a relation embedding to transform the pointed word into a target embedding.
## C.2 Baselines For The Daily Mail And Cnn Datasets
For Daily Mail and CNN datasets, we have multimodal baselines, video summarization baselines, and textual summarization baselines: Multimodal summarization baselines:
MSMO (Zhu et al., 2018): MSMO was the first model on producing multimodal output as summarization results, which adopted the pointergenerator network, added attention to text and images when generating textual summary, and used visual coverage by the sum of visual attention distributions to select pictures.
Img+Trans (Hori et al., 2019): (Hori et al., 2019)
applied multi-modal video features including video frames, transcripts, and dialog context for dialog generation.
TFN (Zadeh et al., 2017): Tensor Fusion Network
(TFN) models intra-modality and inter-modality dynamics for multimodal sentiment analysis which explicitly represents unimodal, bimodal, and trimodal interactions between behaviors.
HNNattTI (Chen and Zhuge, 2018): HNNattTI
aligned the sentences and accompanying images by using attention mechanism.
M2SM (Fu et al., 2021, 2020): M2SM is a multimodal summarization model with a bi-stream summarization strategy for training by sharing the ability to refine significant information from long materials in text and video summarization.
Video summarization baselines:
VSUMM (De Avila et al., 2011): VSUMM is a methodology for the production of static video summaries, which extracted color features from video frames and adopted k-means for clustering.
DR-DSN (Zhou et al., 2018a): Zhou et al. (2018a)
formulated video summarization as a sequential decision making process and developed a deep summarization network (DSN) to summarize videos.
DSN predicted a probability for each frame, which indicates the likelihood of a frame being selected, and then takes actions based on the probability distributions to select frames to from video summaries.
Textual summarization baselines:
Lead3 (See et al., 2017a): Similar to Lead, Lead3 means picking the first three sentences as the summary result. NN-SE (Cheng and Lapata, 2016): NN-SE is a general framework for single-document summarization composed of a hierarchical document encoder and an attention-based extractor.
T5 (Raffel et al., 2019): T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format. T5 works well on a variety of tasks out-of-the-box by prepending a different prefix to the input corresponding to each task, inluding summarization.
Pegasus (Zhang et al., 2019a): Pre-training with Extracted Gap-sentences for Abstractive SUmmarization Sequence-to-sequence models (PEGASUS) uses self-supervised objective Gap Sentences Generation (GSG) to train a transformer encoderdecoder model.
BART (Lewis et al., 2020): BART is a sequenceto-sequence model trained as a denoising autoencoder, and showed great performance a variety of text summarization datasets.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 7
✗ A2. Did you discuss any potential risks of your work?
To the best of our knowledge, we do not foresee any harmful uses of this study
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4
✓ B1. Did you cite the creators of artifacts you used?
Section 4
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section 4
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 4 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Not applicable. Left blank.
## C ✓ **Did You Run Computational Experiments?** Section 4 And Section 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 4 and Section 5 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4 and Section 5
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4 and Section 5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4 and Section 5 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
meng-etal-2023-general | General-to-Specific Transfer Labeling for Domain Adaptable Keyphrase Generation | https://aclanthology.org/2023.findings-acl.102 | Training keyphrase generation (KPG) models require a large amount of annotated data, which can be prohibitively expensive and often limited to specific domains. In this study, we first demonstrate that large distribution shifts among different domains severely hinder the transferability of KPG models. We then propose a three-stage pipeline, which gradually guides KPG models{'} learning focus from general syntactical features to domain-related semantics, in a data-efficient manner. With domain-general phrase pre-training, we pre-train Sequence-to-Sequence models with generic phrase annotations that are widely available on the web, which enables the models to generate phrases in a wide range of domains. The resulting model is then applied in the Transfer Labeling stage to produce domain-specific pseudo keyphrases, which help adapt models to a new domain. Finally, we fine-tune the model with limited data with true labels to fully adapt it to the target domain. Our experiment results show that the proposed process can produce good quality keyphrases in new domains and achieve consistent improvements after adaptation with limited in-domain annotated data. All code and datasets are available at \url{https://github.com/memray/OpenNMT-kpg-release}. | # General-To-Specific Transfer Labeling For Domain Adaptable Keyphrase Generation
Rui Meng1, Tong Wang2, Xingdi Yuan2, Yingbo Zhou1**, Daqing He**3 1Salesforce Research, 2Microsoft Research, Montréal, 3University of Pittsburgh [email protected]
## Abstract
Training keyphrase generation (KPG) models require a large amount of annotated data, which can be prohibitively expensive and often limited to specific domains. In this study, we first demonstrate that large distribution shifts among different domains severely hinder the transferability of KPG models. We then propose a three-stage pipeline, which gradually guides KPG models' learning focus from general syntactical features to domain-related semantics, in a data-efficient manner. With domain-general phrase pre-training, we pre-train Sequence-toSequence models with generic phrase annotations that are widely available on the web, which enables the models to generate phrases in a wide range of domains. The resulting model is then applied in the Transfer Labeling stage to produce domain-specific pseudo keyphrases, which help adapt models to a new domain. Finally, we fine-tune the model with limited data with true labels to fully adapt it to the target domain. Our experiment results show that the proposed process can produce good quality keyphrases in new domains and achieve consistent improvements after adaptation with limited in-domain annotated data12.
## 1 Introduction
The last decade has seen major advances in deep neural networks and their applications in natural language processing. Particularly, the subarea of neural keyphrase generation (KPG) has made great progress with the aid of large language models (Lewis et al., 2020) and large-scale datasets (Meng et al., 2017a; Yuan et al., 2020a). Due to the high cost of data annotation, most, if not all, of the large-scale KPG datasets are constructed by scraping domain-specific data from the internet. For example, Meng et al. collected more 1All code and datasets are available at https://github.
com/memray/OpenNMT-kpg-release.
2The research was mostly accomplished when the first author was at the University of Pittsburgh.
than 500k scientific papers of which keyphrases are provided by paper authors. Gallina et al. crawled about 280k news articles from New York Times with editor-assigned keyphrases. Following Gururangan et al. (2020), we use "domain" to denote a distribution over language characterizing a given topic or genre. Specifically in KPG tasks, domains can be "computer science papers", "online forum articles", "news" etc.
Despite recent neural models can to some extent learn KPG skills from existing datasets (Meng et al.,
2021a; Gallina et al., 2019; Yuan et al., 2020a), because most of these datasets are limited to a single domain, it remains unclear how the trained models can be transferred to new domains, especially in a real-world setting. Some existing studies claim their models demonstrate a certain degree of transferability across domains. For instance, Meng et al. show that models trained with scientific paper datasets can generate decent quality keyphrases from news articles, in a zero-shot manner. Xiong et al. present that training with open-domain web documents can improve the model's generalizability. However, there is a lack of systematic studies on domain transferring KPG, and thus the observations reported in prior works do not support a comprehensive understanding of this topic.
To investigate this question, we conduct an empirical study on how well KPG models can transfer across domains. We utilize commonly used KPG
datasets covering four different domains (Science, News, Web, Q&A). We first show experiment results (§2.2) that suggest models trained with data in a specific domain do not generalize well to other domains, even in cases where they are initialized with pre-trained language models such as BART (Lewis et al., 2020). We also visualize the domain gaps among datasets by inspecting their phrase overlaps. Keyphrases often represent the specific knowledge of a domain and this may result in the failure of transferring models across domains.
The empirical study motivates us to explore novel methods that can help models possess the ability of generating high quality keyphrases and more importantly, can quickly adapt to a new domain with limited amount of annotation. We propose a three-stage training pipeline, in which we gradually guide a KPG model's learning focus from general syntactical features to domain-specific information. First, we pre-train the model using community labeled phrases in Wikipedia (§3.1). Then, we use a novel self-training-based domain adaptation method, namely Transfer Labeling, to adapt the model to the new domain. Note this domain adaptation method does not require ground-truth labels, we leverage the model pre-trained in the previous stage to generate pseudo-labels for training itself. Finally, we use a limited amount of in-domain data with true annotations to fully adapt the model to the new domain. We report extensive experiment results and thorough analyses to demonstrate the effectiveness of the proposed methods.
## 2 Background And Motivation 2.1 Background Keyphrase Generation (Kpg) Typically, The
task is to generate a set of keyphrases P =
{p1*, . . . , p*n} given a source text t. Semantically, these phrases summarize and highlight important information contained in t, while syntactically, each keyphrase may consist of multiple words and serve a component of a sentence. Depending on a particular domain the source text belongs to (e.g., scientific paper, news) and downstream applications (e.g., article classification, information retrieval), the extent to which a phrase is important can vary, i.e. the criteria of keyphrase can be different in various datasets. Following Meng et al., we denote a keyphrase as *present* if it is a sub-string of the source text, or as *absent* otherwise. We adopt the One2Seq training paradigm (Yuan et al., 2020a). Given a source text t and a set of ground-truth keyphrases P,
we concatenate all ground-truth keyphrases into a single string: <bos>p1<sep> *· · ·* <sep>pn<eos>,
where <bos>, <sep>, and <eos> are special tokens.
This string is paired with t to train a sequence-tosequence model. We refer readers to (Meng et al.,
2021a) for more details in common KPG practice.
## 2.2 Domain Gap In Kpg Tasks
Previous studies have touched on how much KPG
models can transfer their skills when applied across domains (Meng et al., 2017a; Xiong et al., 2019a),
but not in a systematic way. In this subsection, we revisit this topic and try to ground our discussion with thorough empirical results. Specifically, we consider four broadly used datasets in the KPG community: KP20k (Meng et al., 2017a)
contains scientific papers in computer science; OpenKP (Xiong et al., 2019a) is a collection of web documents; KPTimes (Gallina et al., 2019) contains a set of news articles; StackEx (Yuan et al., 2020a)
are community-based Q&A posts collected from StackExchange. All the four datasets are large enough to train KPG models from scratch. At the same time, the documents in these datasets cover a wide spectrum of domains. We report statistics of these four datasets in appendix Table 7.
![1_image_0.png](1_image_0.png)
On the model dimension, we consider two model architectures: TF-Rand, a 6-layer encoder-decoder Transformer with random initialization (Vaswani et al., 2017); and TF-Bart, a 12-layer Transformer initialized with BART-large (Lewis et al., 2020).
We train the two models on the four datasets individually and subsequently evaluate all the resulting eight checkpoints on the test split of each dataset. As shown in Figure 1, in-domain scores
(i.e., trained and tested on the same datasets) are placed along the diagonal, the other elements represent cross-domain testing scores. We observe that both models exhibit a large gap between in-domain and out-of-domain performance. Even though the initialization with BART can alleviate the gap to a certain degree, the difference remains significant.
Keyphrases are typically concepts or entities that represent important information of a document.
The collection of keyphrases in a domain can also be deemed as a representation of domain knowl-
![2_image_0.png](2_image_0.png)
![2_image_1.png](2_image_1.png)
edge. Therefore, to better investigate the domain gaps, we further look into the keyphrase overlap between datasets. As shown in Table 1, only a small proportion of phrases are in common between the four domains. We provide a T-SNE visualization of a set of phrases sampled from these dataset in appendix Figure 8, the phrase clusters present clear domain gaps in their semantic space.
We hypothesize that the domain specific traits in annotated data make models difficult to learn keyphrase patterns in a domain-general sense. Furthermore, humans may label keyphrases under an application-oriented consideration and thus a onesize-fits-all standard for keyphrase annotation may not exist. For example, on StackExchange, users tend to assign common tags to better expose their questions to community experts, resulting in a small keyphrase vocabulary size. On the contrary, the topics are more specialized in scientific papers and authors would emphasize novel concepts in their studies. This may explain the large number of unique keyphrases found in KP20k.
## 2.3 Disentanglement Of "Key" And "Phrase"
In §2.2, we empirically show that KPG models do not adequately transfer to out-of-domain data, even initialized with pre-trained language models. However, data annotation for every single domain or application does not seem practical either, due to the high cost and the potential need of domainspecific annotators. Inspired by some prior works, we attempt to disentangle the important properties of a keyphrase as *keyness* (Bondi and Scott, 2010; Gabrielatos, 2018) and *phraseness* (Tomokiyo and Hurst, 2003). We believe a proficient KPG model should generate outputs that satisfy both properties.
Keyness refers to the attribute that how well a phrase represents important information of a piece of text. The degree of keyness can be document dependent and domain dependent. For example,
"cloud" is a common keyphrase in Computer Science papers, it is, in most cases, less likely to be important in Meteorology studies. Due to its high dependence on domain-specific information, we believe that the knowledge/notion of keyness is more likely to be acquired from in-domain data.
Phraseness, on the other hand, focuses more on the syntactical aspect. It denotes that given a short piece of text, without even taking into account its context, to what extent it can be grammatically functional as a meaningful unit. Although the majority of keyphrases in existing datasets are noun phrases (Chuang et al., 2012), they can present in variant grammatical forms in the real world (Sun et al., 2021). We believe that phraseness can be independent from domains and thus can be obtained from domain-general data.
## 3 Methodology
In the spirit of the motivation discussed above, we propose a three-stage training procedure in which a model gradually moves its focus from learning domain-general phraseness towards domainspecific keyness, and eventually adapts to a new domain with only limited amount of annotated data.
An overview of the proposed pipeline is illustrated in Figure 2. First, with a Pre-Training stage (PT),
the model is trained with domain-general data to learn phraseness (§3.1). Subsequently, in the Domain Adaption stage (DA), the model is exposed with *unlabeled* in-domain data. Within a few iterations, the model labels the data itself and use them to gradually adapt to the new domain (§3.2).
Lastly, in the Fine-Tuning stage (FT), the model fully adapts itself to the new domain by leveraging a limited amount of in-domain data with true annotations (§3.3). In this section, we describe each of the three stages in detail.
## 3.1 Domain-General Phrase Pre-Training
The first training stage aims to capture the phraseness in general, we leverage the Wikipedia data and community labeled phrases from the text.
Wikipedia is an open-domain knowledge base that contains rich entity-centric annotations, its articles cover a wide spectrum of topics and domains and thus it has been extensively used as a resource of distant supervision for NLP tasks related to entities and knowledge (Ghaddar and Langlais, 2017; Yamada et al., 2020; Xiong et al., 2019b). In this work, we consider four types of markup patterns in Wikipedia text to form distant keyphrase labels:
- in-text phrases with special formatting (italic, boldface, and quotation marks);
- in-text phrases with wikilinks (denoting an entity in Wikipedia);
- "see also" phrases (denoting related entities); - "categories" phrases (denoting superordinate entities).
Although the constructed targets using the above heuristics can be noisy if considering the keyness aspect, we show that they work sufficiently for training general phrase generation models.
Given a piece of Wikipedia text t and a set of community labeled phrases, we convert this data point to the format of One2Seq as described in §2.1.
In practice, the number of phrases within t can be large and thus we sample a subset from them to form the target. We group all the phrases appear in t as present candidates, the rest (e.g., "see-also" and categories) are grouped as absent candidates.
Additionally, we take several random spans from t as infilling candidates (similar as (Raffel et al.,
2020)) for robustness. Finally, we sample a few candidates from each group and concatenate them as the final target sequence.
On the source side, we prepend a string suggesting the cardinality of phrases in each target group to the beginning of t. We also corrupt the source
![3_image_0.png](3_image_0.png)
sequence by replacing a small proportion of present and infilling phrases with a special token [MASK],
expecting to improve models' robustness (Raffel et al., 2020). We show an example of a processed Wikipedia data instance in Figure 3.
Trained with this data, we expect a model to become a general phrase generator - given a source text, the model can generate a sequence of phrases, regardless the specific domain a text belongs to.
## 3.2 Domain Adaption With Transfer Labeling
In the second stage, we aim to expose the model with data from a domain of interest, so it can learn the notion of domain-specific keyness. We propose a method, namely General-to-Specific Transfer Labeling , which does not require any in-domain annotated data. Transfer labeling can be considered as a special self-training method (Yarowsky, 1995; Culp and Michailidis, 2008; Mukherjee and Awadallah, 2020), where the key notion is to train a model with its own predictions iteratively.
Distinct from common practice of self-training where initial models are bootstrapped with annotated data, transfer labeling regards the domaingeneral model from the pre-training stage 3.1 as a qualified phrase predictor. We directly transfer the model to documents in a new domain to predict pseudo labels. The resulting phrases, paired with these documents, are used to tune the model so as to adapt it to the target domain distribution. Note that this process can be run iteratively, to gradually adapt models to target domains.
## 3.3 Low-Resource Fine-Tuning
In the third stage, we expose the model to a small amount of in-domain data with annotated keyphrases. This aims to help the model fully adapt to the new domain and reduce the bias caused by noisy labels from previous stages.
## 4 Experiments
We reuse the model architecture described in §2.2 throughout this paper. And most models apply a single iteration of transfer labeling. We discuss the effect of multi-iteration transfer labeling in §4.2.5.
See Appendix A.1 for implementation details.
## 4.1 Datasets And Evaluation Metric
We consider the same four large-scale KPG
datasets as described in §2.2, but instead of training models with all annotated document-keyphrases pairs, we take a large set of unannotated documents from each dataset for domain adaptation, and a small set of annotated examples for few-shot fine-tuning. Specifically, in the pre-training stage
(PT), we use the 2021-05-21 release of English Wikipedia dump and process it with wikiextractor package, which results in 3,247,850 passages.
In the domain adaptation stage (DA), for each domain, we take the first 100k examples from the training split (without keyphrases), and apply different strategies to produce pseudo labels and subsequently train the models. In the fine-tuning stage
(FT), we take the first 100/1k/10k annotated examples (document-keyphrases pairs) from the training split to train the models. We report the statistics of used datasets in appendix Table 7.
We follow previous studies to split training/validation/test sets, and report model performance on test splits of each dataset. A common practice in KPG studies is to evaluate the model performance on present/absent keyphrases separately.
However, the ratios of present/absent keyphrases differ drastically among the four datasets (e.g.
OpenKP is strongly extraction-oriented). Since we aim to improve the model's out-of-domain performance in general regardless of the keyphrases being present or absent, we follow Bahuleyan and El Asri (2020) and simply evaluate present and absent keyphrases altogether. We report the F@O
scores (Yuan et al., 2020a) between the generated keyphrases and the ground-truth. This metric requires systems to model the cardinality of predicted keyphrases themselves.
![4_image_0.png](4_image_0.png)
![4_image_1.png](4_image_1.png)
## 4.2 Results And Analyses 4.2.1 Zero-Shot Performance
We first investigate how well models can perform after the pre-training stage, without utilizing any in-domain annotated data. Since Wikipedia articles contain a rather wide range of phrase types, we expect models trained on this data are capable of predicting relevant and well-formed phrases from documents in general. We show our models' testing scores in the first row of Table 2 and 3, where only PT is checked. We observe that pre-training with Wikipedia data can provide decent zero-shot performance in both settings, i.e., model is initialized randomly (Table 2) and with pre-trained language models (3). Both settings achieve the same average F@O score of 12.2, which evinces the feasibility of using PT model to generate pseudo labels for further domain adaptation. The scores also suggest that at the pre-training stage, the BART model
(with pre-trained initialization and more parameters) does not present an advantage in comparison to a smaller model trained from scratch.
## 4.2.2 Domain Adaptation Strategies
We compare transfer labeling (TL, proposed in
§3.2) with two unsupervised strategies: (1) Noun Phrase (NP) and (2) Random Span (RS). For NP,
we employ SpaCy (Honnibal et al., 2020) to POStag source texts and extract noun phrases based on regular expressions. For RS, we follow Raffel et al. (2020), extracting random spans as targets and masking them in the source text. For TL, all pseudo phrases are generated by a PT model in a zero-shot manner (with greedy decoding).
As shown in Figure 4, in the single strategy setting, RS performs the best among the three strategies and TL follows. We speculate that RS models are trained to predict randomly masked spans based on their context, and this results in the best generalization among the three. As for the NP strategy, since targets are only noun phrases appear in the source text, the models may have the risk of overfitting to recognize a subset of possible phrases. TL
lies in between the two discussed strategies, the generated pseudo labels contain both present and absent phrases, and thanks to the PT model trained with Wikipedia data, the generated targets can contain many phrase types beyond noun phrases.
We further investigate the performance gap between RS and TL. On KP20k, the PT model can generate 5.1 present and 2.6 absent keyphrases on average. The generated pseudo labels, albeit of good quality, are always fixed during the training.
This is due to the deterministic nature of the PT
model, which may cause overfitting and limit the model's generalizability. In contrast, random spans in RS are dynamically generated, therefore a model can learn to generate different target phrases even the same documents appear multiple times during training. This motivates us to investigate if these strategies can be synergistic by combining them.
As shown in Figure 4, we observe that combining TL and RS can lead to a significant improvement over all other strategies, indicating that these two strategies are somewhat complementary and thus can be used together in domain adaption. In the rest of the paper, we by default combine TL and RS in the domain adaptation stage, by taking equal amount of data from both sides, we discuss other mixing strategies in Appendix A.3.
It is worth noting that, if we apply domain adaptation with the TL+RS mixing strategy and evaluate models without any fine-tuning (2nd row in Table 2/3), we can observe a clear drop in the performance of randomly initialized model (Table 2).
We believe it is because using random spans for targets worsens the phraseness of the predictions.
BART initialized models, on the other hand, show robust performance against these noisy targets.
## 4.2.3 Performance In Low-Data Setting
As described in §4.1, we use 100/1k/10k in-domain examples with gold standard keyphrases to finetune the model. To investigate the necessity of the PT and DA stages given the FT stage, we conduct a set of ablation experiments, skipping some of the training stages in the full pipeline.
We start with discussing the results of randomly initialized models (Table 2). **FT-only**: in the case where models are only fine-tuned with a small subset of annotated examples, models perform rather poorly on all datasets, especially on KP20k and OpenKP, where more unique target phrases are involved. **DA+FT**: different from the previous setting, here all models are first trained with 100k pseudo labeled in-domain data points. We expect these pseudo labeled data to improve models on both phraseness and keyness dimensions. Indeed, we observe DA+FT leads to a large performance boost in almost all settings. This suggests the feasibility of leveraging unlabeled in-domain data using the proposed adaptation method (TL+RS). **PT+FT**:
the pre-training stage provides a rather significant improvement in all settings, averaging over datasets and k-shot settings, PT+FT (23.8) nearly doubles the performance of DA+FT (12.6). This observation indicates that the large-scale pre-training with domain-general phrase data can be beneficial in various down-stream domains, which is consistent with prior studies for text generation pre-training.
PT+DA+FT: we observe a further performance boost when both PT and DA stages are applied before FT. This to some extent verifies our design that PT and DA can guide the models to focus on different perspectives of KPG and thus work in an complementary manner.
We also investigate when the model is initialized with a pre-trained large language model, i.e.,
BART (Lewis et al., 2020). Due to space limit, we only report models' average scores (over the four datasets, and over the k-shot settings) in Table 3, we refer readers to appendix Table 9 for the full results. We observe that in the pipeline, the finetuning stage provides TF-Bart the most significant performance boost - the average score is tripled, compared to the 0-shot settings, even performing solely the fine-tuning stage. This may be because the BART model was trained on a much wider range of domains of data (compared to Wikipedia, which is already domain-general), so it may have already contained knowledge in our four testing domains. However, the auto-regressive pre-training of BART does not train particularly on the KPG task.
This explains why it requires the BART model to fine-tune on KPG data to achieve higher performance. The above assumption can also be support by further observations in Table 3. Results suggest that the DA stage is not notably helpful to TF-Bart's scores, and the PT stage, on the other hand, seems to contribute to a better score. We believe this is because the quality difference between labels used in these two stages: PT uses
0-shot x 12.2
![6_image_4.png](6_image_4.png)
Average of
few-shot
(100/1k/10k)
![6_image_3.png](6_image_3.png)
community-labeled phrases (high phrase quality but domain-general) and DA uses labels generated by the model itself (no guarantee on phrase quality but closer to target domains). Since TF-Bart only needs specific knowledge about the KPG task, the PT stage can therefore be more helpful.
We run Wilcoxon signed-rank tests on the results of Table 2, and we find all differences between neighboring experiments (e.g. PT+FT vs.
PT+DA+FT, both trained with KP20k and 10kshot) are significant (p < 0.05). For Table 3, the improvement of PT+FT over the other three settings is also significant.
| Model | DA Data | 100-shot | 1k-shot | 10k-shot |
|------------|-----------|------------|-----------|------------|
| KP20k 100k | 16.7 | 19.7 | 22.1 | |
| MAG-CS 1m | 16.8 | 19.4 | 21.8 | |
| MAG-CS 12m | 17.6 | 20.4 | 22.8 | |
| KP20k 100k | 22.2 | 25.3 | 28.4 | |
| MAG-CS 1m | 22.3 | 25.4 | 28.4 | |
| MAG-CS 12m | 22.5 | 25.4 | 28.6 | |
## 4.2.4 Scaling The Domain Adaptation
One advantage of self-labeling is the potential to leverage large scale unlabeled data in target do-
0-shot x 15.0 10.0 9.1 14.8 12.2
![6_image_0.png](6_image_0.png)
x x 11.2 4.6 7.7 4.3 6.9
100-shotx 0.5 0.2 2.4 5.1 2.1
x x 14.1 5.6 5.3 11.7 9.2
x x 14.5 20.1 22.6 13.0 17.6
x x x 16.7 24.4 22.0 18.4 20.4
1k-shotx 0.5 0.6 5.4 7.0 3.4
x x 15.0 8.6 8.9 15.4 12.0
x x 17.6 25.5 30.5 21.1 23.7 x x x 19.7 28.0 30.7 26.3 26.2
10k-shotx 3.4 1.5 19.2 20.8 11.3
![6_image_2.png](6_image_2.png)
x x 16.5 13.1 13.4 23.4 16.6
x x 20.6 30.6 38.6 31.4 30.3 x x x 22.1 31.6 36.7 34.7 31.3
Avgx 1.5 0.8 9.0 11.0 5.6
x x 15.2 9.1 9.2 16.8 12.6
x x 17.6 25.4 **30.6** 21.8 23.8
x x x 19.5 **28.0** 29.8 26.5 **25.9**
Table 2: Zero-shot and low-data results obtained by TF-Rand. The best average score in each column is **boldfaced**.
mains. We also investigate this idea and build a large domain adaptation dataset by pairing an unlabeled dataset with pseudo labels produced by a PT model. To this end, we resort to the MAG
(Microsoft Academic Graph) dataset (Sinha et al.,
2015) and collect paper titles and abstracts from 12 million scientific papers in the domain of Computer Science, filtered by 'field of study'. The resulting subset MAG-CS is supposed to be in a domain close to KP20k, yet it may contain noisy data points due to errors in the MAG's data construction process.
We follow the same experiment setting as reported in the above subsections, except that in the DA
stage we either use 1 million or 12 million pseudolabeled MAG data points for domain adaptation.
We train the models with the PT+DA+FT pipeline and report models' scores on KP20k test split.
| PT | DA | FT | KP20k | OpenKP | KPTimes | StackEx | Avg |
|------|------|------|---------|----------|-----------|-----------|-------|
| x | x | 11.2 | 4.6 | 7.7 | 4.3 | 6.9 | |
| x | x | 14.1 | 5.6 | 5.3 | 11.7 | 9.2 | |
| x | x | 14.5 | 20.1 | 22.6 | 13.0 | 17.6 | |
| x | x | x | 16.7 | 24.4 | 22.0 | 18.4 | 20.4 |
| x | x | 15.0 | 8.6 | 8.9 | 15.4 | 12.0 | |
| x | x | 17.6 | 25.5 | 30.5 | 21.1 | 23.7 | |
| x | x | x | 19.7 | 28.0 | 30.7 | 26.3 | 26.2 |
| x | x | 16.5 | 13.1 | 13.4 | 23.4 | 16.6 | |
| x | x | 20.6 | 30.6 | 38.6 | 31.4 | 30.3 | |
| x | x | x | 22.1 | 31.6 | 36.7 | 34.7 | 31.3 |
| x | x | 15.2 | 9.1 | 9.2 | 16.8 | 12.6 | |
| x | x | 17.6 | 25.4 | 30.6 | 21.8 | 23.8 | |
| x | x | x | 19.5 | 28.0 | 29.8 | 26.5 | 25.9 |
![6_image_1.png](6_image_1.png)
As shown in Table 4, compared to our default setting which uses 100k unlabeled KP20k data points for domain adaptation, larger scale domain adaptation data can indeed benefit model performance —
models adapted with MAG-CS 12m documents show consistent improvements. However, the MAG-CS
1m data (still 10 times the size of KP20k) does not show clear evidence being helpful. We suspect the distribution gap between the domain adaptation data (i.e., MAG-CS) and the testing data (i.e.,
KP20k) may have caused the extra need of generalization. Therefore, the MAG-CS 12m data may represent a data distribution that has more overlap with KP20k and thus being more helpful. We also observe that models initialized with BART are more robust against such a distribution gap, on account of BART's pre-training with large scale of text in general domain.
## 4.2.5 Multi-Iteration Domain Adaptation
Prior self-training studies have demonstrated the benefit of multi-iterations of label propagation (Triguero et al., 2015; Li et al., 2019). We conduct experiments to investigate its effects on KPG.
Specifically, we first pre-train a TF-Rand model using Wikipedia data as in previous subsections.
Then, we repeatedly perform the domain adaptation stage multiple times. In each iteration, the model produces pseudo labels from the in-domain documents and then train itself with this data. Finally, we fine-tune the model with 10k annotated data points, and report its test scores on KP20k. We consider two datasets, KP20k and MAG-CS 1m, as the in-domain data for domain adaptation. As illustrated in Figure 5, the TF-Rand model can gradually gain better test performance by iteratively performing domain adaptation using both datasets. Due to limited computing resources, we set the maximum number of iterations to 10. But the trend suggests that models may benefit from more DA iterations.
![7_image_0.png](7_image_0.png)
## 5 Related Work
Keyphrase Generation. Meng et al. (2017b) first propose KPG, which enables models to generate keyphrases according to importance and semantics, rather than extracting sub-strings from the text (Witten et al., 1999; Liu et al., 2011; Wang et al., 2016).
Following this idea, Chen et al. (2019); Wang et al.
(2019); Liang et al. (2021) propose to leverage extra structure information (e.g., title, topic) to guide the generation. Chan et al. (2019); Luo et al. (2021) propose a model using reinforcement learning, and Swaminathan et al. (2020) propose using GAN for KPG. Ye et al. (2021) propose to dynamically align target phrases to eliminate the influence of target phrase order, a problem highlighted by Meng et al.
(2021a). Mu et al. (2020); Liu et al. (2020); Park and Caragea (2020) use pre-trained language models for better representations of documents. In a similar vein, Ye and Wang utilize self-learning to generate synthetic phrases for data augmentation, whereas we use self-labeling for domain adapation. Gao et al. use a dense retriever to augment keyphrase generation in the cross-lingual scenario.
Pre-training for Phrase/Entity Understanding.
Meng et al. (2021a) show that pre-train models with noisy annotation can deliver great improvements on KPG. Kulkarni et al. (2021) pre-train an understanding and a generation model with a largescale annotated dataset OAGKX (Çano and Bojar, 2020) and the resulting models achieve decent performance on various NLP tasks. Both studies use a large amount of annotated data for pre-training, which is only available for certain domains. Wang et al. (2021); Li et al. (2022) use contrastive learning to train phrase encoders. Wang et al. (2021); Li et al. (2022) use contrastive learning to train phrase encoders. Lee et al. (2021) find open-domain QA
datasets can be used to learn strong dense phrase representations. Wikipedia is also frequently used in training models for entity-centric and knowledgerich tasks. (Yamada et al., 2020; Liu et al., 2021; Xiong et al., 2019b; Meng et al., 2021b; Huang et al., 2021) use Wikipedia and its related resources as distant supervision to enhance BERT's abilities on modeling entities.
Self-labeling. Self-labeling or self-training is a typical means for utilizing unannotated data and it has been applied in various machine learning tasks (He et al., 2019; Mukherjee and Awadallah, 2020). Yu et al. (2021) define rules as weak supervision for text classification and use self training to propagate labels to new documents. In our case, the pseudo labels are induced by models pre-trained with weak phrase annotation in Wikipedia. Liang et al. (2020) use self-training to supplement distantly supervised NER and Huang et al. (2021) use self-training to leverage unlabeled in-domain data.
## 6 Conclusion
In this study, we investigate domain gaps in the KPG task that hinder models from generalization.
We attempt to alleviate this issue by proposing a three-stage pipeline to strategically enhance models' abilities on keyness and phraseness. Essentially, we consider phraseness as a domain-general property and can be acquired from Wikipedia data as distant supervision. Then we use self-labeling to distill the phraseness into data in a new domain, and the resulting pseudo labels are used for domain adaptation, as the labels can reflect the keyness and phraseness of the new domain. Finally, we finetune the model with limited amount of target domain data with true labels. By taking the advantage of open-domain knowledge on the web, we believe this general-to-specific paradigm is generic and can be applied to a wide variety of machine learning tasks. As a next step, we plan to employ the proposed method for text classification and information retrieval, to see whether the domain-general phrase model can produce reliable class labels and queries for domain adaptation.
## Limitations
In this study, we provide empirical evidence of the impact of domain gap in keyphrase tasks, and we propose effective methods to alleviate it. However, we acknowledge that this study is limited in the following aspects: (1) As the first study discussing domain adapation and few-shot results, there is few studies to refer to as fair baselines. Nevertheless, we attempt to show the improvements of the proposed methods over base models by extensive experiments. (2) The pretrained keyphrase generation model can be used off-the-shelf, but the multi-stage adaptation pipeline might increase the engineering complexity in practice. (3) We have only explored three strategies for domain adaptation, and they all require generating hard pseudo labels in different ways. Soft-labeling (Liang et al., 2020) and knowledge distillation (Zhou et al., 2021) methods are worth investigating. (4) We train a model with Wikipedia annotation to predict pseudo keyphrases, and it would be interesting to see if we can use large language models (e.g. GPT-3 (Brown et al.,
2020)) to zero-shot predict phrases.
## Ethics Statement
Dataset Biases The domain-general pseudo phrases were produced based on public web-scale data (Wikipedia), and it mainly represents the culture of the Englishspeaking populace. Political or gender biases may also exist in the dataset, and models trained on these datasets may propagate these biases. Additionally, the pretrained BART
models can carry biases from the data it was pretrained on.
Environmental Cost The experiments described in the paper primarily make use of V100 GPUs.
We typically used four GPUs per experiment, and the first-stage pretraining may take up to four days.
The backbone model BART-LARGE 400 million parameters. While our work required extensive experiments, future work and applications can draw upon ourinsights and need not repeat these comparisons.
## References
Hareesh Bahuleyan and Layla El Asri. 2020. Diverse keyphrase generation with neural unlikelihood training. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 5271–
5287.
Marina Bondi and Mike Scott. 2010. *Keyness in texts*,
volume 41. John Benjamins Publishing.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901.
Erion Çano and Ondˇrej Bojar. 2020. Two huge title and keyword generation corpora of research articles. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 6663–6671.
Hou Pong Chan, Wang Chen, Lu Wang, and Irwin King.
2019. Neural keyphrase generation via reinforcement learning with adaptive rewards. In *Proceedings of* the 57th Annual Meeting of the Association for Computational Linguistics, pages 2163–2174, Florence, Italy. Association for Computational Linguistics.
Wang Chen, Yifan Gao, Jiani Zhang, Irwin King, and Michael R. Lyu. 2019. Title-guided encoding for keyphrase generation. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI
Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA,
January 27 - February 1, 2019, pages 6268–6275.
AAAI Press.
Jason Chuang, Christopher D Manning, and Jeffrey Heer. 2012. "without the clutter of unimportant words" descriptive keyphrases for text visualization.
ACM Transactions on Computer-Human Interaction (TOCHI), 19(3):1–29.
Mark Culp and George Michailidis. 2008. An iterative algorithm for extending learners to a semi-supervised setting. *Journal of Computational and Graphical* Statistics, 17(3):545–571.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep
bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of the* North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–
4186.
Costas Gabrielatos. 2018. Keyness analysis: Nature, metrics and techniques. In *Corpus approaches to* discourse, pages 225–258. Routledge.
Ygor Gallina, Florian Boudin, and Béatrice Daille. 2019.
Kptimes: A large-scale dataset for keyphrase generation on news documents. In *Proceedings of the* 12th International Conference on Natural Language Generation, pages 130–135.
Yifan Gao, Qingyu Yin, Zheng Li, Rui Meng, Tong Zhao, Bing Yin, Irwin King, and Michael Lyu. 2022.
Retrieval-augmented multilingual keyphrase generation with retriever-generator iterative training. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1233–1246, Seattle, United States. Association for Computational Linguistics.
Abbas Ghaddar and Phillippe Langlais. 2017. WiNER:
A Wikipedia annotated corpus for named entity recognition. In *Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)*, pages 413–422, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Suchin Gururangan, Ana Marasovic, Swabha ´
Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. 2020. Don't stop pretraining:
Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342–8360.
Junxian He, Jiatao Gu, Jiajun Shen, and Marc'Aurelio Ranzato. 2019. Revisiting self-training for neural sequence generation. In *International Conference on* Learning Representations.
Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. 2020. spaCy: Industrialstrength Natural Language Processing in Python.
Jiaxin Huang, Chunyuan Li, Krishan Subudhi, Damien Jose, Shobana Balakrishnan, Weizhu Chen, Baolin Peng, Jianfeng Gao, and Jiawei Han. 2021. Fewshot named entity recognition: An empirical baseline study. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10408–10423.
Mayank Kulkarni, Debanjan Mahata, Ravneet Arora, and Rajarshi Bhowmik. 2021. Learning rich representation of keyphrases from text. arXiv preprint arXiv:2112.08547.
Jinhyuk Lee, Mujeen Sung, Jaewoo Kang, and Danqi Chen. 2021. Learning dense representations of phrases at scale. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6634–6647.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart:
Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 7871–7880.
Jiacheng Li, Jingbo Shang, and Julian McAuley. 2022.
Uctopic: Unsupervised contrastive learning for phrase representations and topic mining. In *Proceedings of the 60th Annual Meeting of the Association for* Computational Linguistics (Volume 1: Long Papers),
pages 6159–6169.
Xinzhe Li, Qianru Sun, Yaoyao Liu, Qin Zhou, Shibao Zheng, Tat-Seng Chua, and Bernt Schiele. 2019.
Learning to self-train for semi-supervised few-shot classification. *Advances in Neural Information Processing Systems*, 32.
Chen Liang, Yue Yu, Haoming Jiang, Siawpeng Er, Ruijia Wang, Tuo Zhao, and Chao Zhang. 2020. Bond:
Bert-assisted open-domain named entity recognition with distant supervision. In *Proceedings of the 26th* ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1054–1064.
Xinnian Liang, Shuangzhi Wu, Mu Li, and Zhoujun Li.
2021. Unsupervised keyphrase extraction by jointly modeling local and global context. In *Proceedings of* the 2021 Conference on Empirical Methods in Natural Language Processing, pages 155–164, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Rui Liu, Zheng Lin, Peng Fu, and Weiping Wang. 2020.
Reinforced keyphrase generation with bert-based sentence scorer. In *2020 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big* Data & Cloud Computing, Sustainable Computing &
Communications, Social Computing & Networking
(ISPA/BDCloud/SocialCom/SustainCom), pages 1–8.
IEEE.
Zhiyuan Liu, Xinxiong Chen, Yabin Zheng, and Maosong Sun. 2011. Automatic keyphrase extraction by bridging vocabulary gap. the Fifteenth Conference on Computational Natural Language Learning.
Zihan Liu, Feijun Jiang, Yuxiang Hu, Chen Shi, and Pascale Fung. 2021. Ner-bert: a pre-trained model for low-resource entity tagging. *arXiv preprint* arXiv:2112.00405.
Yichao Luo, Yige Xu, Jiacheng Ye, Xipeng Qiu, and Qi Zhang. 2021. Keyphrase generation with finegrained evaluation-guided reinforcement learning. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 497–507, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Rui Meng, Xingdi Yuan, Tong Wang, Sanqiang Zhao, Adam Trischler, and Daqing He. 2021a. An empirical study on neural keyphrase generation. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 4985–5007, Online. Association for Computational Linguistics.
Rui Meng, Sanqiang Zhao, Shuguang Han, Daqing He, Peter Brusilovsky, and Yu Chi. 2017a. Deep keyphrase generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 582–592.
Association for Computational Linguistics.
Rui Meng, Sanqiang Zhao, Shuguang Han, Daqing He, Peter Brusilovsky, and Yu Chi. 2017b. Deep keyphrase generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 582–592, Vancouver, Canada. Association for Computational Linguistics.
Yu Meng, Yunyi Zhang, Jiaxin Huang, Xuan Wang, Yu Zhang, Heng Ji, and Jiawei Han. 2021b. Distantlysupervised named entity recognition with noiserobust learning and language model augmented selftraining. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 10367–10378.
Funan Mu, Zhenting Yu, LiFeng Wang, Yequan Wang, Qingyu Yin, Yibo Sun, Liqun Liu, Teng Ma, Jing Tang, and Xing Zhou. 2020. Keyphrase extraction with span-based feature representations. arXiv preprint arXiv:2002.05407.
Subhabrata Mukherjee and Ahmed Awadallah. 2020.
Uncertainty-aware self-training for few-shot text classification. *Advances in Neural Information Processing Systems*, 33:21199–21212.
Seoyeon Park and Cornelia Caragea. 2020. Scientific keyphrase identification and classification by pretrained language models intermediate task transfer learning. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5409–5419, Barcelona, Spain (Online). International Committee on Computational Linguistics.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*,
12:2825–2830.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21:1–
67.
Arnab Sinha, Zhihong Shen, Yang Song, Hao Ma, Darrin Eide, Bo-June Hsu, and Kuansan Wang. 2015. An overview of microsoft academic service (mas) and applications. In Proceedings of the 24th international conference on world wide web, pages 243–246.
Si Sun, Zhenghao Liu, Chenyan Xiong, Zhiyuan Liu, and Jie Bao. 2021. Capturing global informativeness in open domain keyphrase extraction. In *CCF International Conference on Natural Language Processing and Chinese Computing*, pages 275–287.
Springer.
Avinash Swaminathan, Haimin Zhang, Debanjan Mahata, Rakesh Gosangi, Rajiv Shah, and Amanda Stent. 2020. A preliminary exploration of gans for keyphrase generation. In *Proceedings of the 2020* Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8021–8030.
Takashi Tomokiyo and Matthew Hurst. 2003. A language model approach to keyphrase extraction. In Proceedings of the ACL 2003 workshop on Multiword expressions: analysis, acquisition and treatment, pages 33–40.
Isaac Triguero, Salvador García, and Francisco Herrera.
2015. Self-labeled techniques for semi-supervised learning: taxonomy, software and empirical study.
Knowledge and Information systems, 42(2):245–284.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30.
Minmei Wang, Bo Zhao, and Yihua Huang. 2016. Ptr:
Phrase-based topical ranking for automatic keyphrase extraction in scientific publications. *23rd International Conference, ICONIP 2016*.
Shufan Wang, Laure Thompson, and Mohit Iyyer. 2021.
Phrase-bert: Improved phrase embeddings from bert with an application to corpus exploration. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10837–10851.
Yue Wang, Jing Li, Hou Pong Chan, Irwin King, Michael R. Lyu, and Shuming Shi. 2019. Topicaware neural keyphrase generation for social media language. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*,
pages 2516–2526, Florence, Italy. Association for Computational Linguistics.
Ian H. Witten, Gordon W. Paynter, Eibe Frank, Carl Gutwin, and Craig G. Nevill-Manning. 1999. Kea:
Practical automatic keyphrase extraction. In Proceedings of the Fourth ACM Conference on Digital Libraries, DL '99, pages 254–255, New York, NY,
USA. ACM.
Lee Xiong, Chuan Hu, Chenyan Xiong, Daniel Campos, and Arnold Overwijk. 2019a. Open domain web keyphrase extraction beyond language modeling. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5175–5184.
Wenhan Xiong, Jingfei Du, William Yang Wang, and Veselin Stoyanov. 2019b. Pretrained encyclopedia:
Weakly supervised knowledge-pretrained language model. In International Conference on Learning Representations.
Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, and Yuji Matsumoto. 2020. Luke: Deep contextualized entity representations with entity-aware self-attention. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 6442–6454.
David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In 33rd annual meeting of the association for computational linguistics, pages 189–196.
Hai Ye and Lu Wang. 2018. Semi-supervised learning for neural keyphrase generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4142–4153, Brussels, Belgium. Association for Computational Linguistics.
Jiacheng Ye, Tao Gui, Yichao Luo, Yige Xu, and Qi Zhang. 2021. One2Set: Generating diverse keyphrases as a set. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1:
Long Papers), pages 4598–4608, Online. Association for Computational Linguistics.
Yue Yu, Simiao Zuo, Haoming Jiang, Wendi Ren, Tuo Zhao, and Chao Zhang. 2021. Fine-tuning pretrained language model with weak supervision: A
contrastive-regularized self-training approach. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1063–1077.
Xingdi Yuan, Tong Wang, Rui Meng, Khushboo Thaker, Peter Brusilovsky, Daqing He, and Adam Trischler.
2020a. One size does not fit all: Generating and evaluating variable number of keyphrases. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7961–7975.
Xingdi Yuan, Tong Wang, Rui Meng, Khushboo Thaker, Peter Brusilovsky, Daqing He, and Adam Trischler.
2020b. One size does not fit all: Generating and evaluating variable number of keyphrases. In *Proceedings of the 58th Annual Meeting of the Association* for Computational Linguistics, pages 7961–7975, Online. Association for Computational Linguistics.
Xuan Zhou, Xiao Zhang, Chenyang Tao, Junya Chen, Bing Xu, Wei Wang, and Jing Xiao. 2021. Multigrained knowledge distillation for named entity recognition. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5704–5716.
## A Appendix A.1 Implementation Details
Most experiments make use of four V100 GPUs.
We elaborate the training hyper-parameters for reproducing our results in Table 5 and 6. For inference, we follow previous studies (Yuan et al.,
2020b; Meng et al., 2021a) that uses beam search to produce multiple keyphrase predictions (beam width of 50, max length of 40 tokens). We report testing scores with best checkpoints, which achieve best performance on valid set (2,000 data instances for all domains).
Phrase masking ratio denotes for p% of target phrases, replacing their appearances in the source text with a special token [PRESENT].
Random span ratio denotes replacing p% of words in the source text with a special token
[MASK].
![12_image_0.png](12_image_0.png)
![12_image_1.png](12_image_1.png)
## A.2 Data Statistics A.3 Additional Results And Analyses
Figure 6 and 7 show additional results of domain adaptation. In Figure 6, we find that larger beam widths do not lead to significantly better scores after fine-tuning and thus we use simple greedy decoding for most of this study. In Figure 7, we compare various domain adapation strategies of mixing different pseudo labels. Overall, we find that mixing labels of transfer labeling (TL) and random spans (RS) by 50%:50% leads to best performance.
In Figure 8, we use T-SNE to visualize 1,000 most frequent keyphrases from each of four datasets (100k data examples from the training split) in the semantic space. We use BERTbase (Devlin et al., 2019) to generate phrase embeddings (we feed forward each phrase independently as a sequence and take the [CLS] embedding as output). We use the T-SNE of Scikit-Learn (Pedregosa et al., 2011) with default hyperparameters.
The result shows that phrases from each domain tend to gather into clusters. Particularly, we can see that a big overlap between KP20k and StackEx since both domains are related to Computer Science. The distribution of OpenKP is more spread out, as its documents are collected from the web and cover a broader range of topics.
We present the full results of TF-Rand and TF-Bart in Table 8 and 9. Besides, we supplement the evaluation with another two popular datasets:
JPTimes (for models trained in the JPTimes domain) and DUC-2001 (for models trained in the OpenKP domain).
| Hparam | PT | DA | PT+DA/PT+DAMAG-CS | FT 100/1k/10k | *FT 100/1k/10k |
|----------------------|-------------|-------------|---------------------|-----------------|------------------|
| Max source length | 512 | | | | |
| Max target length | 128 | | | | |
| Max phrase number | 16 | 8 | | | |
| Max phrase length | 16 | 8 | | | |
| Phrase masking rate | 0.1 | | | | |
| Random span ratio | 0.05 | | | | |
| Batch size | ≈80 | 100 | 100 | 100 | 100 |
| Learning rate | 3e-4 | 3e-4 | 1e-5 | 3e-4 | 1e-5 |
| Number of steps | 200k | 40k | 20k/200k | 2k/4k/8k | 1k/2k/4k |
| Warmup steps | 10% | | | | |
| Learning rate decay | linear | | | | |
| Optimizer | Adam | | | | |
| Adam β1 | 0.9 | | | | |
| Adam β2 | 0.998 | | | | |
| Adam epsilon | 1e-6 | | | | |
| Max gradient norm | 2.0 | 2.0 | 1.0 | 1.0 | 1.0 |
| Dropout | 0.1 | | | | |
| BPE Dropout | 0.1 | 0.0 | 0.0 | 0.0 | 0.0 |
| Label smoothing | 0.1 | | | | |
| Save checkpoint freq | ending step | ending step | ending step | 100/200/400 | 50/100/200 |
Table 5: Training hyperparameters for TF-Rand. *FT denotes the fine-tuning stage in cases of PT+FT or PT+DA+FT. Empty cell means it is the same as the leftmost value.
| Hparam | PT | DA | PT+DA/PT+DAMAG-CS | FT 100/1k/10k |
|----------------------|-------------|-------------|---------------------|-----------------|
| Max source length | 512 | | | |
| Max target length | 256 | 256 | 256 | 128 |
| Max phrase number | 16 | | | |
| Max phrase length | 6 | 8 | 8 | 8 |
| Phrase masking rate | 0.1 | | | |
| Random span ratio | 0.05 | | | |
| Batch size | 256 | 256 | 256 | 16 |
| Learning rate | 1e-5 | | | |
| Number of steps | 40k | 5k | 5k/20k | 2k/4k/8k |
| Warmup steps | 2.4k | 300 | 300/1.2k | 200/400/800 |
| Learning rate decay | linear | | | |
| Optimizer | Adam | | | |
| Adam β1 | 0.9 | | | |
| Adam β2 | 0.98 | 0.98 | 0.98 | 0.999 |
| Adam epsilon | 1e-8 | | | |
| Weight decay | 0.01 | | | |
| Max gradient norm | 1.0 | - | - | 0.1 |
| Dropout | 0.1 | | | |
| Label smoothing | 0.1 | | | |
| Save checkpoint freq | ending step | ending step | 100/200/400 | 50/100/200 |
| #doc | #words in doc | #kp | #unique | #kp | #uni kp | #present kp | #absent kp | |
|-------------------------|-----------------|---------|-----------|---------|-----------|---------------|--------------|-----|
| kp | per doc | per doc | per doc | per doc | | | | |
| Training Sets Wikipedia | 3.2m | - | - | - | - | - | - | - |
| KP20k | 514.2k | 161 | 2.7m | 680.1k | 5.3 | 1.3 | 3.3 | 1.9 |
| OpenKP | 134.9k | 1104 | 294.2k | 206.8k | 2.2 | 1.5 | 2.1 | 0.0 |
| KPTimes | 259.9k | 803 | 1.3m | 104.8k | 5.0 | 0.4 | 2.4 | 2.6 |
| StackEx | 299.0k | 207 | 803.9k | 8.1k | 2.7 | 0.0 | 1.6 | 1.1 |
| MAG-CS 1M | 1.0m | 151 | 9.6m | 1.7m | 9.6 | 1.7 | 3.4 | 6.2 |
| MAG-CS 12M† | 12.1m | 151 | 115.9m | 14.3m | 9.6 | 1.2 | 3.4 | 6.2 |
| Test Sets KP20k | 19,987 | 161 | 105.2k | 55.9k | 5.3 | 2.8 | 3.3 | 1.9 |
| OpenKP | 6,614 | 894 | 14.6k | 13.6k | 2.2 | 2.0 | 2.0 | 0.2 |
| KPTimes | 10,000 | 804 | 50.4k | 13.9k | 5.0 | 1.4 | 2.4 | 2.6 |
| StackEx | 16,000 | 205 | 43.1k | 4.5k | 2.7 | 0.3 | 1.6 | 1.1 |
| JPTimes | 10,000 | 517 | 50.3k | 9.0k | 5.0 | 0.9 | 4.0 | 1.0 |
| DUC-2001 | 308 | 701 | 2.5k | 1.8k | 8.1 | 6.0 | 7.9 | 0.2 |
Table 6: Training hyperparameters for TF-Bart. Empty cell means it is the same as the leftmost value.
Table 7: Statistics of training/testing datasets used in this study. †Only 7.7m papers in MAG-CS 12M have keyphrases.
0-shot x 15.0 10.0 9.1 14.8 12.2 15.8 9.4
x x 11.2 4.6 7.7 4.3 6.9 12.7 6.6
100-shot
x 0.5 0.2 2.4 5.1 2.1 2.4 0.2
x x 14.1 5.6 5.3 11.7 9.2 7.6 4.0
x x 14.5 20.1 22.6 13.0 17.6 24.1 20.5 x x x 16.7 24.4 22.0 18.4 20.4 24.2 20.3
1k-shot
x 0.5 0.6 5.4 7.0 3.4 2.0 0.6
x x 15.0 8.6 8.9 15.4 12.0 9.0 4.8
x x 17.6 25.5 30.5 21.1 23.7 25.8 20.6 x x x 19.7 28.0 30.7 26.3 26.2 26.1 22.5
10k-shot
x 3.4 1.5 19.2 20.8 11.3 8.5 0.7
x x 16.5 13.1 13.4 23.4 16.6 9.6 6.4
x x 20.6 30.6 38.6 31.4 30.3 25.7 24.3 x x x 22.1 31.6 36.7 34.7 31.3 27.1 23.6
PT DA FT KP20k OpenKP KPTimes StackEx Average over 4 JPTimes DUC-2001 Avg x 1.5 0.8 9.0 11.0 5.6 4.3 0.5 x x 15.2 9.1 9.2 16.8 12.6 8.7 5.1 x x 17.6 25.4 **30.6** 21.8 23.8 25.2 21.8 x x x 19.5 **28.0** 29.8 26.5 25.9 25.8 **22.1**
Table 8: Zero-shot and low-data results. Models are randomly initialized. The best average score is boldfaced.
0-shot x 14.7 9.7 10.5 13.9 12.2 16.3 9.8
x x 13.8 10.7 12.0 11.5 12.0 17.5 11.6
100-shot
x 22.3 32.8 31.6 29.6 29.1 27.9 16.6
x x 22.5 33.3 32.0 29.2 29.3 28.7 20.7
x x 22.4 33.7 31.6 31.1 29.7 28.4 21.5
x x x 22.2 32.0 31.6 29.7 28.9 28.4 21.5
![14_image_0.png](14_image_0.png)
x 25.1 36.4 43.6 41.1 36.5 33.2 21.1 x x 25.3 36.2 43.2 40.9 36.4 31.8 21.0 x x 24.9 36.9 44.3 41.2 36.8 34.0 22.7 x x x 25.3 36.5 42.9 40.1 36.2 31.9 22.1 x 28.2 40.8 53.3 49.3 42.9 34.4 23.2 x x 28.0 41.5 53.4 49.6 43.1 34.5 25.0 x x 28.2 41.3 53.4 49.7 43.1 34.2 25.0 x x x 28.4 41.2 53.2 49.8 43.1 34.9 25.6
PT DA FT KP20k OpenKP KPTimes StackEx Average over 4 JPTimes DUC-2001
![14_image_1.png](14_image_1.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Sec 7 A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Sec 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Code Will Be Open-Sourced Under Mit License
✓ B1. Did you cite the creators of artifacts you used?
Sec 4.1
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Code will be open-sourced under MIT License
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Code will be open-sourced under MIT License B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Sec 2.2 and A.2
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Sec 2.2 and A.2
## C ✓ **Did You Run Computational Experiments?** Sec 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
All models are based on public checkpoints. Computational budget was not recorded and multiple configs of infrastructure have been used. But all codes are public.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Sec A.1
✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Just a single run due to computational constraints, but we experimented with various models/datasets.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Sec 4.1 and A.1 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
zhang-etal-2023-e | {E}-{NER}: Evidential Deep Learning for Trustworthy Named Entity Recognition | https://aclanthology.org/2023.findings-acl.103 | Most named entity recognition (NER) systems focus on improving model performance, ignoring the need to quantify model uncertainty, which is critical to the reliability of NER systems in open environments. Evidential deep learning (EDL) has recently been proposed as a promising solution to explicitly model predictive uncertainty for classification tasks. However, directly applying EDL to NER applications faces two challenges, i.e., the problems of sparse entities and OOV/OOD entities in NER tasks. To address these challenges, we propose a trustworthy NER framework named E-NER by introducing two uncertainty-guided loss terms to the conventional EDL, along with a series of uncertainty-guided training strategies. Experiments show that E-NER can be applied to multiple NER paradigms to obtain accurate uncertainty estimation. Furthermore, compared to state-of-the-art baselines, the proposed method achieves a better OOV/OOD detection performance and better generalization ability on OOV entities. | # E-Ner: Evidential Deep Learning For Trustworthy Named Entity Recognition
Zhen Zhang1 Mengting Hu1∗ Shiwan Zhao† Minlie Huang2 **Haotian Wang**1 Lemao Liu3 Zhirui Zhang3 Zhe Liu4 **Bingzhe Wu**3* 1 College of Software, Nankai University, 2 The CoAI group, Tsinghua University 3 Tencent AI Lab, 4 Zhejiang Lab [email protected], [email protected]
## Abstract
Most named entity recognition (NER) systems focus on improving model performance, ignoring the need to quantify model uncertainty, which is critical to the reliability of NER systems in open environments. Evidential deep learning (EDL) has recently been proposed as a promising solution to explicitly model predictive uncertainty for classification tasks. However, directly applying EDL to NER applications faces two challenges, i.e., the problems of sparse entities and *OOV/OOD entities* in NER
tasks. To address these challenges, we propose a trustworthy NER framework named ENER 1 by introducing two uncertainty-guided loss terms to the conventional EDL, along with a series of uncertainty-guided training strategies. Experiments show that E-NER can be applied to multiple NER paradigms to obtain accurate uncertainty estimation. Furthermore, compared to state-of-the-art baselines, the proposed method achieves a better OOV/OOD detection performance and better generalization ability on OOV entities.
## 1 Introduction
Named entity recognition (NER) aims to locate and classify entities in unstructured text, such as extracting LOCATION information *"New York"*
from the sentence *"How far is New York from me"*.
Thanks to the development of deep neural network
(DNN), current NER methods have achieved remarkable performance on a wide range of benchmarks (Lample et al., 2016; Yamada et al., 2020; Li et al., 2022).
Despite this progress, current NER-related research typically focuses on improving the model performance, such as recognition accuracy and F1 scores (Yu et al., 2020; Zhu and Li, 2022).
∗ Mengting Hu and Bingzhe Wu are the corresponding authors.
†Independent researcher.
1https://github.com/Leon-bit-9527/ENER
![0_image_0.png](0_image_0.png)
Prediction: PERSON (ID)
: <Albert *Einstein>*
: <AIBErT *Einstwin>*
Prediction: PERSON (OOV: *Typos*)
: *<HiteJinro>*
Prediction: Other (OOV: unseen entity)
: <Muhammad Ali>
Prediction: Other (OOD: unknow domain)
However, seldom works focus on investigating the model's reliability. The critical aspect of the model reliability is the uncertainty estimation of the predictive results, which can characterize the probability that the model prediction will be wrong. One natural way to construct the predictive uncertainty is based on the maximum value of the Softmax output (Yan et al., 2021; Li et al., 2022; Zhu and Li, 2022) (the smaller this value, the larger the uncertainty). However, previous empirical studies show that probabilistic predictions produced by DNN models (e.g., transformer and CNN) are often inaccurate (Guo et al., 2017; Lee et al., 2018; Pinto et al., 2022). Therefore, this natural way may over/under-estimate the predictive uncertainty, hindering the model's reliability.
High-quality uncertainty estimation helps to improve the model's reliability in an open environment and to find valuable samples to improve training sample efficiency, thus reducing the cost of manual labeling. On the one hand, for the reliability aspect, accurate uncertainty estimation can equip the NER model with the ability to express 1619
"I do not know" to both the out-of-domain (OOD)
or out-of-vocabulary (OOV) samples (Charpentier et al., 2020). A desired uncertainty estimation is conceptually shown in Figure 1, wherein misclassified OOV/OOD entities are assigned with significantly higher uncertainty than the in-domain
(ID) entities. Besides, the estimated uncertainty can be further absorbed into the training process to improve the model robustness against OOV/OOD
samples. On the other hand, for the sample efficiency aspect, prior work shows that high-quality uncertainty estimation can also be used for selecting more "informative" samples and thus can reduce the number of labeled samples required for training the NER model.
To attain high-quality uncertainty estimation, evidential deep learning (EDL) (Sensoy et al., 2018)
provides a promising solution. EDL is superior to existing Bayesian learning-based methods (Blundell et al., 2015; Kingma et al., 2015; Graves, 2011)
in that model uncertainty can be efficiently estimated in a single forward pass that avoids inexact posterior approximation (Kopetzki et al., 2021)
or time/storage-consuming Monte Carlo sampling
(Gal and Ghahramani, 2016). However, directly applying conventional EDL to NER applications still faces two critical challenges: (1) *sparse entities*:
In text corpus, entities only take a minority. For example, only 16.8% of the words in the commonly used CoNLL2003 dataset belong to entities. The remaining non-entity types are labeled into the "others" (O) class. The imbalance between entity and non-entity words can cause over-fitting and poor performance on the entity types. (2) *OOV/OOD entity discrimination*: In the open environment, NER
training/test data typically comes with OOV/OOD
entities. However, the optimization objective of current EDL methods lacks explicit modeling of such types of information.
To address these two issues, we present a trustworthy NER framework named E-NER with a series of uncertainty-guided training strategies. For the issue of sparse entities, we propose to use an uncertainty-guided importance weighted (IW) loss, wherein samples with higher predictive uncertainties are assigned larger weights. This loss helps the model training to pay more attention to entities of interest (e.g., person and location). To solve the issue of unknown entities, we present an additional regularization term to penalize the case where labels are more prone to errors by assigning higher uncertainties to corresponding samples. We empirically show these two uncertainty-guided loss terms can improve both the quality of estimated confidence and the robustness against OOV samples.
Our contributions are summarized as follows:
- To the best of our knowledge, E-NER is the first work to explore how to leverage evidential deep learning to improve the reliability of current NER models. This work has successfully shown the potential of EDL to provide high-quality uncertainty estimation in NER applications. The estimated uncertainty can be further used for detecting OOD/OOV samples in the test phase.
- For the technique contribution, we propose two uncertainty-guided loss terms to mitigate sparse entities and OOV/OOD entity discrimination issues in the NER task.
- E-NER is extensively validated in a series of experiments. In contrast to conventional NER methods, the result shows that E-NER comes with the following superiority:
(1) more accurate uncertainty estimation. (2)
better OOV/OOD detection performance. (3) better generalization ability on OOV entities.
(4) better sample efficiency (i.e., fewer samples are required to achieve the same-level performance).
## 2 Preliminary
This section introduces a commonly-used EDL implementation based on the Dirichlet-based model
(DBM) (Sensoy et al., 2018). We then describe how the DBM computes the uncertainty in a closed form.
## 2.1 Dirichlet-Based Model
Conventional neural network classifiers typically employ a Softmax layer to provide a point estimation of the categorical distribution. In contrast, Dirichlet-based models (DBM) output the parameters of a Dirichlet distribution and then use it to estimate the categorical distribution. Specifically, for the i-th sample x
(i)(e.g., the i-th word in the NER
task) in the C-class classification task, the DBM
replaces the Softmax of the neural network with an activation function layer (e.g., Softplus) to ensure that the network outputs non-negative values, which are considered as the evidence e
(i) ∈ R
C+
![2_image_0.png](2_image_0.png)
��=
�� �=1 � ��=
��+1
�=1 �
(�� +1)
confidence = max
�∊�
[��]
��=
�
�=1 � ��=
�
�=1 �
(�� +1)
evidence e 2 3
<s> New York City <s> <s> New York
to support the classification. The evidence is then used for constructing a Dirichlet distribution which models the distribution over different classes. To this end, the parameter of a Dirichlet distribution is obtained by: α(i) = e
(i) + 1, where 1 represents the vector of C ones. Finally, the density function of Dirichlet distribution is given by:
$$\mathrm{Dir}({\bf p}^{(i)}|\mathbf{\alpha}^{(i)})={\frac{1}{B(\mathbf{\alpha}^{(i)})}}\prod_{c=1}^{C}p_{c}^{(\alpha_{c}^{(i)}-1)},\quad\quad(1)$$
where B(α(i)) is the C-dimensional multinomial beta function.
To learn model parameters, given the sample
(x
(i), y
(i)), where y
(i)is a one-hot C-dimensional label for sample x
(i), previous EDL methods build the optimization objective by combining a crossentropy classification loss LCLS and a KL penalty loss LKL:
(element-wise) product, which removes the nonmisleading evidence from predicted parameters α(i). Intuitively, the first term in Eq. 2 measures the classification performance while the second term can be seen as a regularization term that penalizes misleading evidences by encouraging the associate distribution to be close to uniform distribution (see more details in Appendix §C.3).
## 2.2 Uncertainty Estimation Of Dbm
Once we obtain the Dirichlet distribution for prediction, we can estimate the predictive uncertainty in a closed form. To this end, EDL provides two probabilities: *belief mass* and *uncertainty mass*. The belief mass b represents the probability of evidence assigned to each category and the uncertainty mass u provides uncertainty estimation. Specifically, for the sample x
(i), the belief mass b
(i)
c and uncertainty u
(i)are computed as:
$$\begin{split}\mathcal{L}_{EDL}^{(i)}&=\mathcal{L}_{CLS}^{(i)}+\mathcal{L}_{KL}^{(i)}\\ &=\underbrace{\sum_{c=1}^{C}y_{c}^{(i)}\left(\psi(S^{(i)})-\psi(\alpha_{c}^{(i)})\right)}_{\text{(a)classification loss}}\\ &\quad+\underbrace{\lambda_{1}KL[\text{Dir}(\mathbf{p}^{(i)}|\widetilde{\mathbf{\alpha}}^{(i)})||\text{Dir}(\mathbf{p}^{(i)}|\mathbf{1})]}_{\text{(b)penalty loss}},\end{split}\tag{2}$$
where ψ(·) is the digamma function, and S
(i) = PC
c=1 α
(i)
c denotes the Dirichlet strength, λ1 is the balance factor, Dir(p
(i)|1) is a special case which is equivalent to the uniform distribution, and αe
(i) = y
(i) + (1 − y
(i)) ⊙ α(i) denotes the masked parameters while ⊙ refers to the Hadamard
$$b_{c}^{(i)}=\frac{e_{c}^{(i)}}{S^{(i)}}\quad\mathrm{and}\quad u^{(i)}=\frac{C}{S^{(i)}},\qquad\mathrm{(3)}$$
with the restrictions that u
that $u^{(i)}+\sum_{c=1}^{C}b_{c}^{(i)}=1$. The
## Belief Mass B And The Uncertainty Mass U Will Be
Used To Guide The Training Process In Our Proposed
Framework (See Section §3.3). 3 E-Ner Architecture
In this section, we describe the three core modules of E-NER and provide an overview of the system architecture in Figure 2. Additionally, we revise the learning strategy of EDL by incorporating importance weights (IW) to address the sparse entities problem and uncertainty mass optimization (UNM)
to model the uncertainty of mispredicted entities.
## 3.1 Ner Feature Extraction
Given a word sequence X = {x
(1)*, ..., x*(n)} and a target sequence Y = {y
(1)*, ..., y*(n)}. To obtain the hidden representation H of X, the words in the sentence X are first preprocessed according to the input form required by the corresponding NER method. Then the processed input is fed into an Encoder module (e.g., BERT (Devlin et al., 2019)) to compute the hidden representation H = Encoder(X), where H ∈ R
n×dh and dh denotes the dimension of the hidden representation.
The input format for NER models can vary depending on the paradigm used. Three NER paradigms were considered for this study: sequence labeling (Figure 2(a)), span-based (Figure 2(b)), and Seq2Seq (Figure 2(c)). The specific formats for these paradigms are provided in the Appendix §A.
Note that in the Seq2Seq (sequence-to-sequence)
paradigm, we choose a pointer-based model (Yan et al., 2021), so that we don't need to learn on the entire vocabulary.
## 3.2 Dirichlet-Based Prediction Layer
Once we obtain the hidden representation, we introduce a Dirichlet-based layer to produce the final predictive distribution. Precisely, for the i th sample, the hidden representation h is fed to the fully connected layer to output logits, and then we can transform the logits into Dirichlet parameters α as described in Section §2.1. Finally, as shown in Figure 2, only one forward step using Eq. 3 is sufficient to calculate the uncertainty u
(i), while the probability distribution p
(i)and prediction y
(i)
are calculated as follows:
p (i) = α(i) S(i) , y(i) = arg max c∈C hp (i) c i. (4)
## 3.3 E-Ner Model Learning
Overview. The objective function of EDL training is to minimize the sum of losses over all words.
Due to the *sparse entities* and *OOV/OOD entities* issues, directly applying EDL to NER leads suboptimal uncertainty estimates. We improve conventional EDL methods by incorporating belief mass and uncertainty into the network training process.
Specifically, two key modifications are introduced:
(1) We compute importance weights for each sample based on the belief mass to reweight the original
![3_image_0.png](3_image_0.png)
b ={0.001,0.968,0.001}|u=0.03
classification loss in Eq. 2(a). (2) We introduce an additional term to increase the uncertainty of mispredicted instances, which explicitly improves the quality of uncertainty estimation and helps OOD
entity detection.
Importance Weight. Due to the inherent imbalance between entities and non-entities in NER
datasets, conventional EDL methods tend to overfit non-entities and assign high uncertainty estimates to entities. To make the training focus more on the entities and increase the evidence corresponding to the ground-truth category, we use the belief mass of the ground-truth category to compute the categorylevel uncertainty for each instance to adjust the loss.
Specifically, for the i th sample, we use (1 − b
(i))
as the category-level uncertainty which serves as the importance weights of entity categories during training. To this end, we replace the ground truth y
(i) of one-hot representation with an importance weight (IW) w(i) = (1 − b
(i)) ⊙ y
(i), and lastly, the Eq. 2(a) is adjusted to:
$${\mathcal{L}}_{I W}^{(i)}=\sum_{c=1}^{C}w_{c}^{(i)}\left(\psi(S^{(i)})-\psi(\alpha_{c}^{(i)})\right).\qquad(5)$$
As illustrated in Figure 3(b), the belief mass of the ground-truth category is high, indicating a high level of certainty in the prediction. In this case, the importance weight (IW) assigned will be small. Conversely, Figure 3(c) presents a small belief mass, indicating an uncertain prediction. IW
will be assigned a large value. In this manner, the training process can focus more on sparse but valuable entities.
Uncertainty Mass Optimization. Assigning high uncertainty to OOV/OOD entities (see Figure 3(d)
as an example) facilitates OOV/OOD entity detection. However, ground-truth OOV/OOD samples are not available during training. One solution is to synthesize such data on the boundary of the indomain region via a generative model (Lee et al.,
2018). In this paper, we propose a more convenient way to treat hard samples as OOV/OOD samples which are often outliers and are mispredicted even after adequate model training. In this way, we enable the model to detect OOV/OOD data.
Specifically, uncertainty mass optimization (UNM)
assigns higher uncertainty to more error-prone samples for the model to express a lack of evidence, by adding an uncertainty mass penalty term LUNM to the wrongly predicted samples:
$${\mathcal{L}}_{U N M}=-\lambda_{2}\sum_{i\in\{{\hat{y}}^{(i)}\neq y^{(i)}\}}\log(u^{(i)}).\qquad(6)$$
The coefficient λ2 = λ0 exp{−(lnλ0/T)t}, where λ2 ∈ [λ0, 1], λ0 ≪ 1 is a small positive constant, t is the current training epoch, and T is the total number of training epochs. As the training epoch t increases towards T, the factor λ2 will increase monotonically from λ0 to 1.0. This allows the network to initially focus on optimizing classification and gradually shift its emphasis towards optimizing UNM as the training progresses.
Overall Loss. The overall loss function combines three components: the importance weighted classification loss LIW , the KL divergence penalty loss LKL, and the uncertainty mass loss LUNM for mispredicted entities. Each element contributes to the overall loss and is defined as follows:
$${\mathcal{L}}_{o v e r a l l}=\sum_{i=1}^{N}({\mathcal{L}}_{I W}^{(i)}+{\mathcal{L}}_{K L}^{(i)})+{\mathcal{L}}_{U N M}.\quad(7)$$
## 4 Experiments 4.1 Research Questions
In this section, we design extensive experiments to validate whether the proposed method obtains high-quality uncertainty estimation. Concretely, the following four research questions will be investigated.
RQ1: Whether E-NER improves the quality of confidence estimation in contrast to prior work?
| Dataset | Sentences | Types | Domain |
|---------------|-------------|---------|----------|
| CoNLL2003 | 22,137 | 4 | Newswire |
| OntoNotes 5.0 | 76,714 | 18 | General |
| WikiGold | 1,696 | 4 | General |
Table 1: Statistics of the NER dataset.
| Dataset | Sentences | Entities | OOV Rate |
|-----------------|-------------|------------|------------|
| TwitterNER | 3257 | 3990 | 0.62 |
| CoNLL2003-Typos | 2676 | 4130 | 0.71 |
| CoNLL2003-OOV | 3685 | 5648 | 0.96 |
Table 2: Statistics of OOV entities in the test set.
RQ2: Can uncertainty provided by E-NER
achieve better OOV/OOD detection performance?
RQ3: Can E-NER improve the model generalization ability on OOV samples?
RQ4: Can E-NER help to find valuable instances to improve the sample efficiency of NER
model training?
Following these four research questions, we provide further discussions on our method including ablation studies and limitations.
## 4.2 Datasets And Metrics
Datasets from Different Domains. To answer the above research questions, we choose three widelyused datasets, including CoNLL2003 (Tjong Kim Sang and De Meulder, 2003), OntoNotes 5.0
(Weischedel et al., 2013)
2and WikiGold (Balasuriya et al., 2009). The statistics are displayed in Table 1.
OOV Datasets. We further choose three public OOV datasets, including TwitterNER (Zhang et al., 2018), CoNLL2003-Typos (Wang et al., 2021), and CoNLL2003-OOV (Wang et al., 2021). The statistics are displayed in Table 2.
Metrics. We evaluate the results using three metrics: F1, Expected Calibration Error (ECE), and Area Under the ROC Curve (AUC). F1 is a commonly used performance indicator in NER. ECE is a metric that measures the confidence calibration of a model, with a low score indicating a wellcalibrated model. AUC is a commonly used metric for evaluating the performance of binary classifiers, and we use it to evaluate the OOV/OOD detection performance. Their detailed computations are described in the Appendix §C.2.
2https://catalog.ldc.upenn.edu/LDC2013T19
Setting Typos OOV OOD
Con Unc Con Unc Con Unc
BERT-Tagger (Devlin et al., 2019) 0.812 0.812 0.689 0.751 0.674 0.756
-EDL 0.805 0.808 0.699 0.759 0.693 0.767
-E-NER(ours) **0.820 0.817 0.700 0.760 0.769 0.799**
SpanNER(Fu et al., 2021) 0.717 0.783 0.614 0.773 0.623 0.799
-EDL 0.701 0.759 0.607 0.760 0.620 0.792
-E-NER(ours) **0.741 0.792 0.640 0.796 0.676 0.824**
Seq2Seq (Yan et al., 2021) 0.825 0.833 0.724 0.794 0.797 0.820
-EDL **0.829** 0.830 0.729 0.787 0.793 0.818
-E-NER(ours) 0.824 **0.841 0.743 0.803 0.822 0.847**
Setting CoNLL2003 OntoNotes 5.0
F1(↑) ECE(↓) F1(↑) ECE(↓)
BERT-Tagger 91.32 0.0845 88.20 0.1053
-EDL 91.36 0.0755 88.09 0.0838
-E-NER(ours) **91.55 0.0739 88.74 0.0603**
SpanNER 91.94 0.0673 87.82 0.0609
-EDL 91.97 0.0481 87.39 0.0474
-E-NER(ours) **92.06 0.0414 88.44 0.0434**
Seq2Seq 93.05 0.0324 89.89 0.0375
-EDL 92.84 0.0322 90.22 0.0329
-E-NER(ours) **93.15 0.0225 90.64 0.0328**
## 4.3 Experiment Setting
We conduct experiments on three popular NER
paradigms: sequence labeling, span-based, and Seq2Seq. The following three models are chosen for evaluating each paradigm.
BERT-Tagger (Devlin et al., 2019). It follows the classical paradigm, recognizing entities via *sequence labeling*.
SpanNER3(Fu et al., 2021). It enumerates all spans and detects entities from them. For simplicity, we use the original span-based method, without any constraints or data processing.
Seq2Seq4(Yan et al., 2021). It is a generative model based on BART, which does not require additional labeling strategies and entity enumeration.
In the experiments, all the reported results are the average of five runs. The experiment details are introduced in Appendix §C.
![5_image_0.png](5_image_0.png)
## 4.4 Research Question Discussions 4.4.1 Confidence Estimation Quality
To answer the first research question, an important concept should be clarified, i.e., *what is qualified* confidence? This concept should have a positive correlation with performance, meaning that higher confidence should indicate better performance and vice versa, as depicted by the dashed line in Figure 4. Our findings reveal that on both datasets, Softmax is far below the perfectly calibrated line, indicating that confidence does not reflect performance well, and it is an example of *over-confidence*.
However, E-NER is found to approach the perfect calibrated line. This suggests that E-NER can produce well-qualified confidence.
We further evaluate all paradigms and present the results in Table 4. It can be observed that E-NER
consistently performs the best across all paradigms.
This demonstrates that E-NER can be effectively applied in various frameworks. When comparing EDL to the original models, it is observed that while EDL improves confidence estimation, it also Table 5: Evaluation results of generalization on OOV
samples in terms of F1 (%). To compare fairly, we also choose SpanNER as the basic encoder.
results in a decline in performance. For example, on OntoNotes 5.0 dataset, EDL performs worse than BERT-Tagger and SpanNER in terms of the F1 metric. This highlights the limitations of directly applying the EDL approach. In contrast, E-NER performs the best on both metrics, demonstrating that it can provide better-qualified confidence without negatively impacting performance, and even achieving slight improvements in all settings. A
typical reliability diagram is also included in Appendix §B.1 for a more detailed representation.
| Methods | TwitterNER CoNLL2003 Typos OOV | | |
|------------------------------|----------------------------------|-------|-------|
| VaniIB (Alemi et al., 2017) | 71.19 | 83.49 | 70.12 |
| DataAug (Dai and Adel, 2020) | 73.69 | 81.73 | 69.60 |
| SpanNER (BERT large) | 71.57 | 81.83 | 64.43 |
| SpanNER (RoBERTa large) | 71.70 | 82.85 | 64.70 |
| SpanNER (AlBERT large) | 70.33 | 82.49 | 64.12 |
| EDL-SpanNER (BERT large) | 74.14 | 82.89 | 68.40 |
| E-SpanNER (BERT base) | 74.94 | 83.31 | 67.99 |
| E-SpanNER (BERT large) | 75.64 | 83.64 | 69.71 |
| ∆ E-NER-NER vs. SpanNER | 4.07↑ | 1.81↑ | 5.28↑ |
## 4.4.2 Oov/Ood Detection
The typical usage of uncertainty is to detect whether an instance is OOV/OOD or not, as large uncertainty tends to reveal unnatural instances, such as OOV and OOD. To evaluate uncertainty from this usage (RQ2), we choose three binary detection tasks, including typos, OOV, and OOD. The results are shown in Table 3.
Firstly, it can be observed that, when compared to the original model of each paradigm, EDL does not improve the performances in most experiments of the three paradigms. This verifies that EDL is not effective in addressing the *OOV/OOD entity discrimination* challenge of NER. Then we found that E-NER significantly outperforms the original models and EDL in various paradigms. In particular, in span-based OOD detection, E-NER outperforms SpanNER by +5.3% and EDL by +5.6% on AUC
when using confidence for detection. This demonstrates the effectiveness of E-NER in distinguishing whether an entity is OOV/OOD or not. Note that using uncertainty is better than using confidence for OOV/OOD detection in most cases.
Setting CoNLL2003 OntoNotes 5.0
Ratio F1(↑) Ratio F1(↑)
Random 5.5% 85.39 3.0% 79.47
Entropy 5.5% 88.29 3.0% 84.80 MC dropout 5.5% 88.67 3.0% 86.06
EDL 5.5% 90.51 3.0% 86.25
E-NER 5.5% **90.88** 3.0% **86.68**
Table 7: Evaluation results of cross-domain data selection in terms of F1 (%). The left side of the arrow ←
is the target domain, and the right side is the source domain.
## 4.4.3 Generalization On Oov Samples
| Setting | WikiGold←CoNLL. | CoNLL2003←Onto. | | |
|------------|-------------------|-------------------|-------|-------|
| Ratio | F1(↑) | Ratio | F1(↑) | |
| Random | 4.8% | 53.67 | 4.7% | 84.23 |
| Entropy | 4.8% | 80.63 | 4.7% | 88.81 |
| MC dropout | 4.8% | 82.87 | 4.7% | 90.32 |
| EDL | 4.8% | 83.32 | 4.7% | 90.12 |
| E-NER | 4.8% | 84.08 | 4.7% | 90.52 |
Another benefit of well-qualified confidence is the robustness to noise, since the model is properly calibrated without over or under-confidence. Thus, we further investigate E-NER's generalizing ability on OOV samples (RQ3). The results on three OOV datasets are reported in Table 5.
It is first observed that E-NER (BERT large)
achieves the best performances on TwitterNER
and CoNLL2003-Typos datasets, and competitive performance on CoNLL2003-OOV. Compared with a strong baseline SpanNER (BERT large), ENER (BERT large) significantly outperforms it by
+4.07%, +1.81% and +5.28% on three datasets, respectively. This validates the generalizing ability of our approach. Secondly, by comparing EDL (BERT large) and E-NER s(BERT large), our method also achieves consistently better performances. This further validates that our proposed two uncertainty-guided loss terms effectively promote the robustness against OOV samples.
## 4.4.4 Sample Efficiency
In active learning, a sample's uncertainty can be utilized for data selection. Then whether the selected samples are valuable also suggests the quality of uncertainty. To evaluate E-NER from this perspec-
| Setting | CoNLL2003 | OntoNotes 5.0 | | |
|-----------|-------------|-----------------|-------|-------|
| F1 | ECE | F1 | ECE | |
| E-NER | 92.06 | 0.041 | 88.44 | 0.043 |
| -UNM | 92.10 | 0.058 | 88.21 | 0.051 |
| -IW | 91.95 | 0.045 | 87.77 | 0.042 |
tive (RQ4), we design in-domain and cross-domain sample selection experiments. The results are displayed in Table 6 and Table 7, respectively.
It is found that using the same scale of samples, E-NER achieves consistently the best performances in both the in-domain and cross-domain settings.
This verifies that uncertainty predicted by E-NER
has better quality. Concretely, MC dropout attains uncertainty with multiple runs of sub-models, which costs time and memory. Though outperforming naive random selection and entropy of softmax, MC dropout is still less performed than EDL and E-NER, which both directly compute the uncertainty in one forward pass. Then we see that EDL does not always outperform MC dropout, as the cross-domain experiment CoNLL2003←Onto shown. Yet E-NER, concentrating on two issues of NER task, is universally effective, and can better handle the challenges of an open environment.
## 4.5 Further Analysis
Ablation Study. To explore the effects of individual loss terms, the ablation study is presented in Table 8. It is observed that removing each loss term would cause performance declines in most evaluation metrics. Concretely, removing IW causes the F1 score to decrease more than removing UNM. On the contrary, removing UNM makes a significant degradation in ECE. Overall, this study indicates that the proposed uncertainty-guided terms are both effective.
Why E-NER Works. We incorporate two uncertainty-guided loss terms into EDL. Firstly, IW is designed for sparse entities which leads to an imbalance problem. Using uncertainties as weights helps the model training to pay more attention to entities of interest. As reported in Table 8, IW is effective in improving the F1 score. Secondly, UNM is proposed to deal with OOV/OOD entities.
Such entities should have larger uncertainties compared to normal ones, however, naive EDL does not model this explicitly. E-NER increases the uncertainty of mispredictions which are relatively close to OOV/OOD entities. As shown in Table 8, UNM
helps to improve the quality of uncertainty estimation. These two uncertainty-guided loss terms target different NER issues, and using uncertainty
(IW) and learning uncertainty (UNM) interactively allows E-NER to perform well in various experimental settings. Furthermore, we showcase actual predictions in Appendix §B.2.
## 5 Related Work
NER Paradigm. NER is a fundamental task in information extraction. The mainstream methods of NER can be divided into three categories: sequence labeling, span-based, and Seq2Seq. Sequence labeling methods assign a label to each token in a sentence to identify flat entities, and are better at handling longer entities with lower label consistency (Fu et al., 2021). Span-based methods, which enumerate and classify entity sets in a sentence according to the maximum span length, perform better on sentences with OOV words and entities of medium length (Alemi et al., 2017; Dai and Adel, 2020; Fu et al., 2021). Seq2Seq methods directly generate the entities and corresponding labels in the sentence, and are capable of handling various NER subtasks uniformly (Yan et al., 2021).
Recently, NER systems are undergoing a paradigm shift (Akbik et al., 2018; Yan et al., 2019), using one paradigm to handle multiple types of NER tasks. Zhang et al. (2022) analysis the incorrect bias in Seq2Seq from the perspective of causality, and designed a data augmentation method based on the theory of backdoor adjustment, making Seq2Seq more suitable for unified NER tasks.
Uncertainty Estimation. Bayesian deep learning uses Bayesian principles to estimate uncertainty in DNN parameters. However, modeling uncertainty in network parameters does not guarantee accurate estimation of predictive uncertainty (Sensoy et al., 2021). Recently, there has been a trend in using the output of neural networks to estimate the parameters of the Dirichlet distribution for uncertainty estimation (Sensoy et al., 2018; Malinin and Gales, 2018). The EDL (Sensoy et al., 2018) has the advantages of generalizability and low computational cost, making it applicable to various tasks
(Han et al., 2021; Hu and Khan, 2021). However, their uncertainty estimates have difficulty expressing uncertainties outside the domain (Amini et al.,
2020; Hu and Khan, 2021). In contrast, the Prior Networks (Malinin and Gales, 2018) require OOD
data during training to distinguish in-distribution
(ID) and OOD data. When the NER model encounters unseen entities (e.g., OOV and OOD),
it is easy to make unreliable predictions, which are often considered from the perspective of data augmentation or information theory (Fukuda et al.,
2020; Wang et al., 2022), but there is no guarantee that these methods will achieve a balance between performance and robustness.
## 6 Conclusion
In this work, we study the problem of trustworthy NER by leveraging evidential deep learning. To address the issues of *sparse entities* and OOV/OOD
entities, we propose E-NER with two uncertaintyguided loss terms. Extensive experimental results demonstrate that the proposed method can be effectively applied to various NER paradigms. The uncertainty estimation quality of E-NER is improved without harming performance. Additionally, the well-qualified uncertainties contribute to detecting OOV/OOD, generalization, and sample selection.
These results validate the superiority of E-NER on real-world problems.
## Limitations
Our work is the first attempt to explore how evidential deep learning can be used to improve the reliability of current NER models. Despite the improved performance and robustness, our work has limitations that may guide our future work.
First, we propose a simple method to treat hard samples (such as outliers) in the dataset as OOV/OOD samples, enabling the model to detect OOV/OOD data with minimal cost. However, there is still a certain gap between these hard samples and the real OOV/OOD data. OOV/OOD detection performance can still be improved by further incorporating more real OOV/OOD samples, for example, real OOD data from other domains, well-designed adversarial examples, generated OOV samples by data augmentation techniques, etc.
Second, we evaluate the versatility of E-NER
by applying it to mainstream NER paradigms. However, there are still other paradigms, such as Hypergraph-based methods (Lu and Roth, 2015)
and the W2NER (Li et al., 2022) approach in recent work, that could be evaluated in the future.
## Acknowledgements
We sincerely thank all the anonymous reviewers for providing valuable feedback. This work is supported by the youth program of National Science Fund of Tianjin, China (Grant No. 22JCQNJC01340), the Fundamental Research Funds for the Central University, Nankai University (Grant No. 63221028), and the key program of National Science Fund of Tianjin, China (Grant No.
21JCZDJC00130)
## References
Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018.
Contextual string embeddings for sequence labeling. In *Proceedings of the 27th International Conference on Computational Linguistics (COLING)*, pages 1638–1649.
Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, and Kevin Murphy. 2017. Deep variational information bottleneck. In *International Conference on Learning* Representations (ICLR), pages 1–19.
Alexander Amini, Wilko Schwarting, Ava Soleimany, and Daniela Rus. 2020. Deep evidential regression.
In *Advances in Neural Information Processing Systems (NeurIPS)*, pages 14927–14937.
Dominic Balasuriya, Nicky Ringland, Joel Nothman, Tara Murphy, and James R. Curran. 2009. Named entity recognition in Wikipedia. In Proceedings of the 2009 Workshop on The People's Web Meets NLP:
Collaboratively Constructed Semantic Resources
(People's Web), pages 10–18.
Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. 2015. Weight uncertainty in neural network. In *International* conference on machine learning (ICML), pages 1613–1622.
Bertrand Charpentier, Daniel Zügner, and Stephan Günnemann. 2020. Posterior network: Uncertainty estimation without ood samples via density-based pseudo-counts. In Advances in Neural Information Processing Systems (NeurIPS), pages 1356–1367.
Xiang Dai and Heike Adel. 2020. An analysis of simple data augmentation for named entity recognition. In Proceedings of the 28th International Conference on Computational Linguistics (COLING), pages 3861–
3867.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL), pages 4171–4186.
Jinlan Fu, Xuanjing Huang, and Pengfei Liu. 2021.
SpanNER: Named entity re-/recognition as span prediction. In *Proceedings of the 59th Annual Meeting* of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP), pages 7183–
7195.
Nobukazu Fukuda, Naoki Yoshinaga, and Masaru Kitsuregawa. 2020. Robust Backed-off Estimation of Out-of-Vocabulary Embeddings. In *Findings of the* Association for Computational Linguistics: EMNLP
2020, pages 4827–4838.
Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In international conference on machine learning (ICML), pages 1050–1059.
Alex Graves. 2011. Practical variational inference for neural networks. In *Advances in neural information* processing systems (NeurIPS), page 2348–2356.
Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On calibration of modern neural networks. In *Proceedings of the 34th International Conference on Machine Learning (ICML)*, pages 1321–
1330.
Zongbo Han, Changqing Zhang, Huazhu Fu, and Joey Tianyi Zhou. 2021. Trusted multi-view classification. In International Conference on Learning Representations (ICLR), pages 1–16.
Yibo Hu and Latifur Khan. 2021. Uncertainty-aware reliable text classification. In *Proceedings of the 27th* ACM SIGKDD Conference on Knowledge Discovery
& Data Mining (SIGKDD), pages 628–636.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In *International* Conference on Learning Representations (ICLR),
pages 1–15.
Durk P Kingma, Tim Salimans, and Max Welling. 2015.
Variational dropout and the local reparameterization trick. In Advances in neural information processing systems (NeurIPS), pages 2575–2583.
Anna-Kathrin Kopetzki, Bertrand Charpentier, Daniel Zügner, Sandhya Giri, and Stephan Günnemann.
2021. Evaluating robustness of predictive uncertainty estimation: Are dirichlet-based models reliable? In International Conference on Machine Learning (ICML), pages 5707–5718.
Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016.
Neural architectures for named entity recognition.
In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
(NAACL), pages 260–270.
Kimin Lee, Honglak Lee, Kibok Lee, and Jinwoo Shin. 2018. Training confidence-calibrated classifiers for detecting out-of-distribution samples. In International Conference on Learning Representations (ICLR), pages 1–16.
Jingye Li, Hao Fei, Jiang Liu, Shengqiong Wu, Meishan Zhang, Chong Teng, Donghong Ji, and Fei Li. 2022.
Unified named entity recognition as word-word relation classification. In Proceedings of the AAAI
Conference on Artificial Intelligence(AAAI), pages 10965–10973.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *International Conference on Learning Representations (ICLR)*, pages 1–18.
Wei Lu and Dan Roth. 2015. Joint mention extraction and classification with mention hypergraphs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 857–867.
Andrey Malinin and Mark Gales. 2018. Predictive uncertainty estimation via prior networks. In *Advances* in neural information processing systems (NeurIPS), page 7047–7058.
Francesco Pinto, Philip HS Torr, and Puneet K Dokania.
2022. An impartial take to the cnn vs transformer robustness contest. In *European Conference on Computer Vision (ECCV)*, pages 466–480.
Murat Sensoy, Lance M. Kaplan, and Melih Kandemir.
2018. Evidential deep learning to quantify classification uncertainty. In *Advances in Neural Information* Processing Systems (NeurIPS), page 3183–3193.
Murat Sensoy, Maryam Saleki, Simon Julier, Reyhan Aydogan, and John Reid. 2021. Misclassification risk and uncertainty quantification in deep classifiers.
In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 2484–2492.
Erik F. Tjong Kim Sang and Fien De Meulder.
2003. Introduction to the CoNLL-2003 shared task:
Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003 (HLTNAACL), pages 142–147.
Xiao Wang, Shihan Dou, Limao Xiong, Yicheng Zou, Qi Zhang, Tao Gui, Liang Qiao, Zhanzhan Cheng, and Xuanjing Huang. 2022. MINER: Improving out-of-vocabulary named entity recognition from an information theoretic perspective. In *Proceedings* of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
(ACL), pages 5590–5600.
Xiao Wang, Qin Liu, Tao Gui, Qi Zhang, Yicheng Zou, Xin Zhou, Jiacheng Ye, Yongxin Zhang, Rui Zheng, and Zexiong and Pang. 2021. TextFlint: Unified multilingual robustness evaluation toolkit for natural language processing. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations (ACL-IJCNLP), pages 347–355.
Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, and Ann Houston. 2013. OntoNotes Release 5.0. In 3. Abacus Data Network.
Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, and Yuji Matsumoto. 2020. LUKE: Deep contextualized entity representations with entityaware self-attention. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6442–6454.
Hang Yan, Bocao Deng, Xiaonan Li, and Xipeng Qiu.
2019. TENER: adapting transformer encoder for named entity recognition. *CoRR*, abs/1911.04474.
Hang Yan, Tao Gui, Junqi Dai, Qipeng Guo, Zheng Zhang, and Xipeng Qiu. 2021. A unified generative framework for various NER subtasks. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(ACL-IJCNLP), pages 5808–5822.
Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020.
Named entity recognition as dependency parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), pages 6470–6476.
Qi Zhang, Jinlan Fu, Xiaoyu Liu, and Xuanjing Huang.
2018. Adaptive co-attention network for named entity recognition in tweets. In Thirty-Second AAAI
Conference on Artificial Intelligence (AAAI), page 5674–5681.
Shuai Zhang, Yongliang Shen, Zeqi Tan, Yiquan Wu, and Weiming Lu. 2022. De-bias for generative extraction in unified NER task. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (ACL),
pages 808–818.
Enwei Zhu and Jinpeng Li. 2022. Boundary smoothing for named entity recognition. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL), pages 7096–7108.
| BERT-Tagger | SpanNER | Seq2Seq | |
|-------------------------------|--------------------------------------------|-------------------------------|------------------------------------------|
| (1), x(2), ..., x(n) } | X = {x (1), x(2), ..., x(n) } | X = {x (1), x(2), ..., x(n) } | |
| Input | X = {x | | |
| Processing | - | Enumerate all spans | Obtain start and end indexes of entities |
| S = {s (1), s(2), ..., s(m) } | Y = {y 1, ye b 1, y1, ..., yb k, ye k, yk} | | |
| Hidden state | h = Encoder(X); | (i) ); | ht = EncoderDecoder(X, Y<t); |
| h = Encoder(s | | | |
| n×d | d | d | |
| h ∈ R | h ∈ R | ht ∈ R | |
| Inference | Token-level classification | Span-level classification | Target sequence Y generation |
Table 9: Explanation of the three NER paradigms.
![11_image_1.png](11_image_1.png)
## A Ner Paradigms
Here we introduce three popular NER paradigms, shown in Table 9.
BERT-Tagger. It follows the sequence labeling paradigm, which aims to assign a tagging label Y = {y
(1)*, ..., y*(n)} to each word in a sequence X = {x
(1)*, ..., x*(n)}. We use BERT-Tagger (Devlin et al., 2019) as the baseline method for sequence labeling. The labeling method adopts a BIO
tag set, which indicates the beginning and interior of an entity, or other words. X is fed to BERT to obtain hidden states, followed by a nonlinear classifier to classify each word.
SpanNER. Given an input sentence X =
{x 1*, ..., x*n}, SpanNER enumerates all spans and obtains a set S = {s
(1), ..., s(i)*, ..., s*(m)}. Then it assigns each span an entity label y (Fu et al.,
2021). The maximum length l of the span is artificially set. Assume a sentence's length is n and the maximum span length is set to 2, the subscript of the span set can be expressed as
{(1, 1),(1, 2)...(n − 1, n − 1),(n − 1, n),(*n, n*)}.
Each span is fed into the encoder to obtain a vector representation.
Seq2Seq. As presented in Table 9, given an input sentence X = {x
(1), x(2)*, ..., x*(n)},
the target sequence is represented as Y =
{y b1
, ye 1
, y1*, ..., y*bk
, yek
, yk}. This target sequence indicates X describes k entities. Take the first entity as an example, its beginning and end indexes are y b1 and y e 1
, with entity category y1. This method learns in a sequence-to-sequence manner
(Yan et al., 2021).
## B Additional Experimental Analysis B.1 Reliability Diagrams
We further depict the reliability diagrams to evaluate the quality of uncertainty estimation. As shown
(a) CoNLL2003 Softmax (b) CoNLL2003 E-NER
![11_image_0.png](11_image_0.png)
(c) OntoNotes 5.0 Softmax (d) OntoNotes 5.0 E-NER
![11_image_2.png](11_image_2.png)
in Figure 5 and Figure 6, the confidence range is equally divided into ten bins. Then the subset within the same confidence range is utilized to compute the accuracy.
As shown in Figure 5, the confidence of Softmax represents poor accuracy, indicating it is overconfident. Then compared with Softmax, E-NER
nearly approaches the perfectly calibrated line and
| Case | Sentence | Softmax+Entropy | E-NER | | | | | | |
|----------------------------------------------------------------|--------------------------------------------------------------------|-----------------------------------|---------|--------|-------|---------|------------|-----|-----|
| * | Mapping: {MIS: miscellaneous; PER: person; | Entity: {Predcition; Confidence%; | | | | | | | |
| ORG: organization; O: non-entity} | Uncertainty%} | | | | | | | | |
| IID | A visit to the computer centre offering InternetE [MIS] services 1 | 1 | 1 | | | | | | |
| E {O ; 99.9 ; 8.0} | E {O ; 42.0 ; 70.8} | | | | | | | | |
| found a EuropeanE [MIS] official clicking away on his mouse. 2 | E 2 | E 2 | | | | | | | |
| {MIS ; 99.9 ; 3.0} | {MIS ; 92.7 ; 8.9} | | | | | | | | |
| 1 | | | | | | | | | |
| IIID | LazioE [ORG] | have | injury | doubts | about | striker | P ierluigi | E 1 | E 1 |
| {O ; 98.8 ; 7.3} | {ORG ; 88.9 ; 12.5} | | | | | | | | |
| 2 | E 2 | E 2 | | | | | | | |
| CasiraghE [PER]. | {PER ; 99.9 ; 0.4} | {PER ; 98.3 ; 2.3} | | | | | | | |
| IIIOOV | But the InthrnetE [MIS] , a global computer network. | E | | | | | | | |
| 1 | 1 {O ; 90.5 ; 23.1} | E 1 {MIS ;28.1 ; 70.0} | | | | | | | |
| IVOOD | Redesignated 65 F ighter W ingE [ORG] on 24 July 1943. | E | | | | | | | |
| 1 | 1 {O ; 99.2 ; 4.6} | E 1 {O ; 51.3 ; 60.7} | | | | | | | |
Table 10: Case study of Softmax and E-NER under the span-based paradigm. The entities and their categories are already denoted in four sentences. The predicted entities with confidence (%) and uncertainty (%) scores are also presented. Incorrectly predicted entities are denoted by "Red E", whereas "Blue E" represents correctly predicted entities.
has a much smaller ECE score. This suggests that E-NER can yield well-qualified confidence, showing it is more trustworthy. Then the observations in Figure 6 are similar, which demonstrates the reliability of the proposed approach for OOD entities.
## B.2 Case Study
As presented in Table 10, we conduct a case study by choosing four typical cases, including ID, OOV,
and OOD samples. The uncertainty of Softmax is computed with entropy.
The first case contains two MIS entities. Softmax and E-NER both wrongly predict the first entity to O category, with confidence scores of 99.9%
and 42.0%, respectively. This shows that Softmax is over-confident even for error results. Yet E-NER
can output a larger uncertainty score, suggesting unsure towards the prediction. Then the second case describes two entities. Softmax wrongly predicts the first ORG entity to O with large confidence, i.e.
98.8%. But E-NER can correctly detect the entity category as ORG.
Moreover, *Inthrnet* in the third sentence is a MIS entity, which is OOV due to misspelling. Softmax detects it as O with a confidence score of 90.5%, showing over-confident for errors. On the contrary, E-NER assigns a large uncertainty score for the OOV sample and correctly predicts the entity category. Similarly, the last case describes an OOD entity. It can be observed that E-NER outputs a large uncertainty score compared with Softmax.
Based on the cases and observations, we draw the following conclusions: 1) Softmax is overconfident, even for error prediction, OOV and OOD samples; 2) E-NER can recognize entities accurately and yield well-qualified uncertainties towards error, OOV and OOD samples. This contributes to the reliability and robustness of E-NER.
## C Implementation Details C.1 Model Parameters
In this paper, we implement three NER methods, including BERT-Tagger, SpanNER and Seq2Seq.
The testing set is evaluated by the best model chosen by the development set. The implementation details are shown as follows.
BERT-Tagger. BERT-Tagger5adopts BERT-largecased as the base encoder (Devlin et al., 2019). We set the dropout rate as 0.2, the training batch size as 16, and the weight decay as 0.02. All models in this paradigm use the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 2e-5. Sentences are truncated to a maximum length of 256. The initial value for λ0 is set to 1e-02.
SpanNER. Following the original SpanNER6(Fu et al., 2021), we adopt BERT-large-uncased as the base encoder (Devlin et al., 2019). The dropout rate is set to 0.2. All models in this paradigm are trained using the AdamW optimizer (Loshchilov and Hutter, 2019) with a learning rate of 1e-5, with the training batch size as 10. To improve training efficiency, sentences are truncated to a maximum length of 128, and the maximum length of span enumeration is set to 4. The sampling times for MC dropout are set to 5 in the experiments. The 5https://github.com/google-research/bert 6https://github.com/neulab/spanner.
initial value of λ0 is set to 1e-02. We use heuristic decoding and retain the highest probability span for flattened entity recognition in span-based methods.
Seq2Seq. Following Yan et al. (2021), we exploit BART-Large model7. BART model is fine-tuned with the slanted triangular learning rate warmup.
The warmup step is set to 0.01. The training batch size is set to 16. The initial value of λ0 is set to 1e-3.
## C.2 Evaluation Metrics
ECE. It denotes the expected calibration error, which aims to evaluate the expected difference between model prediction confidence and accuracy
(Guo et al., 2017). Figure 6 depicts the difference in a geometric manner. The concrete formulation is as follows:
$$\mathrm{ECE}=\sum_{i=1}^{|B|}{\frac{N_{i}}{N}}|\mathrm{acc}(b_{i})-\mathrm{conf}(b_{i})|,\tag{8}$$
where bi represents the i-th bin and |B| represents the total number of bins, setting to 10 in our experiment. N denotes the number of total samples. Ni represents the number of samples in the i-th bin.
acc(bi) denotes the accuracy and conf(bi) denotes the average of confidences in the i-th bin.
AUC. The area under the curve (AUC)8is a commonly used metric for evaluating the performance of binary classifiers. The formulation is as follows:
$$\text{AUC}(f)=\frac{\sum_{t_{0}\in\mathcal{D}^{0}}\sum_{t_{1}\in\mathcal{D}^{1}}\mathbf{1}[f(t_{0})<f(t_{1})]}{|\mathcal{D}^{0}|\cdot|\mathcal{D}^{1}|}\tag{9}$$ where $\mathcal{D}^{0}$ is the set of vertices over $\mathbf{1}$ and $\mathcal{D}^{1}$
where D0is the set of negative examples, and D1 is the set of positive examples. 1[f(t0) < f(t1)]
denotes an indicator function which returns 1 if f(t0) < f(t1) otherwise return 0.
In this paper, we evaluate the performance of OOV/OOD detection using the AUC metric.
Specifically, we consider two settings for the AUC score:
- Con. It uses confidence as a classifier. The correct entity recognition is a positive example D1, and the entity recognition error is a negative example D0.
- Unc. It uses uncertainty as a classifier. Wrong prediction results of OOV/OOD entities are 7https://github.com/yhcc/BARTNER
8sklearn.metrics.auc.html.
considered positive examples, denoted as D1. Correct prediction results of in-domain entities are considered negative examples, recorded as D0. These metrics assess the classifier's capability in detecting OOV/OOD entities.
## C.3 Edl Optimization Function
In this section, we give a detailed formulation of the EDL optimization function. Eq. 1 introduces the density of the Dirichlet distribution. As the classification loss item of EDL, its cross-entropy loss function is as follows:
loss function is as follows: $$\begin{split}\mathcal{L}_{CLS}^{(i)}&=\frac{\int\left[\sum_{c=1}^{C}-y_{c}^{(i)}\text{log}(p_{c}^{(i)})\right]}{B(\boldsymbol{\alpha}^{(i)})}\prod_{c=1}^{C}p_{c}^{\alpha(i)-1}d\boldsymbol{p}^{(i)}\\ &=\sum_{c=1}^{C}y_{c}^{(i)}\left(\psi(S^{(i)})-\psi(\alpha_{c}^{(i)})\right).\end{split}\tag{10}$$ The KL divergence calculation function under
the Dirichlet distribution takes the following form and serves as the category penalty term in EDL:
$$\mathcal{L}_{KL}^{(i)}=KL[\text{Dir}(\mathbf{p}^{(i)}|\widetilde{\boldsymbol{\alpha}}^{(i)})||\text{Dir}(\mathbf{p}^{(i)}|\mathbf{1})]$$ $$=\log\left(\frac{\Gamma(\sum_{c=1}^{C}\widetilde{\alpha}_{c}^{(i)})}{\Gamma(C)\prod_{c=1}^{C}\Gamma(\widetilde{\alpha}_{c}^{(i)})}\right)$$ $$+\sum_{c=1}^{C}(\widetilde{\alpha}_{c}^{(i)}-1)\left[\psi(\mathcal{S}^{(i)})-\psi(\sum_{j=1}^{C}\widetilde{\alpha}_{j}^{(i)})\right].\tag{11}$$ Finally, we get the loss function for overall EDL
learning:
$${\mathcal{L}}_{E D L}=\sum_{i=1}^{N}({\mathcal{L}}_{C L S}^{(i)}+{\mathcal{L}}_{K L}^{(i)})\qquad(12)$$
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section §*Limitations A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section §1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section §2 And Section §4
✓ B1. Did you cite the creators of artifacts you used?
Section §2 , Section §4 and Section §6
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section §4 B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section §4.2
## C ✓ **Did You Run Computational Experiments?** Section §4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section §4 and Section §C.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section §4 and Section §C.1
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section §4 and Section §C.1
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section §4 and Section §C.1 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
ramos-etal-2023-lmcap | {LMC}ap: Few-shot Multilingual Image Captioning by Retrieval Augmented Language Model Prompting | https://aclanthology.org/2023.findings-acl.104 | Multilingual image captioning has recently been tackled by training with large-scale machine translated data, which is an expensive, noisy, and time-consuming process. Without requiring any multilingual caption data, we propose LMCap, an image-blind few-shot multilingual captioning model that works by prompting a language model with retrieved captions. Specifically, instead of following the standard encoder-decoder paradigm, given an image, LMCap first retrieves the captions of similar images using a multilingual CLIP encoder. These captions are then combined into a prompt for an XGLM decoder, in order to generate captions in the desired language. In other words, the generation model does not directly process the image, instead it processes retrieved captions. Experiments on the XM3600 dataset of geographically diverse images show that our model is competitive with fully-supervised multilingual captioning models, without requiring any supervised training on any captioning data. |
## Lmcap**: Few-Shot Multilingual Image Captioning By** Retrieval Augmented Language Model Prompting
Rita Ramos† Bruno Martins† **Desmond Elliott**⋆
†INESC-ID, Instituto Superior Técnico, University of Lisbon
⋆Department of Computer Science, University of Copenhagen [email protected]
## Abstract
Multilingual image captioning has recently been tackled by training with large-scale machine translated data, which is an expensive, noisy, and time-consuming process. Without requiring any multilingual caption data, we propose LMCAP, an *image-blind* few-shot multilingual captioning model that works by prompting a language model with retrieved captions.
Specifically, instead of following the standard encoder-decoder paradigm, given an image, LMCAP first retrieves the captions of similar images using a multilingual CLIP encoder.
These captions are then combined into a prompt for an XGLM decoder, in order to generate captions in the desired language. In other words, the generation model does not directly process the image, instead processing retrieved captions. Experiments on the XM3600 dataset of geographically diverse images show that our model is competitive with fully-supervised multilingual captioning models, without requiring any supervised training on any captioning data.
## 1 Introduction
The task of image captioning has witnessed impressive performance gains with the trend of large-scale encoder-decoder models and vision-and-language pre-training (Li et al., 2022; Wang et al., 2021; Hu et al., 2022; Wang et al., 2022). Despite all of this progress, existing models are mostly available on English or are specialised for other high-resource languages. This limits the access to the technology for a broader range of languages that exist in the world. Moreover, the current mainstream trend results in design decisions and methods that may only work well for English-centric datasets or the couple of languages for which captioning data is available (Ruder, 2020). There is a need to develop multilingual image captioning models that can serve speakers of different languages.
Still, scaling captioning models to a wide variety of languages involves different challenges. One major limitation is the lack of multilingual imagecaption pairs of clean labelled data for training the models. One possible solution is to automatically translate the existing English datasets (Thapliyal et al., 2022). While effective, this approach can result in models that learn translation artefacts, and perpetuates an English-centric perspective instead of encouraging the use of geographically diverse concepts that are not overly specific to the western culture (Liu et al., 2021). Moreover, with or without automatic translations, training captioning models with multilingual data can be expensive, given the amount of data and number of parameters needed to mitigate the *curse of multilinguality* (Conneau et al., 2019; Goyal et al., 2021).
This paper presents LMCAP, an *image-blind* multilingual image captioning model that does not require any training specific for image captioning.
We propose an efficient method that reuses a pretrained multilingual language model and adapts it to the vision-and-language captioning setting. Our work is motivated by the recent "Socratic Models" framework (Zeng et al., 2022), in which different models can be combined through text prompting
(e.g., image captioning can be achieved by prompting a language model with a set of visual concepts extracted from the predictions of a vision model). Different from the original Socratic Models, our approach is inspired by retrieval-augmented generation (Lewis et al., 2020; Izacard et al., 2022).
Specifically, a multilingual language model generates captions given a prompt consisting of the captions retrieved from similar images, and a demonstration of how to produce a caption in the desired language. We note here that this is an *image-blind* approach, i.e. the language model producing the caption does not actually process the image.
Our main contributions are as follows: (1) We propose a few-shot multilingual image captioning approach named LMCAP, that re-uses pre-trained models without requiring any training specific for image captioning; (2) To the best of our knowledge, LMCAP is the first captioning model with retrieval-augmented generation in a multilingual setting, and in a few-shot setting of captioning; (3)
We report on experiments with the XM3600 benchmark (Thapliyal et al., 2022) of human-authored captions and geographic diverse images, demonstrating that LMCAP exhibits strong few-shot performance on a wide variety of languages; (4) We further show that LMCAP performs substantially better than the original Socratic Models. Moreover, instead of only achieving competitive performance against other zero-shot models, LMCAP can also compete with a large-scale supervised state-of-art captioning model.
## 2 Background And Related Work
Image Captioning: The task of automatically generating textual descriptions for input images has been largely explored in English, while multilingual image captioning has only been addressed in a couple of studies (Gu et al., 2018; Thapliyal et al., 2022; Chen et al., 2022). Like in most recent work on image captioning (Li et al., 2022; Wang et al., 2021, 2022), studies addressing multilingual setups have also focused on scaling the size of encoder-decoder models and the amount of training data, resorting to machine translated versions of multimodal data to accommodate multiple languages (Thapliyal et al., 2022). Differently from training a large-scale encoder-decoder model, we follow a few-shot setting with an *image-blind* approach based on prompting.
Few-Shot and Zero-Shot Approaches: Performing few-shot learning by prompting a language model with examples and demonstrations of a task
(Brown et al., 2020; Radford et al., 2019; Schick and Schütze, 2020) is an efficient and effective alternative to update model parameters. Similarly to other NLP tasks, recent work in the vision-andlanguage domain has used prompt-based learning by building on top of pre-trained language and vision models, although usually also involving extra multimodal training (Tsimpoukelli et al., 2021; Alayrac et al., 2022; Jin et al., 2021). In our work, we follow a similar few-shot prompting approach to the recent Socratic Models (Zeng et al., 2022) that do not involve any multimodal training, as described next. In image captioning, there have also been zero-shot methods that similarly to our approach do not involve any training, by relying on prompts or adaptations over the decoding algorithms, such as ZeroCap (Tewel et al., 2021) and ConZic (Zeng et al., 2023). However, these models work for English and not for the multilingual captioning setting.
Socratic Models: Zeng et al. (2022) proposed the Socratic Models (SMs) framework, where different multimodal pre-trained models communicate via zero-shot or few-shot prompting. For the task of image captioning, SMs generate captions by prompting a language model (i.e., GPT-3 (Brown et al., 2020)) with information about the input image obtained with another pre-trained model (i.e.,
CLIP (Radford et al., 2021)). The visual information is in this way represented into a languagebased prompt, containing the number of people presented in the image, the places, objects, and what is the type of image. We explore a similar approach in the multilingual setting by reusing multilingual models, and through a retrieval-based prompt.
Retrieval-augmentation: The knowledge from language models can be adapted and expanded by combing non-parametric knowledge from datastores (i.e., external memories) (Khandelwal et al.,
2019; Lewis et al., 2020; Izacard et al., 2022; Ram et al., 2023). The success of conditioning generation with retrieved information, in several different NLP tasks, has inspired some recent studies in image captioning (Ramos et al., 2023a; Fei, 2021; Sarto et al., 2022; Ramos et al., 2023b). The study that is most closely related to our captioning model is SmallCap (Ramos et al., 2023b), an encoder-decoder model that is prompted with retrieved captions as well. However, in image captioning, retrieval-augmentation has mostly being explored with supervised learning and not fewshot learning. Moreover, retrieval-augmentation remains unexplored in the multilingual scenario.
## 3 Model
Language Model Prompt-based Captioning
(LMCAP) is a few-shot multilingual captioning model augmented with retrieval. It involves prompting a Language Model (LM) with captions retrieved from a datastore by a Vision-andLanguage Model (VLM). Captions are generated in an *image-blind* manner, without actually processing the visual contents of the input image, instead using a prompt containing the retrieved captions. The method works as follows: first, given an input image, the VLM is used to find relevant captions in the datastore. Second, the retrieved captions are converted to a language prompt, which is encoded by the multilingual LM to generate captions in a desired language, conditioning the generation on the prompt. Finally, the set of generated captions can be scored by the VLM against the input image, to select the best caption. The main aspects of our approach are shown in Figure 1 and fully detailed next.
Image-Text Retrieval: The input image and a datastore of captions are encoded by a multilingual CLIP (Carlsson et al., 2022), i.e. a VLM that can be used to calculate image-text similarity. In this way, given the encoded data, M-CLIP is used to retrieve the K most similar captions from the datastore. The datastore contains captions associated to diverse images, which can be in English or another language. The retrieved captions will serve to guide a language model as an example of what the predicted caption should resemble, through the use of a prompt and as described next.
Retrieval-augmented Prompting: The retrieved captions, which represent the visual information about the image, are formatted into a prompt for the language model. The prompt starts with fixed N-shot examples and ends with the retrieval information about the input image, to guide the language model. Each shot is a demonstration of how to generate a caption in a desired language for an image, given a set of retrieved captions. After these Nexamples, the prompt terminates with the retrieved information about the actual input image. An example of the format of the prompt can be seen in Figure 1 and in more detail in Appendix D. We note that the retrieved captions, either from the fixed N-shot examples or those corresponding to the input image, can be presented in any language or in multiple languages.
Prompting Multilingual Text Generation: The aforementioned prompt is used as input for an XGLM (Lin et al., 2021) pre-trained multilingual autoregressive LM, to generate captions in a given language. XGLM is applied in a few-shot setting, which means that LMCAP does not require any training (i.e., the captions are generated by providing the prompt at inference time to XGLM).
Captions are generated in the desired language by including an example in the N demonstrations in the prompt, as shown in Figure 1.
Multilingual Reranking: After the LM generates a set of captions, the multilingual VLM performs a final image–text similarity step to find the caption that best describes the input image. This is based on the same M-CLIP model used for the initial image–text retrieval.
## 4 Evaluation
In this section, we describe the evaluation of LMCAP. We describe the experimental setup and results, and we also present ablation studies and further discussions about our approach.
## 4.1 Experimental Setup
Model: LMCAP uses two pre-trained multilingual models, namely the autoregressive XGLM language model facebook/xglm-2.9B,
and the multilingual M-CLIP vison-and-language model xlm-roberta-large-ViT-H-14, respectively available on HuggingFace (Wolf et al., 2020)
and OpenCLIP1. Our approach does not require any training, generating captions at inference time using a single NVIDIA V100S 32GB GPU.
To generate a caption in a desired language, XGLM is prompted with retrieved captions extracted by the M-CLIP model. For caption retrieval, the input image and a set of captions from a datastore are both encoded by M-CLIP to perform direct image-text search. The datastore contains English captions from the COCO training set and is indexed offline with the nearest-neighbour search library named FAISS (Johnson et al., 2017), using the index IndexFlatIP that does not involve training.
A set of K=4 retrieved captions are used in the prompt for the input image, along with a fixed set of N=3-shot examples, as described in Appendix D. Conditioned on the prompt, XGLM generates captions using beam-search decoding with a beam of 3. A set of c=3 candidate captions are re-ranked using M-CLIP, to select the final generated caption in the desired language. The code for LMCAP is made freely available2.
Datasets: We mainly evaluate our approach on XM3600, i.e. a multilingual image captioning dataset (Thapliyal et al., 2022) featuring geographically-diverse images, collected from Open Images with basis on the regions of 36 languages. For each language, 100 images were selected and annotated with human generated cap-
![3_image_0.png](3_image_0.png)
tions, resulting in a total of 3600 images and 261375 captions across the 36 languages. XM3600 does not contain training or validation splits.
For validation and hyperparameter tuning, we relied on the COCO (Chen et al., 2015) validation split (COCO-DEV) from the standard Karpathy splits (Karpathy and Fei-Fei, 2015). For "reference captions", we machine translate the English captions into Spanish, Hindi, and Chinese, using the M2M-100 model (Fan et al., 2021),
similarly in spirit to Thapliyal et al. (2022) who used the Google Translate API3. We make this development set available to the community at https://github.com/RitaRamo/lmcap. As previously mentioned, we also use the captions from the COCO training set to build the datastore. The datastore simply contains the original English captions from COCO without incurring in an expensive and noisy machine translation process, unlike in the study from Thapliyal et al. (2022).
Model Assessment and Comparison: We compare LMCAP with the four multilingual models proposed by Thapliyal et al. (2022). These models combine different mT5 (Xue et al., 2020) and ViT (Zhai et al., 2022) versions and are trained in a fully-supervised fashion on COCO-35L and CC3M-35L, i.e., Google's machine translation API
versions of the original COCO and CC3M datasets
(Chen et al., 2015; Sharma et al., 2018). Specifically, BB+CC combines mT5-base and ViT-B/16 pretrained on CC3M-35L and finetuned on COCO35L; BB is trained on COCO-35L; Bg switches to the ViT-g/14 model; and Lg uses mT5-large and and ViT-g/14, also trained with COCO-35L. For reference, Thapliyal et al. (2022) spent 5000 TPU
hours to train their models, while our method can be used out-of-the-box for inference, i.e., 45 minutes for the X3600 benchmark per language.
Following Thapliyal et al. (2022), results are reported with the CIDEr (Vedantam et al., 2015) metric for English, Spanish, Hindi, and Chinese, with other languages covered in Section 4.4. CIDEr is a standard captioning metric that computes how well the generated caption matches the consensus of the reference captions, based on Term Frequency–Inverse Document Frequency (TF-IDF). In Appendix A, we included more generation metrics for holistic evaluation. To compute the metrics, we used the COCO evaluation package 4, and the SacreBLEU tokenization (Post, 2018).
## 4.2 Results
XM3600: Following Thapliyal et al. (2022), we report results on XM3600 for English, Spanish, Hindi, and Chinese, in Table 1. We can see that LMCAP outperforms all supervised approaches on Chinese, and achieves competitive performance on the other languages, despite being *image-blind* and not being trained on any image captioning data. For English, Spanish, and Hindi, we note that LMCAP
is only outperformed by the large-scale supervised variant BB+CC, pre-trained on CCM3 and finetuned on COCO, jointly on English and the other 35 languages for the two datasets, i.e., with 123M
captions. For the other variants that are only trained on COCO-35L, our model has a substantially larger performance on the CIDER metric across all four languages. We also show that our model can further benefit from increasing the datastore (LMCAP+),
as described in more detail over Section 4.3.
COCO: For completeness, we also report results on the machine translated COCO-DEV set in Table 2. In the top half of the table we show the performance of the 4 SOTA models on COCO-DEV
via Google's machine translation API. Since this dataset was not provided by the authors, we perform as well automatic machine-translation but using the M2M-100 model (Fan et al., 2021), which gives an approximation for model comparison on COCO. As expected, LMCAP is outperformed on COCO since all the 4 variants were trained on it across 36 languages, with a large number of trainable parameters. Our model still reaches impressive performance, considering it was not trained on 4Available at https://github.com/tylin/
coco-caption
| Model | en | es | hi | zh |
|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|------|------|------|
| Multilingual Captioning Supervised Learning BB+CC 0.584 0.425 0.197 0.202 BB 0.297 0.194 0.098 0.087 Bg 0.337 0.232 0.112 0.110 Lg 0.343 0.220 0.111 0.099 Few-shot Learning LMCAP 0.452 0.329 0.132 0.221 LMCAP+ 0.526 0.326 0.078 0.251 | | | | |
COCO for any of those languages, neither was it trained on any multimodal data. This is especially the case for English, where our model reaches a similar CIDEr score, although it only reaches about half the performance for the other languages. In Appendix B, we also compare LMCAP with promptbased captioning methods that were specially designed for English.
| Model | |θ| | en | es | hi | zh |
|-----------------|-------|-------|-------|-------|-------|
| COCO-DEV-GOOGLE | | | | | |
| BB+CC | 766 | 0.980 | 0.962 | 0.759 | 0.748 |
| BB | 1230 | 0.856 | 0.844 | 0.671 | 0.659 |
| Bg | 1691 | 0.851 | 0.835 | 0.718 | 0.695 |
| Lg | 2241 | 0.875 | 0.859 | 0.624 | 0.656 |
| COCO-DEV-M2M100 | | | | | |
| LMCAP | N/A | 0.767 | 0.453 | 0.334 | 0.584 |
## 4.3 Ablation Studies
To better understand the design choices of LMCAP, we report a series of ablation tests on COCO-DEV,
to avoid direct tuning on the XM3600 benchmark.
Prompt: Given that LMCAP works by prompting a language model with K retrieved captions and N-shot examples, we study the effect of our prompt when varying K and N. Table 3 shows the importance of not depending on a single retrieved caption across the 4 languages. This is similar to previous findings in retrieval-augmentated captioning studies focusing on English (Sarto et al., 2022; Ramos et al., 2023b), which showed that a large K makes the model more robust to mismatched captions. We further see that English and Spanish benefit from encoding a larger set of retrieved captions, while Hindi and Chinese work better with a smaller K. We select K = 4 since it has close-tooptimal performance for each of the languages. We then explore varying the number of N-shot examples, and found N = 3 to be the optimal value on all the four the languages. We thus use K = 4 and N = 3 in the prompt of LMCAP.
Setup en es hi zh K=1, N=1 0.622 0.380 0.240 0.522
K=2, N=1 0.654 0.400 **0.269** 0.562 K=3, N=1 0.695 0.414 0.211 **0.565**
K=4, N=1 0.711 0.415 0.229 0.554
K=5, N=1 **0.734 0.424** 0.205 0.529
K=4, N=1 0.711 0.415 0.229 0.554 K=4, N=2 0.735 0.440 0.247 0.583
K=4, N=3 **0.767 0.454 0.334 0.584**
K=4, N=4 0.757 0.424 0.318 0.580
Varying K-Captions Varying N-Shot Datastore: We also studied different contents for the datastore beyond the English captions from the COCO training set, shown in Table 4. Given that our model reaches much better performance on English, we hypothesise that our model can better generate captions in a desired language when having the retrieved captions in that same language. This could be validated using translations from COCO
in the other languages, but since those are not available, we instead used a machine translated version of the Conceptual Captions dataset (CCM3) from Qiu et al. (2022). We used the English, Spanish, and Chinese versions of the CCM3 training set, respectively for each of the corresponding languages
(CCM3-L). We found that performance deteriorates on the COCO-DEV dataset, which might be explained by the difference between the COCO
and CCM3-L datasets. Even combining the two datasets (COCO + CCM3-L) is worse than using only the COCO dataset.
In an attempt to cover more diverse concepts, we augmented COCO with three large web datasets
(Conceptual Captions (Sharma et al., 2018), Conceptual 12M (Changpinyo et al., 2021), and SBU
captions (Ordonez et al., 2011)), using their noisefree versions (Li et al., 2022). We refer to this dataset as CCS, and it contains synthetic modelgenerated texts for the web images. Using CCS
leads to an improvement compared to just using COCO, except for Hindi. In Table 1, we also report results on XM3600 with this best datastore configuration, for which the performance again decreases for Hindi, but has a substantial improvement on English and Chinese. The benefits of including a more diverse collection of captions are further shown in Apprendix E with some qualitative examples (e.g., LMCAP was now able to generate the french concept *macarons* in English). Notice that the retrieved captions from CCS are still on English. Thus, although there is lack of multilingual image-caption pairs with clean labelled data, it would be interesting to pursue further work on incorporating retrieved information from other languages, in order to improve performance to levels similar to those for English.
Model Size: In Table 5, we show the importance of using a language model that has a sufficiently large number of parameters. Both XGLM-562M and XGLM-1.7B are unable to generate captions beyond English. On the other hand, the 7.5B variant can lead to a stronger performance, but large-
| Datastores | en | es | hi | zh |
|---------------|-------|-------|-------|-------|
| COCO | 0.711 | 0.415 | 0.229 | 0.554 |
| CC3M-L | 0.387 | 0.309 | - | 0.337 |
| COCO + CC3M-L | 0.601 | 0.359 | - | 0.481 |
| COCO + CCS | 0.713 | 0.431 | 0.212 | 0.563 |
scale LMs require more GPU memory, which limits the size of the prompt that can be encoded with modest hardware5. LMCAP uses the more efficient XGLM-2.9B version. These results are in line with previous findings, which suggest that stronger fewshot performance is achieved when the prompt is encoded by large LMs (Brown et al., 2020).
Table 5: CIDEr performance on COCO-DEV, across the different variants of XGLM, to show the scaling behaviour of the LM used in LMCAP. RAM corresponds to the GPU memory consumption.
## 4.4 Additional Discussion
We now discuss the performance of LMCAP across the 36 languages, taking into consideration the data that was used for pre-training the LM. We also compare our approach with SMs and a simple baseline of retrieval plus translation. To support quantitative evaluation, we show some qualitative examples.
Multilingual Pre-training: In Table 6, we report the results of LMCAP on XM3600 for all the 36 languages considered in the dataset, ordered by the percentage of pre-training data used in XGLM for each language. LMCAP shows strong few shot performance on the diverse set of languages in which XGLM was pre-trained on. Similarly to BB+CC
and Lg models, which are limited to the 36 languages they were trained on, our model is also dependent on the LM pre-training data, although there is potential to replace XGLM by another large LM, in order to generalize to other languages.
Comparision with Socratic Models: Since LMCAP is inspired in Socratic Models (SMs), we compare them against our approach. For this, XGLM
receives the Socratic prompt that includes the image type, the number of people, places and object categories6, instead of our retrieved captions. Results are reported in Table 7. Compared to either zero-shot or few-shot SMs, we can see that our model largely outperforms SMs, with a noteworthy 5We had to run the largest model in half precision (float16).
6Using the original code at https://colab.
research.google.com/drive/1KOlc9nN0NJ5GAif_ dmuOqsRqlZycoIrc?usp=sharing
| Params | Config. | RAM | en | es | hi | zh |
|----------|-----------|-------|-------|-------|-------|-------|
| 564M | K=4, N=3 | 6G | 0.411 | 0.094 | 0.030 | 0.146 |
| 1.7B | K=4, N=3 | 12G | 0.637 | 0.143 | 0.066 | 0.272 |
| 2.9B | K=4, N=3 | 16G | 0.767 | 0.454 | 0.334 | 0.584 |
| 7.5B | K=4, N=3 | 22G | 0.787 | 0.489 | 0.365 | 0.644 |
CIDER improvement of more than 39.1% on English, 20.0% on Spanish, 11.5% on Hindi, and of 21.4% Chinese. This confirms the effectiveness of our retrieval-augmented LM prompting approach.
| BB+CC | LG | LMCAP | |
|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------|---------|-------|
| en | 0.584 | 0.343 | 0.452 |
| ru | 0.194 | 0.089 | 0.134 |
| zh | 0.202 | 0.099 | 0.221 |
| de | 0.224 | 0.130 | 0.153 |
| es | 0.425 | 0.220 | 0.329 |
| fr | 0.410 | 0.217 | 0.260 |
| ja | 0.254 | 0.141 | 0.161 |
| it | 0.321 | 0.168 | 0.226 |
| pt | 0.380 | 0.202 | 0.283 |
| el | 0.199 | 0.101 | 0.136 |
| ko | 0.288 | 0.152 | 0.157 |
| fi | 0.177 | 0.089 | 0.112 |
| id | 0.307 | 0.167 | 0.151 |
| tr | 0.232 | 0.122 | 0.103 |
| ar | 0.227 | 0.106 | 0.107 |
| vi | 0.336 | 0.182 | 0.265 |
| th | 0.418 | 0.226 | 0.166 |
| hi | 0.197 | 0.111 | 0.132 |
| bn | 0.200 | 0.133 | 0.022 |
| sw | 0.319 | 0.151 | 0.085 |
| te | 0.196 | 0.099 | 0.042 |
| Languages not in XGLM pretraining data cs 0.313 0.139 0.005 da 0.329 0.192 0.020 fa 0.311 0.155 0.002 he 0.230 0.098 0.001 hr 0.224 0.085 0.001 hu 0.175 0.096 0.006 mi 0.405 0.243 0.015 nl 0.441 0.232 0.082 no 0.385 0.230 0.025 pl 0.236 0.108 0.003 ro 0.188 0.100 0.007 sv 0.370 0.225 0.077 uk 0.189 0.081 0.006 AVG∗ 0.290 0.154 0.176 | | | |
![7_image_0.png](7_image_0.png)
Model en es hi zh
Socratic 0.067 0.045 0.001 0.031
Socratic N=1 0.454 0.280 0.176 0.340 Socratic N=2 0.344 0.215 0.141 0.268 Socratic N=3 0.376 0.254 0.219 0.370
LMCAP **0.767 0.454 0.334 0.584**
## Baseline Of Retrieval With Translation: We Also
compared our approach against a baseline that retrieves the nearest caption on English and translates it into other languages in Table 8, using the M2M100 model. This is to quantify the impact of prompting the language model compared to performing direct translation on retrieved captions. On COCODEV, we see that LMCAP only outperforms these results on English. Notice, however, that the references on COCO-DEV for the other languages rely on the M2M-100 distributions, as the baseline, promoting to an inequitable CIDEr. When evaluating on human-labeled data, as is the case with the XM3600 dataset, we see the benefits of prompting with retrieval information.
Notice also both LMCAP and the retrieval baseline outperform the BB model (the later also competitive to the other 3 SOTA variants), despite training with large-scale multimodal machine translated data for hours. This shows the clear benefits of using retrieval-augmentation in multilingual image captioning, not just for result quality but to avoid high computation costs as well.
Qualitative Results: Figure 2 shows examples of captions generated in different languages by LMCAP, together with the retrieved captions that are provided in the prompt regarding each *blind-input* image. Qualitative examples tend to show diversity in the generation across the languages, with the retrieved information being itself diverse. For instance, in the first example, for English and Spanish, LMCAP focuses on describing that a man is
| Model | en | es | hi | zh |
|------------------|-------|-------|-------|-------|
| COCO-DEV | | | | |
| LMCAP | 0.767 | 0.454 | 0.334 | 0.584 |
| Baseline M2M-100 | 0.590 | 0.563 | 0.548 | 0.714 |
| XM3600 | | | | |
| LMCAP | 0.452 | 0.329 | 0.132 | 0.221 |
| Baseline M2M-100 | 0.333 | 0.205 | 0.120 | 0.170 |
| BB: COCO-35L | 0.297 | 0.194 | 0.098 | 0.087 |
in front of microphones (i.e., based on the first two retrieved captions). In turn, for Hindi and Chinese, the man is in front of a laptop (i.e., from the first example), and the captions can also mention that he is ready to give a speech in Chinese (i.e., given the last two retrieved captions). In the second image, we can see that LMCAP can simply copy a retrieved caption to generate in English, while for the other languages the model may come up with terms not directly present in the retrieved captions
(e.g., "snow slope" in Spanish). The last image is a negative example, where incorrect retrieved captions led the model into errors in English and Chinese, showing that there are also limitations in our *image-blind* approach. For more examples, see Appendix C.
## 5 Conclusions
This paper proposes LMCAP, an *image-blind* fewshot multilingual image captioning model. LMCAP is based on prompting a language model with N-shot examples and retrieved captions extracted by a vision-and-language model, to condition caption generation in a desired language with a multilingual language model. On XM3600, i.e. a humanlabelled massively multilingual multimodal benchmark, LMCAP performs competitively against the state-of-the-art without involving expensive training with large-scale translated multimodal data, or with any captioning data. Experimental results further demonstrate that LMCAP largely outperforms Socratic Models (Zeng et al., 2022), showing that retrieval augmentation plays a crucial role in our prompting approach. As future work, we plan to further assess the use of multilingual data in the datastore, as well as the impact of directly promoting diversity (Ye et al., 2022; Levy et al., 2022) in the captions used in the prompt.
## Acknowledgements
This research was supported by the Portuguese Recovery and Resilience Plan through project C645008882-00000055, through Fundação para a Ciência e Tecnologia (FCT) with the Ph.D. scholarship 2020.06106.BD, and through the INESCID multi-annual funding from the PIDDAC programme (UIDB/50021/2020).
## Limitations
Image captioning and multilingual image captioning studies tend to focus on the COCO dataset, which was shown to contain gender imbalance. Previous research has also showed that models trained on COCO tend to amplify this bias (Hendricks et al., 2018; Zhao et al., 2017). While our model is not trained on COCO or in any captioning data, it relies on a pre-trained language model, which is known to suffer from different sources of bias and fairness issues (Bommasani et al., 2021; Sheng et al., 2021; Schramowski et al., 2022).
Our model also involves retrieval-augmentation with captions extracted by a vision-and-language model, also pre-trained in an unsupervised manner. Like in the case of other retrieval-augmented generative models (Lewis et al., 2020), LMCAP
has inherently a bias towards the retrieved information. Notwithstanding, by conditioning on information from a datastore with clean and curated text, LMCAP has potential to ameliorate some of the generation issues of the language model (e.g.,
elude hateful or violent language). To have insights on the biases presented in LMCAP, we recommend analysing the retrieved captions used by the model, since they provided cues to the predictions, as shown in Figure 2. We argue that it can be much harder to have a direct interpretation for captioning models that are not retrieval-augmented.
Another limitation of our model relates to it following a full *image-blind* approach, which heavily depends on information from similar captions instead of the visual content from the actual input image. To address this limitation, future work could additionally include concepts extracted from the image in the prompt, as proposed in Socratic Models, combined with the retrieved information.
## Ethics Statement
The datasets supporting the evaluation of LMCAP
are publicly available for academic purposes. We also plan to release our code, and the additional resources that were built to support the experiments.
We emphasise that LMCAP challenges the efficiency of most current captioning approaches, in terms of resource usage and development/deployment effort, while at the same time promoting more equitability and inclusion, exemplified here by attempting to balance language representation at low computational costs.
We further note that while our model attempts to advance research beyond English-centric captioning, by considering captioning for a wide variety of languages, it is important to address and pay more attention to low-resource languages as well
(i.e., languages beyond those covered in our tests).
Evaluating LMCAP with additional datasets, covering an even larger set of languages and concepts, would be desirable.
## References
Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, et al. 2022. Flamingo: a visual language model for few-shot learning. *arXiv preprint arXiv:2204.14198*.
Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. 2021. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. volume 33, pages 1877–1901.
Fredrik Carlsson, Philipp Eisen, Faton Rekathati, and Magnus Sahlgren. 2022. Cross-lingual and multilingual CLIP. In *Proceedings of the Language* Resources and Evaluation Conference, pages 6848–
6854.
Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. 2021. Conceptual 12m: Pushing webscale image-text pre-training to recognize long-tail visual concepts. In Proceedings of the IEEE/CVF
Conference on Computer Vision and Pattern Recognition, pages 3558–3568.
Xi Chen, Xiao Wang, Soravit Changpinyo, AJ Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, et al. 2022. PaLI: A jointly-scaled multilingual language-image model. *arXiv preprint* arXiv:2209.06794.
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. 2015. Microsoft COCO captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.
Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the ninth
workshop on statistical machine translation, pages 376–380.
Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, et al. 2021. Beyond english-centric multilingual machine translation. *J. Mach. Learn. Res.*,
22(107):1–48.
Zhengcong Fei. 2021. Memory-augmented image captioning. In *Proceedings of the AAAI Conference on* Artificial Intelligence, volume 35, pages 1317–1324.
Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, and Alexis Conneau. 2021. Larger-scale transformers for multilingual masked language modeling.
arXiv preprint arXiv:2105.00572.
Jiuxiang Gu, Shafiq Joty, Jianfei Cai, and Gang Wang.
2018. Unpaired image captioning by language pivoting. In *Proceedings of the European Conference on* Computer Vision, pages 503–519.
Lisa Anne Hendricks, Kaylee Burns, Kate Saenko, Trevor Darrell, and Anna Rohrbach. 2018. Women also snowboard: Overcoming bias in captioning models. In *Proceedings of the European Conference on* Computer Vision, pages 771–787.
Xiaowei Hu, Zhe Gan, Jianfeng Wang, Zhengyuan Yang, Zicheng Liu, Yumao Lu, and Lijuan Wang. 2022.
Scaling up vision-language pre-training for image captioning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*,
pages 17980–17989.
Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022. Few-shot learning with retrieval augmented language models. *arXiv preprint* arXiv:2208.03299.
Woojeong Jin, Yu Cheng, Yelong Shen, Weizhu Chen, and Xiang Ren. 2021. A good prompt is worth millions of parameters? low-resource prompt-based learning for vision-language models. *arXiv preprint* arXiv:2110.08484.
Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2017.
Billion-scale similarity search with GPUs. arXiv preprint arXiv:1702.08734.
Andrej Karpathy and Li Fei-Fei. 2015. Deep visualsemantic alignments for generating image descriptions. In *Proceedings of the IEEE/CVF Conference* on Computer Vision and Pattern Recognition, pages 3128–3137.
Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2019. Generalization through memorization: Nearest neighbor language models. *arXiv preprint arXiv:1911.00172*.
Itay Levy, Ben Bogin, and Jonathan Berant. 2022.
Diverse demonstrations improve in-context compositional generalization. arXiv preprint arXiv:2212.06800.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020.
Retrieval-augmented generation for knowledgeintensive NLP tasks. In *Advances in Neural Information Processing Systems*, volume 33, pages 9459–
9474.
Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. 2022. BLIP: Bootstrapping language-image pretraining for unified vision-language understanding and generation. *arXiv preprint arXiv:2201.12086*.
Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81.
Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, et al. 2021.
Few-shot learning with multilingual language models. arXiv preprint arXiv:2112.10668.
Fangyu Liu, Emanuele Bugliarello, Edoardo Maria Ponti, Siva Reddy, Nigel Collier, and Desmond Elliott. 2021. Visually grounded reasoning across languages and cultures. *arXiv preprint* arXiv:2109.13238.
Vicente Ordonez, Girish Kulkarni, and Tamara Berg.
2011. Im2text: Describing images using 1 million captioned photographs. volume 24.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In *Proceedings of the* annual meeting of the Association for Computational Linguistics, pages 311–318.
Matt Post. 2018. A call for clarity in reporting BLEU
scores. In *Proceedings of the Third Conference on* Machine Translation: Research Papers, pages 186–
191, Brussels, Belgium. Association for Computational Linguistics.
Chen Qiu, Dan Oneata, Emanuele Bugliarello, Stella Frank, and Desmond Elliott. 2022. Multilingual multimodal learning with machine translated text. *arXiv* preprint arXiv:2210.13134.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763.
PMLR.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, and Yoav Shoham. 2023. In-context retrieval-augmented language models. *arXiv preprint arXiv:2302.00083*.
Rita Ramos, Desmond Elliott, and Bruno Martins.
2023a. Retrieval-augmented image captioning. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 3666–3681.
Rita Ramos, Bruno Martins, Desmond Elliott, and Yova Kementchedjhieva. 2023b. Smallcap: Lightweight image captioning prompted with retrieval augmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2840–2849.
Sebastian Ruder. 2020. Why You Should Do NLP Beyond English. http://ruder.io/
nlp-beyond-english.
Sara Sarto, Marcella Cornia, Lorenzo Baraldi, and Rita Cucchiara. 2022. Retrieval-augmented transformer for image captioning. arXiv preprint arXiv:2207.13162.
Timo Schick and Hinrich Schütze. 2020. Exploiting cloze questions for few shot text classification and natural language inference. arXiv preprint arXiv:2001.07676.
Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin A Rothkopf, and Kristian Kersting. 2022.
Large pre-trained language models contain humanlike biases of what is right and wrong to do. Nature Machine Intelligence, 4(3):258–268.
Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In *Proceedings of the Annual Meeting of the Association for Computational Linguistics*,
pages 2556–2565.
Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2021. Societal biases in language generation: Progress and challenges. *arXiv preprint* arXiv:2105.04054.
Yoad Tewel, Yoav Shalev, Idan Schwartz, and Lior Wolf. 2021. Zero-shot image-to-text generation for visual-semantic arithmetic. *arXiv preprint* arXiv:2111.14447.
Ashish V Thapliyal, Jordi Pont-Tuset, Xi Chen, and Radu Soricut. 2022. Crossmodal-3600: A massively multilingual multimodal evaluation dataset. *arXiv* preprint arXiv:2205.12522.
Maria Tsimpoukelli, Jacob L Menick, Serkan Cabi, S. M. Ali Eslami, Oriol Vinyals, and Felix Hill. 2021.
Multimodal few-shot learning with frozen language models. In *Advances in Neural Information Processing Systems*, volume 34, pages 200–212.
Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In *Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition*,
pages 4566–4575.
Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, and Lijuan Wang. 2022. Git: A generative image-to-text transformer for vision and language. arXiv preprint arXiv:2205.14100.
Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, and Yuan Cao. 2021. SimVLM: Simple visual language model pretraining with weak supervision. *arXiv preprint arXiv:2108.10904*.
Thomas Wolf, Julien Chaumond, Lysandre Debut, Victor Sanh, Clement Delangue, Anthony Moi, Pierric Cistac, Morgan Funtowicz, Joe Davison, Sam Shleifer, et al. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45.
Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2020. mT5: A massively multilingual pre-trained text-to-text transformer. *arXiv preprint* arXiv:2010.11934.
Xi Ye, Srinivasan Iyer, Asli Celikyilmaz, Ves Stoyanov, Greg Durrett, and Ramakanth Pasunuru. 2022.
Complementary explanations for effective in-context learning. *arXiv preprint arXiv:2211.13892*.
Andy Zeng, Adrian Wong, Stefan Welker, Krzysztof Choromanski, Federico Tombari, Aveek Purohit, Michael Ryoo, Vikas Sindhwani, Johnny Lee, Vincent Vanhoucke, et al. 2022. Socratic models: Composing zero-shot multimodal reasoning with language. *arXiv preprint arXiv:2204.00598*.
Zequn Zeng, Hao Zhang, Ruiying Lu, Dongsheng Wang, Bo Chen, and Zhengjue Wang. 2023. Conzic: Controllable zero-shot image captioning by samplingbased polishing. In *Proceedings of the IEEE/CVF*
Conference on Computer Vision and Pattern Recognition, pages 23465–23476.
Xiaohua Zhai, Alexander Kolesnikov, Neil Houlsby, and Lucas Beyer. 2022. Scaling vision transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12104–12113.
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. arXiv preprint arXiv:1707.09457.
## A Standard Evaluation Metrics B Additional Results On Coco C More Qualitative Examples
| B-1 | B-4 | ROGUE-L | METEOR | |
|-------|-------|-----------|----------|-------|
| en | 0.387 | 0.067 | 0.299 | 0.129 |
| es | 0.364 | 0.052 | 0.256 | 0.126 |
| hi | 0.258 | 0.015 | 0.182 | 0.220 |
| zh | 0.318 | 0.053 | 0.231 | 0.105 |
Model B-4 METEOR CIDEr
ZeroCap 0.026 0.115 0.146
ConZIC 0.013 0.115 0.128
LMCAP 0.199 0.220 0.759
In the paper, comparison between models is performed using the CIDEr metric by following Thapliyal et al. (2022). For holistic captioning evaluation, we provide here the performance of LMCAP
on XM3600 across additional standard automatic metrics. Specifically, Table 9 reports performance with BLEU-1 (B-1) and BLEU-4 (B-4) (Papineni et al., 2002), ROGUE-L (Lin, 2004) and METEOR
(Denkowski and Lavie, 2014).
Table 9: LMCAP performance on the XM3600 benchmark across different evaluation metrics.
Table 10 provides additional results on COCO,
comparing LMCAP against against other promptbased captioning models that do not involve training, including two previously proposed zero-shot captioning methods that are English-specific (i.e.,
ZeroCap (Tewel et al., 2021) and ConZic (Zeng et al., 2023)). Results show that LMCAP outperforms both. We also notice that unlike these models, LMCAP works for the multilingual setting, advancing research beyond English-centric captioning.
Table 10: Results on the COCO test set. We compare our few-shot LMCAP model against English-specific captioning models that do not involve supervising training as well.
We provide several additional examples of captions generated from XM3600 images in Figure 3.
![12_image_0.png](12_image_0.png)
## D Prompt-Template
We follow the Socratic template, where instead of including different categories (objects, places, number of people, etc), we replace them by the retrieved captions. By following the same template, in place of a completely different one, we can assess the impact of including retrieval compared to the original Socratic framework. Our template is: I am an intelligent image captioning bot. Similar images have the following captions: <caption 1> <caption 2>
<caption 3> <caption 4>. A creative short caption I can generate to describe this image in <language> is:
Between the retrieved captions we use the special end of sentence token (i.e., </s>) of XGLM.
Notice also that our prompt starts with 3 fixed shot examples from images in the training dataset
(i.e., the same prompt is repeated multiple times to encode the n-shot examples). We share the N-shot examples and the set of K retrieved captions used in our prompt, together with the code at https://github.com/RitaRamo/lmcap. The following text is a concrete example of the prompt provided for the first image of XM3600.
I am an intelligent image captioning bot. Similar images have the following captions: a horse grazing in a grassy field next to a barn</s> a brown horse grazing in its pen and a red barn and water</s> a pretty brown horse eating some grass in a bare field</s> a horse is eating grass next to a barn in the middle of a pasture</s> A creative short caption I can generate to describe this image in spanish is: Un caballo marrón es grasa cerca de una casa roja</s>
I am an intelligent image captioning bot. Similar images have the following captions: a teal toilet is the center of this bathroom photo</s> a small bathroom with brightly painted blue walls</s> the bathroom has a splash of color with the blue tiles</s> the sink is above a turquoise tile sink</s> A creative short caption I can generate to describe this image in spanish is: Un baño muy limpio y bien decorado</s> I am an intelligent image captioning bot. Similar images have the following captions: a woman and child focus on a pink device in public</s> a woman holding a small child while standing near a crowd</s> a very cute lady posing with a small kid</s> a young child with a cell phone and an adult</s> A creative
short caption I can generate to describe this image in spanish is: Una mujer se acercó a mirar en su teléfono mientras está listo para tomar una foto</s> I am an intelligent image captioning bot. Similar images have the following captions: a brown chicken is walking around outside with another hen</s> a couple of roosters standing in a field</s> a hen pecks the ground while another looks off in the distance</s> a couple of roosters are in a field</s> A creative short caption I can generate to describe this image in spanish is:.
## E Augmented Datastore Examples
In this appendix, we provide qualitative examples on XM3600 when the datastore is augmented with CCS, i.e., with large and diverse data. In Figure 4, we can see generation improving for English, where LMCAP correctly mentions the french concept of *macarons*, available in the retrieved captions. In line with the quantitative results provided in Section 4.3, we can also see a possible explanation for why generation degraded for Hindi, that has a lower pre-training language ratio than English: LMCAP seems to have copied the last 3-shot example provided in prompt, described above in Section D), maybe due to presence of more noise in the CCS data. Another example can be seen in Figure 5, where LMCAP is more specific in generating the flower type *orchid*.
![14_image_0.png](14_image_0.png) ![14_image_1.png](14_image_1.png)
| - a close-up of an exotic flower on a large stem | |
|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|
| - a view of a flower from very close, it appears to be fully bloomed | COCO |
| - a pink, white and red orchid in a vase in front of a window - a pink and white orchid is in a small black vase' en: "a close-up of an exotic flower on a large stem" es: "una flor exótica en un tallo grande" (an exotic flower in a big shell) hi: "एक फू ल का एक बड़ा Ǒहस्सा" (a large portion of a flower) zh: "一种美丽的花朵在巨大的花中" (a beautiful flower in a huge flower) - a large white flowered orchid with two pink tipped spots - an orchid with red and white markings - the pink and white orchid is on display at show - an orchid that won the individual's trophy | COCO |
| | + |
| CCS | |
| en: "a large white flowered orchid" es: "una flor de orchid" (a flower of orchid) hi: "फू लों कȧ एक बड़ी लता" (a large flower) zh: "一个白色的花朵,有两个红色的斑点" (a white flower with two red spots) | |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section Limitations
✓ A2. Did you discuss any potential risks of your work?
Section Limitations
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Left Blank.
✓ B1. Did you cite the creators of artifacts you used?
Left blank.
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
4 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C ✓ **Did You Run Computational Experiments?** Left Blank.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Not applicable. Left blank.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
yang-li-2023-boosting | Boosting Text Augmentation via Hybrid Instance Filtering Framework | https://aclanthology.org/2023.findings-acl.105 | Text augmentation is an effective technique for addressing the problem of insufficient data in natural language processing. However, existing text augmentation methods tend to focus on few-shot scenarios and usually perform poorly on large public datasets. Our research indicates that existing augmentation methods often generate instances with shifted feature spaces, which leads to a drop in performance on the augmented data (for example, EDA generally loses approximately 2{\%} in aspect-based sentiment classification). To address this problem, we propose a hybrid instance-filtering framework (BoostAug) based on pre-trained language models that can maintain a similar feature space with natural datasets. BoostAug is transferable to existing text augmentation methods (such as synonym substitution and back translation) and significantly improves the augmentation performance by 2-3{\%} in classification accuracy. Our experimental results on three classification tasks and nine public datasets show that BoostAug addresses the performance drop problem and outperforms state-of-the-art text augmentation methods. Additionally, we release the code to help improve existing augmentation methods on large datasets. | # Boosting Text Augmentation Via Hybrid Instance Filtering Framework
Heng Yang, Ke Li∗
Department of Computer Science, University of Exeter, EX4 4QF, Exeter, UK
{hy345, k.li}@exeter.ac.uk
## Abstract
Text augmentation is an effective technique for addressing the problem of insufficient data in natural language processing. However, existing text augmentation methods tend to focus on few-shot scenarios and usually perform poorly on large public datasets. Our research indicates that existing augmentation methods often generate instances with shifted feature spaces, which leads to a drop in performance on the augmented data (for example, EDA generally loses ≈ 2% in aspect-based sentiment classification). To address this problem, we propose a hybrid instance-filtering framework
(BO O S TAU G) based on pre-trained language models that can maintain a similar feature space with natural datasets. BO O S TAU G is transferable to existing text augmentation methods
(such as synonym substitution and back translation) and significantly improves the augmentation performance by ≈ 2 − 3% in classification accuracy. Our experimental results on three classification tasks and nine public datasets show that BO O S TAU G addresses the performance drop problem and outperforms state-ofthe-art text augmentation methods. Additionally, we release the code to help improve existing augmentation methods on large datasets.
## 1 Introduction
Recent pre-trained language models (PLMs) (Devlin et al., 2019; Brown et al., 2020; He et al., 2021; Yoo et al., 2021) have been able to learn from large amounts of text data. However, this also leads to a critical problem of data insufficiency in many lowresource fine-tuning scenarios (Chen et al., 2020; Zhou et al., 2022a; Miao et al., 2021; Kim et al.,
2022; Wang et al., 2022b; Yang et al., 2022). Despite this, existing augmentation studies still encounter failures on large public datasets. While some studies(Ng et al., 2020; Body et al., 2021; Chang et al., 2021; Luo et al., 2021) have attempted
∗Corresponding author
![0_image_0.png](0_image_0.png)
to leverage the language modeling capabilities of PLMs in text augmentation, these methods still suffer from performance drops on large datasets.
To explore the root cause of this failure mode, we conducted experiments to explain the difference between "good" and "bad" augmentation instances. Our study found that existing augmentation methods (Wei and Zou, 2019; Coulombe, 2018; Li et al., 2019; Kumar et al., 2019; Ng et al.,
2020) usually fail to maintain the feature space in augmentation instances, which leads to bad instances. This shift in feature space occurs in both edit-based and PLM-based augmentation methods.
For example, edit-based methods can introduce breaking changes that corrupt the meaning of the text, while PLM-based methods can introduce outof-vocabulary words. In particular, for the editbased methods, the shifted feature space mainly comes from breaking text transformations, such as changing important words (e.g., 'but ' ) in sentiment analysis. As for PLM-based methods, they usually introduce out-of-vocabulary words due to word substitution and insertion, which leads to an adverse meaning in sentiment analysis tasks.
To address the performance drop in existing augmentation methods caused by shifted feature space, we propose a hybrid instance-filtering framework
(BO O S TAU G) based on PLMs to guide augmentation instance generation. Unlike other existing methods (Kumar et al., 2020), we use PLMs as a powerful instance filter to maintain the feature space, rather than as an augmentor. This is based on our finding that PLMs fine-tuned on natural datasets are familiar with the identical feature space distribution. The proposed framework consists of four instance filtering strategies: perplexity filtering, confidence ranking, predicted label constraint, and a cross-boosting strategy. These strategies are discussed in more detail in section Section 2.3.
Compared to prominent studies, BO O S TAU G is a pure instance-filtering framework that can improve the performance of existing text augmentation methods by maintaining the feature space.
With the mitigation of feature space shift, BO O S TAU G can generate more valid augmentation instances and improve existing augmentation methods' performance, which more augmentation instances generally trigger performance sacrifice in other studies (Coulombe, 2018; Wei and Zou, 2019; Li et al., 2019; Kumar et al., 2020)). According to our experimental results on three finegrained and coarse-grained text classification tasks, BO O S TAU G1significantly alleviates feature space shifts for existing augmentation methods.
Our main contributions are:
- We propose the feature space shift to explain the performance drop in existing text augmentation methods, which is ubiquitous in full dataset augmentation scenarios.
- We propose a universal augmentation instance filter framework to mitigate feature space shift and significantly improve the performance on the ABSC and TC tasks.
- Our experiments show that the existing text augmentation methods can be easily improved by employing BO O S TAU G.
1We release the source code and experiment scripts of BO O S TAU G at: https://github.com/yangheng95/
BoostTextAugmentation.
Algorithm 1: The pseudo code of BO O S TAU G
1 Split D into k folds, D := {Fi}
k i=1; 2 Daug := ∅;
3 for i ← 1 to k do 4 D
i aug := ∅, D
i boost := F
i; 5 Randomly pick up k − 2 folds except F
ito constitute D
i train; 6 D
i valid := F \ (F
i SD
i train);
7 Use the DeBERTa on D
i train and D
i valid to build the surrogate language model; 8 **forall** dorg ∈ Diboost do
![1_image_0.png](1_image_0.png)
16 **return** Daug
![1_image_1.png](1_image_1.png)
## 2 Proposed Method
The workflow of BO O S TAU G is shown in Figure 2 and the pseudo code is given in Algorithm 1. Different from most existing studies, which focus on unsupervised instance generation, BO O S TAU G serves as an instance filter to improve existing augmentation methods. The framework consists of two main phases: 1) Phase \#1: the training of surrogate language models; 2) Phase \#2: surrogate language models guided augmentation instance filtering. The following paragraphs will provide a detailed explanation of each step of the implementation.
## 2.1 Surrogate Language Model Training
At the beginning of Phase \#1, the original training dataset is divided into k ≥ 3 folds where the k−2 ones are used for training (denoted as the training fold) while the other two are used for the validation and augmentation purposes, denoted as the validation and boosting fold, respectively2(lines 46). Note that the generated augmentation instances, which will be introduced in Section 2.2, can be 2We iteratively select the i-th fold, i ∈ 1, · · · , k, as the boosting fold (line 3 in Algorithm 1). The validation fold is used to select the best checkpoint of the surrogate language model to filter the augmented instances. This process is repeated k times to ensure that all the folds have been used for validation and boosting at least once, thus avoiding data overlapping between the training and validation folds.
![2_image_0.png](2_image_0.png)
identical to the instances in training folds the surrogate language model. This data overlapping problem will lead to a shifted feature space. We argue that the proposed k-fold augmentation approach, a.k.a. "cross-boosting", can alleviate the feature space shift of the augmentation instances, which will be validated and discussed in detail in Section 4.3. The main crux of Phase \#1 is to build a surrogate language model as a filter to guide the elimination of harmful and poor augmentation instances.
We construct a temporary classification model using the DeBERTa (He et al., 2021) architecture.
This model is then fine-tuned using the data in the k − 2 training folds and the validation fold to capture the semantic features present in the data
(line 7). It is important to note that we do not use the original training dataset for this fine-tuning process. Once the fine-tuning is complete, the language model constructed from the DeBERTa classification model is then utilized as the surrogate language model in the instance filtering step in Phase \#2 of BO O S TAU G.
This is different from existing works that use a pre-trained language model to directly generate augmentation instances. We clarify our motivation for this from the following two aspects. - In addition to modeling the semantic feature, the surrogate language model can provide more information that can be useful for the quality control of the augmentation instances, such as text per-
plexity, classification confidence, and predicted label.
- Compared to the instance generation, we argue that the instance filtering approach can be readily integrated with any existing text augmentation approach.
## 2.2 Augmentation Instance Generation
As a building block of Phase \#2, we apply some prevalent data augmentation approaches as the back end to generate the augmentation instances in BO O S TAU G (line 9). More specifically, let Dorg := {d iorg}
N
i=1 be the original training dataset.
d iorg := ⟨s i, ℓi⟩ is a data instance where s iindicates a sentence and ℓ iis the corresponding label, i ∈ 1, · · · , N. By applying the transformation function F(·, ·, ·) upon d iorg as follows, we expect to obtain a set of augmentation instances Diaug for d iorg:
$${\mathcal{D}}_{\mathrm{aug}}^{i}:=F(d_{\mathrm{org}}^{i},\tilde{N},\Theta),$$
$$(1)$$
org, N, ˜ Θ), (1)
where N˜ ≥ 1 is used to control the maximum number of generated augmentation instances. In the end, the final augmentation set is constituted as Daug := SN
i=1 Diaug (line 14). Note that depending on the specific augmentation back end, there can be more than one strategy to constitute the transformation function. For example, EDA (Wei and Zou, 2019) has four transformation strategies, including synonym replacement, random insertion, random swap, and random deletion. Θ consists of the parameters associated with the transformation strategies of the augmentation back end, e.g.,
the percentage of words to be modified and the mutation probability of a word.
## 2.3 Instance Filtering
Our preliminary experiments have shown that merely using data augmentation can be detrimental to the modeling performance, no matter how many augmentation instances are applied in the training process. In addition, our experiments in Section 4.3 have shown a surprising feature space shift between the original data and the augmented instances in the feature space. To mitigate this issue, BO O S TAU G proposes an instance filtering approach to control the quality of the augmentation instances. It consists of three filtering strategies, including perplexity filtering, confidence ranking, and predicted label constraint, which will be delineated in the following paragraphs, respectively.
Note that all these filtering strategies are built on the surrogate language model developed in Phase
\#1 of BO O S TAU G (lines 12 and 13).
## 2.3.1 Perplexity Filtering
Text perplexity is a widely used metric to evaluate the modeling capability of a language model (Chen and Goodman, 1999; Sennrich, 2012). Our preliminary experiments have shown that low-quality instances have a relatively high perplexity. This indicates that perplexity information can be used to evaluate the quality of an augmentation instance.
Since the surrogate language model built in Phase
\#1 is bidirectional, the text perplexity of an augmentation instance daug is calculated as:
$$\mathbb{P}(d_{\rm aug})=\prod_{i=1}^{s}p\left(w_{i}\mid w_{1},\cdots,w_{i-1},w_{i+1},\cdots,w_{s}\right)\tag{2}$$
where wi represents the token in the context. s is the number of tokens in daug and p (wi| w1, · · · , wi−1, wi+1, · · · , ws) is the probability of wi conditioned on the preceding tokens, according to the surrogate language model, i ∈ 1, · · · , s. Note that daug is treated as a lowquality instance and is discarded if P(daug) ≥ α while α ≥ 0 is a predefined threshold.
## 2.3.2 Confidence Ranking
We observe a significant feature space shift in the augmentation instances. These instances will be allocated with low confidences by the surrogate language model. In this case, we can leverage the classification confidence as a driver to control the quality of the augmentation instances. However, it is natural that long texts can have way more augmentation instances than short texts, thus leading to the so-called unbalanced distribution. Besides, the confidence of most augmentation instances is
≥ 95%, which is not selective as the criterion for instance filtering. To mitigate the unbalanced distribution in augmentation instances and make use of confidence, we develop a confidence ranking strategy to eliminate the redundant augmentation instances generated from long texts while retaining the rare instances having a relatively low confidence. More specifically, we apply a softmax operation on the output hidden state learned by the surrogate language model, denoted as H(daug), to evaluate the confidence of daug as:
$$\mathbb{C}(d_{\mathrm{aug}})=\operatorname{argmax}{\bigg(}{\frac{\exp(\mathbb{H}_{d a u g})}{\sum_{1}^{c}\exp(\mathbb{H}_{d a u g})}}{\bigg)},\quad(3)$$
where c is the number of classes in the original training dataset. To conduct the confidence ranking, 2 × N˜ instances are generated at first, while only the top N˜ instances are selected to carry out the confidence ranking. By doing so, we expect to obtain a balanced augmentation dataset even when there is a large variance in the confidence predicted by the surrogate language model. After the confidence ranking, the augmentation instances with Cdaug ≤ β are discarded while β ≥ 0 is a fixed threshold.
## 2.3.3 Predicted Label Constraint
Due to some breaking text transformation, text augmentation can lead to noisy data, e.g., changing a word "greatest" to "worst" in a sentence leads to an adverse label in a sentiment analysis task.
Since the surrogate language model can predict the label of an augmentation instance based on its confidence distribution, we develop another filtering strategy that eliminates the augmentation instances whose predicted label ˜ℓdaug is different from the ground truth. By doing so, we expect to mitigate the feature space bias.
## 2.4 Feature Space Shift Metric
To quantify the shift of the feature space, we propose an ensemble metric based on the overlapping ratio and distribution skewness of the t-SNE-based augmented instances' feature space.
The feature space overlapping ratio measures the diversity of the augmented instances. A larger overlapping ratio indicates that more natural instances have corresponding augmented instances.
On the other hand, the distribution skewness measure describes the uniformity of the distribution of the augmented instances. A smaller distribution skewness indicates that the natural instances have approximately equal numbers of corresponding augmented instances. To calculate the feature space shift, we first calculate the overlapping ratio and distribution skewness of the natural instances and their corresponding augmented instances. The feature space shift is calculated as follows:
$${\mathcal{S}}=1-{\mathcal{O}}+s k,$$
where O and sk are the feature space convex hull overlapping ratio and feature space distribution skewness, which will be introduced in the following subsections.
## 2.4.1 Convex Hull Overlapping Calculation
To calculate the convex hull overlapping rate, we use the Graham Scan algorithm3(Graham, 1972)
to find the convex hulls for the test set and target dataset in the t-SNE visualization, respectively.
Let P1 and P2 represent the convex hulls of two datasets in the t-SNE visualization; we calculate the overlapping rate as follows:
$${\mathcal{O}}={\frac{{\mathcal{P}}_{1}\cap{\mathcal{P}}_{2}}{{\mathcal{P}}_{1}\cup{\mathcal{P}}_{2}}},\qquad\qquad(5)$$
where ∩ and ∪ denote convex hull intersection and union operations, respectively. O is the overlapping rate between P1 and P2.
2.4.2 Distribution skewness calculation The skewness of an example distribution is computed as follows:
$$s k=\frac{m_{3}}{m_{2}^{3/2}},\qquad\qquad\qquad(6)$$ $$m_{i}=\frac{1}{N}\sum_{n=1}^{N}(x_{n}-\bar{x})^{i},\qquad\qquad(7)$$
where N is the number of instances in the distribution; sk is the skewness of an example distribution.
mi and x¯ are the i-th central moment and mean of the example distribution, respectively. Because the t-SNE has two dimensions (namely x and y
$${}^{3}{\mathrm{htttps}}\!://g i t h u b.{\mathrm{com/shapely/shapely}}.$$
axes), we measure the global skewness of the target dataset (e.g., training set, augmentation set) by summarizing the absolute value of skewness on the x and y axes in t-SNE:
$$s k^{g}=|s k^{x}|+|s k^{y}|,\qquad\qquad(8)$$
where skgis the global skewness of the target dataset; skxand skyare the skewness on the x and y axes, respectively.
By combining the convex hull overlapping ratio and distribution skewness, the proposed feature space shift metric offers a comprehensive view of how well the augmented instances align with the original data distribution. This metric can be used to evaluate the effectiveness of different data augmentation approaches, as well as to inform the fine-tuning process for better model performance.
$$(4)$$
## 3 Experimental Setup 3.1 Datasets
Our experiments are conducted on three classification tasks: the sentence-level text classification (TC), the aspect-based sentiment classification (ABSC), and natural language inference (NLI).
The datasets used for the TC task include SST2, SST5 (Socher et al., 2013) from the Stanford Sentiment Treebank, and AGNews10K4(Zhang et al.,
2015). Meanwhile, the datasets used for the ABSC
task are Laptop14, Restaurant14(Pontiki et al.,
2014), Restaurant15 (Pontiki et al., 2015), Restaurant16 (Pontiki et al., 2016), and MAMS (Jiang et al., 2019). The datasets5 used for the NLI
task are the SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018) datasets, respectively.
The split of these datasets is summarized in Table 1.
The commonly used Accuracy (i.e., Acc) and macro F1 are used as the metrics for evaluating the performance of different algorithms following existing research (Wang et al., 2016; Zhou et al.,
2022a). Additionally, all experiments are repeated five times with different random seeds. Detailed information on the hyper-parameter settings and sensitivity tests of α and β can be found in Appendix A.
Table 1: The summary of experimental datasets for the text classification, aspect-based sentiment analysis and natural language inference tasks.
| Dataset | Training Set | Validation Set | Testing Set |
|--------------|----------------|------------------|---------------|
| Laptop14 | 2328 | 0 | 638 |
| Restaurant14 | 3608 | 0 | 1120 |
| Restaurant15 | 1120 | 0 | 540 |
| Restaurant16 | 1746 | 0 | 615 |
| MAMS | 11186 | 1332 | 1336 |
| SST2 | 6920 | 872 | 1821 |
| SST5 | 8544 | 1101 | 2210 |
| AGNews10K | 7000 | 1000 | 2000 |
| SNLI | 1000 | 10000 | 10000 |
| MNLI | 1000 | 20000 | 0 |
## 3.2 Augment Backends
We use BO O S TAU G to improve five state-of-theart baseline text augmentation methods, all of which are used as the text augmentation backend of BO O S TAU G. Please find the introductions of these baselines in Appendix B and refer to Table 6 for detailed performance of BO O S TAU G based on different backends.
We also compare BO O S TAU G enhanced EDA
with the following text augmentation methods:
- EDA (TextAttack6) (Wei and Zou, 2019) performs text augmentation via random word insertions, substitutions, and deletions.
- SynonymAug (NLPAug7) (Niu and Bansal, 2018) replaces words in the original text with their synonyms. This method has been shown to be effective in improving the robustness of models on certain tasks.
- TAA (Ren et al., 2021) is a Bayesian optimization-based text augmentation method.
It searches augmentation policies and automatically finds the best augmentation instances.
- AEDA (Karimi et al., 2021) is based on the EDA,
which attempts to maintain the order of the words while changing their positions in the context. Besides, it alleviates breaking changes such as critical deletions and improves the robustness.
- AMDA (Si et al., 2021) linearly interpolates the representations of pairs of training instances, which has a diversified augmentation set compared to discrete text adversarial augmentation.
In our experiments, LSTM, BERT-BASE(Devlin et al., 2019), and DeBERTa-BASE(He et al., 2021)
are used as the objective models for the TC task. FastLCF is an objective model available for the
## 4 Experimental Results 4.1 Main Results
From the results shown in Table 2, it is clear that BO O S TAU G consistently improves the performance of the text augmentation method EDA across all datasets and models. It is also worth noting that some traditional text augmentation methods can actually harm the performance of the classification models. Additionally, the performance improvement is relatively small for larger datasets like SST-2, SST-5, and MAMS. Furthermore, the performance of LSTM is more affected by text augmentation, as it lacks the knowledge gained from the large-scale corpus that is available in PLMs.
When comparing the different text augmentation methods, it is apparent that EDA performs the best, despite being the simplest method. On the other hand, SplitAug performs the worst for LSTM because its augmentation instances are heavily biased in the feature space due to the word splitting transformation. The performance of SpellingAug is similar to EDA. This can be attributed to the fact that PLMs have already captured some common misspellings during pretraining. Additionally, PLM-based augmentation methods like WordsEmbsAug tend to generate instances with unknown words, further exacerbating the feature space shift of the augmented texts.
We also compare the performance of BO O S TAU G with several state-of-the-art text augmentation methods. The results of these comparisons can be found in Table 3. From the results, it can be seen that even when using EDA
(Wei and Zou, 2019) as the backend, BO O S TAU G
outperforms other state-of-the-art methods such as AEDA (Karimi et al., 2021), AMDA (Si et al.,
2021), and Bayesian optimization-based TAA
(Ren et al., 2021) on the full SST2 dataset.
## 4.2 Ablation Study
To gain a deeper understanding of the working mechanism of BO O S TAU G, we conduct experiments to evaluate the effectiveness of crossboosting, predicted label constraint, confidence ranking, and perplexity filtering. The results, which can be found in Table 4, show that the performance of the variant MonoAug is significantly lower than that of BO O S TAU G. This is because MonoAug trains the surrogate language model using the entire
Augmentation Model Laptop14 Restaurant14 Restaurant15 Restaurant16 MAMS SST2 SST5 **AGNews10K**
Acc F1 Acc F1 Acc F1 Acc F1 Acc F1 Acc F1 Acc F1 Acc F1
None
LSTM 71.32 65.45 77.54 66.89 78.61 58.54 87.40 64.41 56.96 56.18 84.36 84.36 45.29 44.61 87.60 87.36
BERT 79.47 75.70 85.18 78.31 83.61 69.73 91.3 77.16 82.78 82.04 90.88 90.88 53.53 52.06 92.47 92.26
DeBERTa 83.31 80.02 87.72 81.73 86.58 74.22 93.01 81.93 83.31 82.87 95.07 95.07 56.47 55.58 92.30 92.13
FastLCF 83.23 79.68 88.5 82.7 87.74 73.69 93.69 81.66 83.46 82.88 - - - - - -
EDA
LSTM 68.65 62.09 76.18 62.41 76.30 56.88 85.59 61.78 56.59 55.33 84.79 84.79 43.85 43.85 87.72 87.46
BERT 78.37 74.23 83.75 75.38 81.85 65.63 91.38 77.27 81.81 81.10 91.16 91.16 51.58 50.49 92.50 92.28
DeBERTa 80.96 78.65 86.79 79.82 84.44 70.40 93.01 77.59 81.96 81.96 94.07 94.07 56.43 53.88 92.55 92.33
FastLCF 81.97 79.57 87.68 81.52 86.39 72.51 93.17 78.96 82.19 81.63 - - - - - -
SpellingAug
LSTM 67.24 60.30 75.36 63.01 73.52 49.04 84.72 53.92 55.99 55.16 83.14 83.14 41.45 40.40 87.25 86.96
BERT 73.59 69.11 82.54 73.18 79.63 62.32 89.76 74.74 81.89 81.42 91.00 91.00 52.26 50.90 92.42 92.22
DeBERTa 80.17 76.01 85.13 76.67 85.83 71.54 92.76 78.33 81.89 81.24 93.68 93.68 55.95 53.78 92.68 92.50
FastLCF 79.62 74.81 86.03 78.73 87.41 75.14 92.60 75.27 82.19 81.66 - - - - - -
SplitAug
LSTM 62.98 56.53 73.43 58.57 70.19 45.71 83.93 54.41 56.74 55.34 84.29 84.29 44.00 42.10 87.23 87.01
BERT 75.47 70.56 82.86 74.48 82.87 65.19 90.98 77.51 81.74 81.35 90.88 90.88 51.99 50.95 92.45 92.16
DeBERTa 79.15 75.72 86.03 79.28 85.46 70.43 92.76 79.79 81.59 81.09 94.29 94.29 55.51 49.77 92.52 92.29
FastLCF 81.82 78.46 86.34 78.36 86.67 70.87 93.09 76.50 82.07 81.53 - - - - - -
ContextualWordEmbsAug
LSTM 67.40 61.57 75.62 62.13 74.44 51.67 84.98 58.67 56.06 55.10 83.14 83.14 44.07 42.03 87.53 87.24
BERT 75.63 70.79 83.26 75.11 78.61 61.48 90.24 72.37 81.29 80.50 91.02 91.02 51.27 50.27 92.10 91.86
DeBERTa 76.88 71.98 85.49 77.22 84.63 70.50 92.28 77.42 81.66 81.32 94.12 94.12 55.48 53.60 92.80 92.62
FastLCF 79.08 74.61 85.62 76.88 84.91 70.06 91.38 76.27 81.89 81.09 - - - - - -
BackTranslationAug
LSTM 68.50 62.22 78.12 66.70 78.85 59.08 86.97 63.47 - - - - - - - -
BERT 79.94 76.19 85.54 78.51 84.42 72.05 92.02 85.78 - - - - - - - -
DeBERTa 84.17 81.15 88.93 83.54 89.42 78.67 93.97 80.52 - - - - - - - -
FastLCF 82.76 79.82 89.46 84.94 88.13 75.70 94.14 81.82 - - - - - - - -
BO O S TAU G (EDA)
LSTM 73.20† 67.46† 79.12† 68.07† 80.06† 59.61† 87.80† 65.33† 59.21† 59.58† 85.83† 85.83† 45.93† 43.59† 88.45 88.16
BERT 80.10† 76.48† 86.34† 79.99† 86.12† 73.79† 91.95† 79.12† 84.01† 83.44† 92.33† 92.33† 53.94† 52.80† 92.48 92.25
DeBERTa 84.56† 81.77† 89.02† 83.35† 88.33† 76.77† 93.58† 81.93† 84.51† 83.97† 96.09† 96.09† 57.78† 56.15† 92.95 **92.76**
FastLCF 85.11† 82.18† 90.38† 85.04† 89.81† 77.92† 94.37† **82.67**† 84.13† 82.97†- - - - - -
training set, leading to a high degree of similarity between the original and augmentation instances.
This data overlapping problem, as discussed in Section 2.1, results in biased instance filtering and overfitting of the instances to the training fold data distribution. Additionally, the variant without the perplexity filtering strategy performs the worst, indicating that the perplexity filtering strategy is crucial in removing instances with syntactical and grammatical errors. The performance of the variants without the predicted label constraint and confidence ranking is similar, with the label constraint helping to prevent the mutation of features into an adverse meaning and the confidence ranking helping to eliminate out-of-domain words and reduce feature space shift.
## 4.3 Feature Space Shift Investigation
In this subsection, we explore the feature space shift problem in more detail by using visualizations and the feature space shift metric. We use t-SNE to visualize the distribution of the features of the testing set and compare it to different augmented variants. The full results of feature space shift metrics are available in Figure 6. The results of feature space shift metrics in our experiment show that the augmentation instances generated by BO O S TAU G
have the least shift of feature space. Specifically, the overlapping ratio and skewness in relation to the testing set are consistently better than those of the training set. This explains the performance improvement seen when using BO O S TAU G in previous experiments. In contrast, the augmentation instances generated by EDA, which was the best peer text augmentation method, have a worse overlapping rate compared to even the training set. This explains the performance degradation when using EDA on the baseline classification models. It is also noteworthy that the quality of the augmentation instances generated by MonoAug is better than EDA.
## 4.4 **Effect Of Augmentation Instances Number**
To further understand the effectiveness of BO O S TAU G, we conduct an experiment to analyze the relationship between the number of augmentation instances generated and the performance of
| Augmentation | Model | Acc | F1 |
|-------------------|---------|--------------|--------------|
| None∗ | BERT | 90.88 (0.31) | 90.87 (0.31) |
| EDA∗ | BERT | 90.99 (0.46) | 90.99 (0.46) |
| SynonymAug∗ | BERT | 91.32 (0.55) | 91.31 (0.55) |
| TAA∗ | BERT | 90.94 (0.31) | 90.94 (0.31) |
| AEDA | BERT | 91.76 ( - ) | - |
| AMDA | BERT | 91.54 ( - ) | - |
| BO O S TAU G(EDA) | BERT | 92.33 (0.29) | 92.33 (0.29) |
![7_image_0.png](7_image_0.png)
the classification models. We use Acc and F1 as the evaluation metrics and plot the trajectories of these metrics with error bars against the number of augmentation instances generated for an example by using BO O S TAU G. The results are shown in Figure 3. For comparison, the trajectory visualization plots of MonoAug and EDA can also be found in Figure 7. From the results, it is clear to see that the performance of the classification models improves as the number of augmentation instances increases, but eventually reaches a saturation point.
Furthermore, it is observed that the performance improvement achieved by BO O S TAU G is consistently better than that of MonoAug and EDA. This further confirms the effectiveness of BO O S TAU G
in mitigating the feature space shift problem and improving the performance of the classification models.
However, it is also important to consider the computational budgets required to generate a large number of augmentation instances, as this can impact the overall efficiency of the text augmentation method being used.
## 4.5 Hyper-Parameter Sensitivity Analysis
We find that there is no single best setting for the two hyper-parameters, α and β, in different situations such as different datasets and backend augmentation methods. To explore the sensitivity of these hyper-parameters, we conducted experiments on the Laptop14 and Restaurant14 datasets and show the Scott-Knott rank test (Mittas and Angelis, 2013) plots and performance box plots in Figure 4 and Figure 5, respectively. We found that the best value of α highly depends on the dataset. For the Laptop14 and Restaurant14 datasets, a value of α = 0.5 was found to be the best choice according to Figure 4. However, it's worth noting that the smaller the value of α, the higher the computation complexity due to the need for more augmentation instances. To balance efficiency and performance, we recommend a value of α = 0.99
(α = 1 means no augmentation instances survive)
in BO O S TAU G, which reduces computation complexity. Additionally, we found that β is relatively easy to determine, with a value of β = 4 being commonly used.
![8_image_0.png](8_image_0.png)
## 5 Related Works
As pretraining has advanced, text augmentation techniques have become an increasingly popular area of research (Sennrich et al., 2016; Coulombe, 2018; Li et al., 2019; Wei and Zou, 2019; Kumar et al., 2020; Lewis et al., 2020; Xie et al., 2020; Bi et al., 2021; Ren et al., 2021; Haralabopoulos et al.,
2021; Wang et al., 2022c; Yue et al., 2022; Zhou et al., 2022a; Kamalloo et al., 2022; Wang et al.,
2022a). Many of these techniques focus on lowresource scenarios (Chen et al., 2020; Zhou et al.,
2022a; Kim et al., 2022; Zhou et al., 2022b; Wu et al., 2022; Yang et al., 2022; Wang et al., 2022b; Yang et al., 2022). However, they tend to fail when applied to large public datasets (Zhou et al., 2022a).
Recent prominent works (Sennrich et al., 2016; Kumar et al., 2020; Lewis et al., 2020; Ng et al., 2020; Body et al., 2021; Chang et al., 2021; Luo et al.,
2021; Wang et al., 2022b) recognize the significance of pre-trained language models (PLMs) for text augmentation and propose PLM-based methods to improve text augmentation. However, the quality of augmentation instances generated by unsupervised PLMs cannot be guaranteed. Some research (Dong et al., 2021) has attempted to use adversarial training in text augmentation, which can improve robustness, but these methods are more suitable for low-sample augmentation scenarios and cause shifted feature spaces in large datasets.
While recent studies have emphasized the importance of quality control for augmentation instances
(Lewis et al., 2021; Kamalloo et al., 2022; Wang et al., 2022b), there remains a need for a transferable augmentation instance-filtering framework that can serve as an external quality controller to improve existing text augmentation methods.
Our work aims to address the failure mode of large dataset augmentation and improve existing augmentation methods more widely. Specifically, BO O S TAU G is a simple but effective framework that can work with a variety of existing augmentation backends, including EDA (Wei and Zou, 2019) and PLM-based augmentation (Kumar et al.,
2020).
## 6 Conclusion
Existing text augmentation methods usually lead to performance degeneration in large datasets due to numerous low-quality augmentation instances, while the reason for performance degeneration has not been well explained. We find low-quality augmentation instances usually have shifted feature space compare to natural instances. Therefore, we propose a universal augmentation instance filter framework BO O S TAU G to widely enhance existing text augmentation methods. BO O S TAU G is an external and flexible framework, all the existing text augmentation methods can be seamless improved. Experimental results on three TC datasets and five ABSC datasets show that BO O S TAU G is able to alleviate feature space shift in augmentation instances and significantly improve existing augmentation methods.
## Acknowledgements
This work was supported by UKRI Future Leaders Fellowship (MR/X011135/1, MR/S017062/1),
NSFC (62076056), Alan Turing Fellowship, EPSRC (2404317), Royal Society (IES/R2/212077)
and Amazon Research Award.
## 7 Limitations
We propose and solve the feature space shift problem in text augmentation. However, there is a limitation that remains. BO O S TAU G cannot preserve the grammar and syntax to a certain extent. We apply the perplexity filtering strategy, but it is an implicit constraint and cannot ensure the syntax quality of the augmentation instances due to some breaking transformations, such as keyword deletions and modifications. However, we do not need precise grammar and syntax information in most classification tasks, especially in PLM-based classification. For some syntax-sensitive tasks, e.g.,
syntax parsing and the syntax-based ABSC (Zhang et al., 2019; Phan and Ogunbona, 2020; Dai et al.,
2021), ensuring the syntax quality of the augmented instances is an urgent problem. Therefore, BO O S TAU G may not be an best choice for some tasks or models requiring syntax as an essential modeling objective (Zhang et al., 2019). In other words, the syntax quality of BO O S TAU G depends on the backend.
## References
Wei Bi, Huayang Li, and Jiacheng Huang. 2021. Data augmentation for text generation without any augmented data. In ACL/IJCNLP'21: Proc. of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, pages 2223–
2237. Association for Computational Linguistics.
Thomas Body, Xiaohui Tao, Yuefeng Li, Lin Li, and Ning Zhong. 2021. Using back-and-forth translation to create artificial augmented textual data for sentiment analysis models. *Expert Syst. Appl.*,
178:115033.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference.
In *EMNLP'15: Proc. of the 2015 Conference on* Empirical Methods in Natural Language Processing, pages 632–642. The Association for Computational Linguistics.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.
2020. Language models are few-shot learners. In NeurlIPS'20: Advances in Neural Information Processing Systems.
Ernie Chang, Xiaoyu Shen, Dawei Zhu, Vera Demberg, and Hui Su. 2021. Neural data-to-text generation with lm-based text augmentation. In *EACL'21: Proc.*
of the 16th Conference of the European Chapter of the Association for Computational Linguistics, pages 758–768. Association for Computational Linguistics.
Jiaao Chen, Zichao Yang, and Diyi Yang. 2020. Mixtext:
Linguistically-informed interpolation of hidden space for semi-supervised text classification. In ACL'20:
Proc. of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2147–2157. Association for Computational Linguistics.
Stanley F. Chen and Joshua Goodman. 1999. An empirical study of smoothing techniques for language modeling. *Comput. Speech Lang.*, 13(4):359–393.
Claude Coulombe. 2018. Text data augmentation made simple by leveraging NLP cloud apis. *CoRR*,
abs/1812.04718.
Junqi Dai, Hang Yan, Tianxiang Sun, Pengfei Liu, and Xipeng Qiu. 2021. Does syntax matter? A strong baseline for aspect-based sentiment analysis with roberta. In *NAACL-HLT'21: Proc. of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 1816–1829. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *NAACL-HLT'19: Proc. of the 2019* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171–4186. Association for Computational Linguistics.
Xin Dong, Yaxin Zhu, Zuohui Fu, Dongkuan Xu, and Gerard de Melo. 2021. Data augmentation with adversarial training for cross-lingual NLI. In ACL/IJCNLP'21: Proc. of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, pages 5158–5167. Association for Computational Linguistics.
Ronald L. Graham. 1972. An efficient algorithm for determining the convex hull of a finite planar set. *Inf.*
Process. Lett., 1(4):132–133.
Giannis Haralabopoulos, Mercedes Torres Torres, Ioannis Anagnostopoulos, and Derek McAuley. 2021.
Text data augmentations: Permutation, antonyms and negation. *Expert Syst. Appl.*, 177:114769.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. Deberta: decoding-enhanced bert with disentangled attention. In *ICLR'21: 9th International Conference on Learning Representations*.
OpenReview.net.
Qingnan Jiang, Lei Chen, Ruifeng Xu, Xiang Ao, and Min Yang. 2019. A challenge dataset and effective models for aspect-based sentiment analysis. In EMNLP-IJCNLP'19: Proc. of the 2019 Conference
on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 6279–6284. Association for Computational Linguistics.
Ehsan Kamalloo, Mehdi Rezagholizadeh, and Ali Ghodsi. 2022. When chosen wisely, more data is what you need: A universal sample-efficient strategy for data augmentation. In *ACL'22: Findings of the Association for Computational Linguistics*, pages 1048–
1062. Association for Computational Linguistics.
Akbar Karimi, Leonardo Rossi, and Andrea Prati. 2021.
AEDA: an easier data augmentation technique for text classification. In *EMNLP'21: Findings of the Association for Computational Linguistics*, pages 2748–
2754. Association for Computational Linguistics.
Hazel H. Kim, Daecheol Woo, Seong Joon Oh, JeongWon Cha, and Yo-Sub Han. 2022. ALP: data augmentation using lexicalized pcfgs for few-shot text classification. In AAAI'22: Thirty-Sixth AAAI Conference on Artificial Intelligence, pages 10894–10902. AAAI
Press.
Ashutosh Kumar, Satwik Bhattamishra, Manik Bhandari, and Partha P. Talukdar. 2019. Submodular optimization-based diverse paraphrasing and its effectiveness in data augmentation. In NAACL-HLT'19:
Proc. of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3609–3619. Association for Computational Linguistics.
Varun Kumar, Ashutosh Choudhary, and Eunah Cho.
2020. Data augmentation using pre-trained transformer models. *CoRR*, abs/2003.02245.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In ACL'20: Proc. of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880. Association for Computational Linguistics.
Patrick S. H. Lewis, Yuxiang Wu, Linqing Liu, Pasquale Minervini, Heinrich Küttler, Aleksandra Piktus, Pontus Stenetorp, and Sebastian Riedel. 2021. PAQ: 65 million probably-asked questions and what you can do with them. *Trans. Assoc. Comput. Linguistics*,
9:1098–1115.
Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. 2019. Textbugger: Generating adversarial text against real-world applications. In NDSS'19:
26th Annual Network and Distributed System Security Symposium. The Internet Society.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692.
Qiaoyang Luo, Lingqiao Liu, Yuhao Lin, and Wei Zhang. 2021. Don't miss the labels: Label-semantic augmented meta-learner for few-shot text classification. In *ACL/IJCNLP'21: Findings of the Association for Computational Linguistics*, volume ACL/IJCNLP 2021, pages 2773–2782. Association for Computational Linguistics.
Zhengjie Miao, Yuliang Li, and Xiaolan Wang. 2021.
Rotom: A meta-learned data augmentation framework for entity matching, data cleaning, text classification, and beyond. In *SIGMOD'21: International* Conference on Management of Data, Virtual Event, China, June 20-25, 2021, pages 1303–1316. ACM.
Nikolaos Mittas and Lefteris Angelis. 2013. Ranking and clustering software cost estimation models through a multiple comparisons algorithm. IEEE
Trans. Software Eng., 39(4):537–551.
Nathan Ng, Kyunghyun Cho, and Marzyeh Ghassemi.
2020. SSMBA: self-supervised manifold based data augmentation for improving out-of-domain robustness. In EMNLP'20: Proc. of the 2020 Conference on Empirical Methods in Natural Language Processing, pages 1268–1283. Association for Computational Linguistics.
Tong Niu and Mohit Bansal. 2018. Adversarial oversensitivity and over-stability strategies for dialogue models. In *CoNLL'18: Proc. of the 22nd Conference on Computational Natural Language Learning*,
pages 486–496. Association for Computational Linguistics.
Minh Hieu Phan and Philip O. Ogunbona. 2020. Modelling context and syntactical features for aspectbased sentiment analysis. In ACL'20: Proc. of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3211–3220. Association for Computational Linguistics.
Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Mohammad Al-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orphée De Clercq, Véronique Hoste, Marianna Apidianaki, Xavier Tannier, Natalia V.
Loukachevitch, Evgeniy V. Kotelnikov, Núria Bel, Salud María Jiménez Zafra, and Gülsen Eryigit. 2016.
Semeval-2016 task 5: Aspect based sentiment analysis. In *NAACL-HLT'16: Proc. of the 10th International Workshop on Semantic Evaluation*, pages 19–30. The Association for Computer Linguistics.
Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015.
Semeval-2015 task 12: Aspect based sentiment analysis. In *NAACL-HLT'15: Proc. of the 9th International Workshop on Semantic Evaluation*, pages 486–
495. The Association for Computer Linguistics.
Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. Semeval-2014 task 4: Aspect based sentiment analysis. In ACL'14: Proc. of the 8th International Workshop on Semantic Evaluation, pages 27–35. The Association for Computer Linguistics.
Shuhuai Ren, Jinchao Zhang, Lei Li, Xu Sun, and Jie Zhou. 2021. Text autoaugment: Learning compositional augmentation policy for text classification. In EMNLP'21: Proc. of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9029–9043. Association for Computational Linguistics.
Rico Sennrich. 2012. Perplexity minimization for translation model domain adaptation in statistical machine translation. In EACL'12: 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 539–549. The Association for Computer Linguistics.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016. Improving neural machine translation models with monolingual data. In ACL'16: Proc. of the 54th Annual Meeting of the Association for Computational Linguistics. The Association for Computer Linguistics.
Chenglei Si, Zhengyan Zhang, Fanchao Qi, Zhiyuan Liu, Yasheng Wang, Qun Liu, and Maosong Sun.
2021. Better robustness by more coverage: Adversarial and mixup data augmentation for robust finetuning. In *ACL/IJCNLP'21: Findings of the Association* for Computational Linguistics, volume ACL/IJCNLP
2021 of *Findings of ACL*, pages 1569–1576. Association for Computational Linguistics.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP'13: Proc. of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642. ACL.
Siyuan Wang, Wanjun Zhong, Duyu Tang, Zhongyu Wei, Zhihao Fan, Daxin Jiang, Ming Zhou, and Nan Duan. 2022a. Logic-driven context extension and data augmentation for logical reasoning of text. In ACL'22: Findings of the Association for Computational Linguistics, pages 1619–1629. Association for Computational Linguistics.
Yequan Wang, Minlie Huang, Xiaoyan Zhu, and Li Zhao. 2016. Attention-based LSTM for aspectlevel sentiment classification. In EMNLP'16: Proc.
of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 606–615. The Association for Computational Linguistics.
Yufei Wang, Can Xu, Qingfeng Sun, Huang Hu, Chongyang Tao, Xiubo Geng, and Daxin Jiang.
2022b. Promda: Prompt-based data augmentation for low-resource NLU tasks. In ACL'22: Proc. of the 60th Annual Meeting of the Association for Computational Linguistics, pages 4242–4255. Association for Computational Linguistics.
Yufei Wang, Can Xu, Qingfeng Sun, Huang Hu, Chongyang Tao, Xiubo Geng, and Daxin Jiang.
2022c. Promda: Prompt-based data augmentation for low-resource NLU tasks. In *ACL'22: Proc. of the* 60th Annual Meeting of the Association for Computational Linguistics, pages 4242–4255. Association for Computational Linguistics.
Jason W. Wei and Kai Zou. 2019. EDA: easy data augmentation techniques for boosting performance on text classification tasks. In EMNLP-IJCNLP'19:
Proc. of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 6381–6387. Association for Computational Linguistics.
Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In NAACL'18: Proc. of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics, pages 1112–1122. Association for Computational Linguistics.
Xing Wu, Chaochen Gao, Meng Lin, Liangjun Zang, and Songlin Hu. 2022. Text smoothing: Enhance various data augmentation methods on text classification tasks. In *ACL'22: Proc. of the 60th Annual Meeting of the Association for Computational Linguistics*,
pages 871–875. Association for Computational Linguistics.
Qizhe Xie, Zihang Dai, Eduard H. Hovy, Thang Luong, and Quoc Le. 2020. Unsupervised data augmentation for consistency training. In *NeurIPS'20: Advances* in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems.
Kevin Yang, Olivia Deng, Charles Chen, Richard Shin, Subhro Roy, and Benjamin Van Durme. 2022. Addressing resource and privacy constraints in semantic parsing through data augmentation. In *ACL'22: Findings of the Association for Computational Linguistics*,
pages 3685–3695. Association for Computational Linguistics.
Kang Min Yoo, Dongju Park, Jaewook Kang, Sang-Woo Lee, and Woo-Myoung Park. 2021. Gpt3mix: Leveraging large-scale language models for text augmentation. In *EMNLP'21: Findings of the Association for* Computational Linguistics, pages 2225–2239. Association for Computational Linguistics.
Tianchi Yue, Shulin Liu, Huihui Cai, Tao Yang, Shengkang Song, and Tinghao Yu. 2022. Improving chinese grammatical error detection via data augmentation by conditional error generation. In *ACL'22:*
Findings of the Association for Computational Linguistics, pages 2966–2975. Association for Computational Linguistics.
Chen Zhang, Qiuchi Li, and Dawei Song. 2019.
Aspect-based sentiment classification with aspectspecific graph convolutional networks. In EMNLPIJCNLP'19: Proc. of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 4567–4577. Association for Computational Linguistics.
Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015.
Character-level convolutional networks for text classification. In *NeurlIPS'15: Advances in Neural Information Processing Systems 28: Annual Conference* on Neural Information Processing Systems, pages 649–657.
Jing Zhou, Yanan Zheng, Jie Tang, Li Jian, and Zhilin Yang. 2022a. Flipda: Effective and robust data augmentation for few-shot learning. In *Proc. of the 60th* Annual Meeting of the Association for Computational Linguistics, pages 8646–8665. Association for Computational Linguistics.
Ran Zhou, Xin Li, Ruidan He, Lidong Bing, Erik Cambria, Luo Si, and Chunyan Miao. 2022b. MELM:
data augmentation with masked entity language modeling for low-resource NER. In *ACL'22: Proc. of the* 60th Annual Meeting of the Association for Computational Linguistics, pages 2251–2262. Association for Computational Linguistics.
## A Hyperparameter Settings A.1 Hyperparameter Settings For Bo O S Tau G
Some important parameters are set as follows.
- k is set to 5 for the k-fold cross-boosting on all datasets.
- The number of augmentation instances per example N˜ is 8.
- The transformation probability of each token in a sentence is set to 0.1 for all augmentation methods.
- The fixed confidence and perplexity thresholds are set as α = 0.99 and β = 5 based on grid search. We provide sensitivity test of α and β in Appendix C.2.
- The learning rates of base models LSTM and DeBERTa-BASE are set as 10−3and 10−5, respectively.
- The batch size and maximum sequence modeling length are 16 and 80, respectively.
- The L2 regularization parameter λ is 10−8; we use Adam as the optimizer for all models during the training process.
## B Baseline Backends
We use BO O S TAU G to improve five state-of-the-art baseline text augmentation methods, all of which are used as the text augmentation back end of BO O S TAU G. Please refer to Table 6 for detailed experimental results.
- EDA(TextAttack8) (Wei and Zou, 2019) performs text augmentation via random word insertions, substitutions and deletions.
- SynonymAug(NLPAug9) (Niu and Bansal, 2018) replaces words in the original text with their synonyms. This method has been shown to be effective in improving the robustness of models on certain tasks.
- SpellingAug (Coulombe, 2018): it substitutes words according to spelling mistake dictionary.
- SplitAug (Li et al., 2019) (NLPAug): it splits some words in the sentence into two words randomly.
- BackTranslationAug (Sennrich et al.,
2016) (NLPAug): it is a sentence level augmentation method based on sequence translation.
- ContextualWordEmbsAug (Kumar et al.,
2020) (NLPAug): it substitutes similar words ac-8https://github.com/QData/TextAttack 9https://github.com/makcedward/nlpaug
cording to the PLM (i.e., Roberta-base (Liu et al., 2019)) given the context.
## C Additional Experiments C.1 **Natural Language Inference Experiments**
The experimental results in Table 5 show that the performance of both BERT and DeBERTa models can be improved by applying BO O S TAU G. With BO O S TAU G, the accuracy of the BERT model on SNLI improves from 70.72% to 73.08%, and on MNLI from 51.11% to 52.49%. The DeBERTa model also shows significant improvement with EDA, achieving 86.39% accuracy on SNLI and 78.04% on MNLI. These results demonstrate the effectiveness of BO O S TAU G in improving the generalizability of natural language inference models, and its compatibility with different state-of-the-art pre-trained models such as BERT and DeBERTa.
Table 5: The additional experimental results on the SNLI and MNLI datasets for natural language inference.
The back end of BO O S TAU G is EDA.
![13_image_0.png](13_image_0.png)
![13_image_1.png](13_image_1.png)
## C.2 Hyper-Parameter Sensitivity Experiment
We provide the experimental results of BO O S TAU G
on the Laptop14 and Restaurant14 datasets in Figure 5.
## C.3 Performance Of Bo O S Tau G **On Different** Backends
To investigate the generalization ability of BO O S TAU G, we evaluate its performance based on the existing augmentation backends. From the results shown in Table 6, we find that the performance of these text augmentation back ends can be improved by using our proposed BO O S TAU G.
Especially by cross-referencing the results shown in Table 2, we find that the conventional text augmentation methods can be enhanced if appropriate instance filtering strategies are applied.
Another interesting observation is that PLMs are not effective for text augmentation, e.g.,
WordEmdsAug is outperformed by EDA in most comparisons10. Moreover, PLMs are resourceintense and usually cause a biased feature space.
This is because PLMs can generate some unknown words, which are outside the testing set, during the pre-training stage. Our experiments indicate that using PLM as an augmentation instance filter, instead of a text augmentation tool directly, can help alleviate the feature space shift.
## C.4 Visualization Of Feature Space
Figure 6 shows the feature space shift of the ABSC
datasets, where the augmentation back end of BO O S TAU G is EDA.
## C.5 Trajectory Visualization Of Rq4
Figure 7 shows the performance trajectory visualization of MonoAug and EDA. Compared to BO O S TAU G, MonoAug and existing augmentation methods usually trigger performance sacrifice while augmentation instances for each example are more than 3.
Table 6: Performance comparison of BO O S TAU G based on different augment back ends.
Backend ModelMAMS SST2 SST5 **AGNews10K**
![14_image_0.png](14_image_0.png)
![14_image_1.png](14_image_1.png)
![14_image_2.png](14_image_2.png)
![14_image_3.png](14_image_3.png)
![14_image_4.png](14_image_4.png)
Acc F1 Acc F1 Acc F1 Acc F1
![14_image_8.png](14_image_8.png)
LSTM 56.96 56.18 82.37 82.37 44.39 43.60 87.60 87.36 BERT 82.78 82.04 90.77 90.76 52.90 53.02 92.47 92.26 DeBERTa 83.31 82.87 95.28 95.28 56.47 55.58 92.30 92.13 LSTM 59.21 59.58 85.83 85.83 45.93 43.59 88.45 88.16 BERT 84.01 83.44 92.33 92.33 53.94 52.80 92.48 92.25 DeBERTa 84.51 83.97 96.09 **96.09** 57.78 **56.15** 92.95 92.76 LSTM 58.50 57.65 85.23 85.23 43.39 42.45 87.93 87.63 BERT 83.23 82.70 92.01 92.01 52.26 51.03 91.82 91.59 DeBERTa 83.98 83.44 95.22 95.22 **57.91** 55.88 92.77 92.54 LSTM 58.65 57.23 85.64 85.64 46.04 43.97 87.65 87.42 BERT 83.05 82.49 92.20 92.20 51.86 51.39 91.92 91.69 DeBERTa 82.67 82.26 94.76 94.76 57.67 55.90 92.70 92.51 LSTM 59.54 57.58 86.30 86.30 46.47 44.15 88.38 88.10 BERT 83.31 82.72 91.76 91.76 52.49 50.27 92.43 92.24 DeBERTa 83.35 82.87 95.33 95.33 57.22 56.08 93.88 **93.70**
![14_image_5.png](14_image_5.png)
![14_image_6.png](14_image_6.png)
![14_image_7.png](14_image_7.png)
![14_image_9.png](14_image_9.png)
![14_image_10.png](14_image_10.png)
![15_image_0.png](15_image_0.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
7
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 3
✓ B1. Did you cite the creators of artifacts you used?
3
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
The licenses of all artifacts are well announced on their websites
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
3 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. The documentation of all artifacts are well organized on their websites
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
3
## C ✓ **Did You Run Computational Experiments?** 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix B.1
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
4 C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
nguyen-etal-2023-gradient | Gradient-Boosted Decision Tree for Listwise Context Model in Multimodal Review Helpfulness Prediction | https://aclanthology.org/2023.findings-acl.106 | Multimodal Review Helpfulness Prediction (MRHP) aims to rank product reviews based on predicted helpfulness scores and has been widely applied in e-commerce via presenting customers with useful reviews. Previous studies commonly employ fully-connected neural networks (FCNNs) as the final score predictor and pairwise loss as the training objective. However, FCNNs have been shown to perform inefficient splitting for review features, making the model difficult to clearly differentiate helpful from unhelpful reviews. Furthermore, pairwise objective, which works on review pairs, may not completely capture the MRHP goal to produce the ranking for the entire review list, and possibly induces low generalization during testing. To address these issues, we propose a listwise attention network that clearly captures the MRHP ranking context and a listwise optimization objective that enhances model generalization. We further propose gradient-boosted decision tree as the score predictor to efficaciously partition product reviews{'} representations. Extensive experiments demonstrate that our method achieves state-of-the-art results and polished generalization performance on two large-scale MRHP benchmark datasets. | # Gradient-Boosted Decision Tree For Listwise Context Model In Multimodal Review Helpfulness Prediction
Thong Nguyen1, Xiaobao Wu2, Xinshuai Dong3**, Anh Tuan Luu**2∗
,
Cong-Duy Nguyen2, Zhen Hai4, **Lidong Bing**4 1National University of Singapore, Singapore 2Nanyang Technological University, Singapore 3Carnegie Mellon University, USA
4DAMO Academy, Alibaba Group [email protected], [email protected]
## Abstract
Multimodal Review Helpfulness Prediction
(MRHP) aims to rank product reviews based on predicted helpfulness scores and has been widely applied in e-commerce via presenting customers with useful reviews. Previous studies commonly employ fully-connected neural networks (FCNNs) as the final score predictor and pairwise loss as the training objective.
However, FCNNs have been shown to perform inefficient splitting for review features, making the model difficult to clearly differentiate helpful from unhelpful reviews. Furthermore, pairwise objective, which works on review pairs, may not completely capture the MRHP goal to produce the ranking for the entire review list, and possibly induces low generalization during testing. To address these issues, we propose a listwise attention network that clearly captures the MRHP ranking context and a listwise optimization objective that enhances model generalization. We further propose gradient-boosted decision tree as the score predictor to efficaciously partition product reviews' representations. Extensive experiments demonstrate that our method achieves state-of-the-art results and polished generalization performance on two large-scale MRHP benchmark datasets.
## 1 Introduction
E-commerce platforms, such as Amazon and Lazada, have achieved steady development. These platforms generally provide purchasers' reviews to supply justification information for new consumers and help them make decisions. Nevertheless, the quality and usefulness of reviews can vary hugely:
some are helpful with coherent and informative content while others unhelpful with trivial or irrelevant information. Due to this, the Multimodal Review Helpfulness Prediction (MRHP) task is proposed. It ranks the reviews by predicting their helpfulness scores based on the textual and visual
∗Corresponding Author modality of products and reviews, because helpful reviews should comprise not only precise and informative textual material, but also consistent images with text content (Liu et al., 2021; Nguyen et al.,
2022). This can help consumers find helpful reviews instead of unhelpful ones, resulting in more appealing E-commerce platforms.
In MRHP, multimodal reviews naturally form ranking partitions based on user votings, where each partition exhibits distinct helpfulness feature level (Ma et al., 2021). As such, the MRHP score regressor's function is to assign scores to indicate the partition for hidden features of product reviews.
However, current MRHP approaches employ fullyconnected neural networks (FCNNs), which cannot fulfill the partition objective. In particular, FCNNs are ineffective in feature scaling and transformation, thus being inadept at feature space splitting and failing to work efficiently in ranking problems that involve ranking partitions (Beutel et al., 2018; Qin et al., 2021). An illustration would be in Figure 1, where the helpfulness scores predicted by FCNNs do not lucidly separate helpful and unhelpful reviews. Severely, some unhelpful reviews possess logits that can even stay in the range of helpful ones, bringing about fallacious ranking.
In addition to incompetent model architectures, existing MRHP frameworks also employ suboptimal loss function: they are mostly trained on a pairwise loss to learn review preferences, which unfortunately mismatches the listwise nature of review ordering prediction. Firstly, the mistmatch might empirically give rise to inefficient ranking performance (Pasumarthi et al., 2019; Pobrotyn and Białobrzeski, 2021). Second, pairwise traning loss considers all pairs of review as equivalent. In consequence, the loss cannot differentiate a pair of useful and not useful reviews from a pair of moderately useful and not useful ones, which results in a model that distinguishes poorly between useful and moderately useful reviews.
![1_image_0.png](1_image_0.png)
To address these issues, we first propose a Gradient-Boosted Decision Tree (GBDT) as the helpfulness score regressor to utilize both its huge capacity of partitioning feature space (Leboeuf et al., 2020) and differentiability compared with standard decision trees for end-to-end training. We achieve the partition capability with the split (internal) nodes of the tree implemented with non-linear single perceptron, to route review features to the specific subspace in a soft manner.
Furthermore, we develop a theoretical analysis to demonstrate that pairwise training indeed has lower model generalization than listwise approach.
We proceed to propose a novel listwise training objective for the proposed MRHP architecture. We also equip our architecture with a listwise attention network that models the interaction among the reviews to capture the listwise context for the MRHP
ranking task.
In sum, our contributions are four-fold:
- We propose a novel gradient-boosted decision tree score predictor for multimodal review helpfulness prediction (MRHP) to partition product review features and properly infer helpfulness score distribution.
- We propose a novel listwise attention module for the MRHP architecture that conforms to the listwise context of the MRHP task by relating reviews in the list.
- We perform theoretical study with the motivation of ameliorating the model generalization error, and accordingly propose a novel MRHP training objective which satisfies our aim.
- We conducted comprehensive experiments on two benchmark datasets and found that our approach significantly outperforms both textonly and multimodal baselines, and accomplishes state-of-the-art results for MRHP.
## 2 Background
In this section, we recall the Multimodal Review Helpfulness Prediction (MRHP) problem. Then, we introduce theoretical preliminaries which form the basis of our formal analysis of the ranking losses for the MRHP problem in the next section.
## 2.1 Problem Definition
Following (Liu et al., 2021; Han et al., 2022; Nguyen et al., 2022), we formulate MRHP as a ranking task. In detail, we consider an instance Xito consist of a product item pi, composed of product description T
pi and images I
pi, and its respective review list Ri = {ri,1, ri,2*, . . . , r*i,|Ri|}.
Each review ri,j carries user-generated text T
ri,j, images I
ri,j , and an integer scalar label yi,j ∈
{0, 1*, . . . , S*} denoting the helpfulness score of review ri,j . The ground-truth result associated with Xiis the descending order determined by the helpfulness score list Yi = {yi,1, yi,2*, . . . , y*i,|Ri|}. The MRHP task is to generate helpfulness scores which match the groundtruth ranking order, formulated as follows:
si,j = f(pi, ri,j ), (1)
where f represents the helpfulness prediction model taking ⟨pi, ri,j ⟩ as the input.
## 2.2 Analysis Of Generalization Error
The analysis involves the problem of learning a deep θ-parameterized model f θ: *X → Y* that maps the input space X to output space Y and a stochastic learning algorithm A to solve the optimization problem as follows:
$$f^{\theta^{*}}=\operatorname*{arg\,min}_{f^{\theta}}\mathbb{E}_{(\mathbf{x,y})\sim\mathbb{P}}\left[l(f^{\theta};(\mathbf{x,y}))\right],\quad(2)$$
where P denotes the distribution of (x, y), l the loss function on the basis of the difference between yˆ = f θ(x) and y, and Rtrue(f θ) =
E(x,y)∼P
-l(f θ; (x, y))is dubbed as the true risk.
Since P is unknown, Rtrue is alternatively solved through optimizing a surrogate empirical risk Remp(f θ D) = 1N
PN
i=1 l(f θ; (xi, yi)), where D =
{(xi, yi)}
N
i=1 denotes a training dataset drawn from P that f θ D is trained upon.
Because the aim of deep neural model training is to produce a model f θthat provides a small gap between the performance over D, i.e. Remp(f θ D), and over any unseen test set from P, i.e. Rtrue(f θ D), the analysis defines the main focus to be the generalization error E(f θ D) = Rtrue(f θ D) − Remp(f θ D), the objective to be achieving a tight bound of E(f θ D),
and subsequently the foundation regarding the loss function's Lipschitzness as:
Definition 1. *(Lipschitzness). A loss function* l(yˆ, y) is γ-Lipschitz with respect to yˆ *if for* γ ≥
0, ∀u, v ∈ R
K*, we have:*
$$|l(\mathbf{u},\mathbf{y})-l(\mathbf{v},\mathbf{y})|\leq\gamma|\mathbf{u}-\mathbf{v}|,\qquad(3)$$
where | · | denotes the l1-norm, K *the dimension of* the output yˆ.
Given the foundation, we have the connection between the properties of loss functions and the generalization error:
Theorem 1. *Consider a loss function that* 0 ≤
l(yˆ, y) ≤ L that is convex and γ-Lipschitz with respect to yˆ. Suppose the stochastic learning algorithm A is executed for T iterations, with an annealing rate λtto solve problem (2). Then, the following generalization error bound holds with probability at least 1 − δ (Akbari et al., *2021):*
$$\begin{array}{c}{{E(f_{\cal D}^{\theta})=R_{\mathrm{true}}(f_{\cal D}^{\theta})-R_{\mathrm{emp}}(f_{\cal D}^{\theta})\leq L\sqrt{\frac{\log(2/\delta)}{2N}}+}}\\ {{2\gamma^{2}\sum_{t=1}^{T}\lambda_{t}\left(2\sqrt{\frac{\log(2/\delta)}{T}}+\sqrt{\frac{2\log(2/\delta)}{N}}+\frac{1}{N}\right).}}\end{array}$$
Theorem (1) implies that by establishing a loss function L with smaller values of γ and L, we can burnish the model generalization performance.
## 3 Methodology
In this section, we elaborate on our proposed architecture, listwise attention network, tree-based helpfulness regressor, and listwise ranking loss along with its comparison against the pairwise one from the theoretical perspective. The overall architecture is illustrated in Figure 2.
## 3.1 Multimodal Encoding
Our model receives product description T
pi, product images I
pi, review text T
ri,j , and review images I
ri,j as input. We perform the encoding procedure for those inputs as follows.
Textual Encoding. For both product text T
pi and review text T
ri,j , we index their sequences of words into the word embeddings and forward to the respective LSTM layer to yield token-wise representations:
$$\begin{array}{r l}{\mathbf{H}^{p_{i}}=\mathrm{LSTM}^{p}(\mathbf{W_{e m b}}(T^{p_{i}})),}\\ {\mathbf{H}^{r_{i,j}}=\mathrm{LSTM}^{r}(\mathbf{W_{e m b}}(T^{r_{i,j}})),}\end{array}$$
$$\begin{array}{l}{(5)}\\ {(6)}\end{array}$$
where Hpi ∈ R
l pi×d, Hri,j ∈ R
l ri,j ×d, l pi and l ri,j denote sequence lengths of the product and review text, respectively, d the hidden dimension.
Visual Encoding. We adapt a pre-trained Faster RCNN to extract ROI features of m objects {e pi t}
m t=1 and {e ri,j t}
m t=1 for product and review images, respectively. We then feed those object features into the self-attention module to obtain visual representations as:
$$\begin{array}{c}{{\mathbf{V}^{p_{i}}=\mathrm{SelfAttn}(\{\mathbf{e}_{t}^{p_{i}}\}_{t=1}^{m}),}}\\ {{\mathbf{V}^{r_{i,j}}=\mathrm{SelfAttn}(\{\mathbf{e}_{t}^{r_{i,j}}\}_{t=1}^{m}),}}\end{array}$$
(7) (8) $\frac{1}{2}$
t=1), (7)
t=1), (8)
where Vpi, Vri,j ∈ R
m×d, and d denotes the hidden size.
## 3.2 Coherence Reasoning
We then learn intra-modal, inter-modal, and intraentity coherence among product-review elements.
Intra-modal Coherence. There are two types of intra-modal coherence relations: (1) product text -
review text and (2) product image - review image. Initially, we designate self-attention modules to capture the intra-modal interaction as:
$$\mathbf{H}_{i,j}^{\mathrm{intraM}}=\mathrm{SelfAttn}([\mathbf{H}^{p_{i}},\mathbf{H}^{r_{i,j}}]),\tag{9}$$ $$\mathbf{V}_{i,j}^{\mathrm{intraM}}=\mathrm{SelfAttn}([\mathbf{V}^{p_{i}},\mathbf{V}^{r_{i,j}}]).\tag{10}$$
Then, intra-modal interaction features are passed to a CNN, then condensed into hidden vectors via pooling layer:
$$\mathbf{z}_{i,j}^{\mathrm{intraM}}=\mathrm{Pool}(\mathbf{CNN}([\mathbf{H}_{i,j}^{\mathrm{intraM}},\mathbf{V}_{i,j}^{\mathrm{intraM}}])),\tag{11}$$
where [·] denotes the concatenation operator.
Inter-modal Coherence. The inter-modal coherence comprises two relation types: (1) product text
(pt) - review image (ri) and (2) product image (pi) -
![3_image_1.png](3_image_1.png)
![3_image_0.png](3_image_0.png)
$$(23)$$
review text (rt). Similar to the intra-modal coherence, we first perform cross-modal correlation by leveraging the self-attention mechanism:
$$\mathbf{H}_{i,j}^{\mathrm{pt-ri}}=\mathrm{SelfAttn}([\mathbf{H}^{p_{i}},\mathbf{V}^{r_{i,j}}]),$$
Thereafter, we pool the above features and concatenate the pooled vectors to attain the inter-modal
vector:
$$\begin{array}{r}{{\mathbf{z}_{i,j}^{\mathrm{pt-ri}}=\mathrm{Pool}(\mathbf{H}_{i,j}^{\mathrm{pt-ri}}),}}\\ {{\mathbf{z}_{i,j}^{\mathrm{pi-rt}}=\mathrm{Pool}(\mathbf{H}_{i,j}^{\mathrm{pi-rt}}),}}\\ {{\mathbf{z}_{i,j}^{\mathrm{interM}}=\left[\mathbf{z}_{i,j}^{\mathrm{pt-ri}},\mathbf{z}_{i,j}^{\mathrm{pi-rt}}\right].}}\end{array}$$
i,j ), (14)
i,j ), (15)
i,j i. (16)
Intra-entity Coherence. Analogous to the intermodal coherence, we also conduct self-attention and pooling computation, but on the (1) product text (pt) - product image (pi) and (2) review text
(rt) - review image (ri) as follows:
$$\begin{array}{r}{\mathbf{H}_{i}^{\mathrm{pt-pi}}=\mathrm{SelfAttn}([\mathbf{H}^{p_{i}},\mathbf{V}^{p_{i}}]),}\\ {\mathbf{H}_{i,j}^{\mathrm{nt-ri}}=\mathrm{SelfAttn}([\mathbf{H}^{r_{i,j}},\mathbf{V}^{r_{i,j}}]),}\\ {\mathbf{z}_{i}^{\mathrm{pt-pi}}=\mathrm{Pool}(\mathbf{H}_{i}^{\mathrm{pt-pi}}),}\\ {\mathbf{z}_{i,j}^{\mathrm{nt-ri}}=\mathrm{Pool}(\mathbf{H}_{i,j}^{\mathrm{nt-ri}}),}\\ {\mathrm{intra}}\\ {\mathbf{z}_{i,j}^{\mathrm{intraR}}=\left[\mathbf{z}_{i}^{\mathrm{pt-pi}},\mathbf{z}_{i,j}^{\mathrm{nt-ri}}\right].}\end{array}$$
i), (19)
i,j ), (20)
i,j i. (21)
Eventually, the concatenation of the intra-modal, inter-modal, and intra-entity vectors becomes the result of the coherence reasoning phase:
$$\mathbf{z}_{i,j}=\left[\mathbf{z}_{i,j}^{\mathrm{intra}\mathrm{M}},\mathbf{z}_{i,j}^{\mathrm{interM}},\mathbf{z}_{i,j}^{\mathrm{intra}\mathrm{R}}\right]\,.$$
i,j . (22)
## 3.3 Listwise Attention Network
In our proposed listwise attention network, we encode list-contextualized representations to consider relative relationship among reviews. We achieve this by utilizing self-attention mechanism to relate list-independent product reviews' features
{zi,1, zi,2*, . . . ,* zi,|Ri|} as follows:
$$\{\mathbf{z}_{i,j}^{\mathrm{list}}\}_{j=1}^{|R_{i}|}=\mathrm{SelfAttn}(\{\mathbf{z}_{i,j}\}_{j=1}^{|R_{i}|}),$$
j=1), (23)
where Ri denotes the review list associated with product pi.
## 3.4 Gradient-Boosted Decision Tree For Helpfulness Estimation
In this section, we delineate our gradient-boosted decision tree to predict helpfulness scores that efficaciously partition review features.
Tree Structure. We construct a dtree-depth binary decision tree composed of internal nodes N
(*|N |* = 2dtree−1 − 1) and leaf nodes L (|L| =
2 dtree−1). Our overall tree structure is depicted in Figure 2.
Score Prediction. Receiving the list-attended vectors {z list i}
N
i=1, our decision tree performs soft partitioning through probabilistic routing for those vectors to their target leaf nodes. In such manner, each internal node n calculates the routing decision probability as:
$$\begin{array}{c}{{p_{n}^{\mathrm{left}}=\sigma(\mathrm{Linear}(\mathbf{z}^{\mathrm{list}})),}}\\ {{p_{n}^{\mathrm{right}}=1-p_{n}^{\mathrm{left}},}}\end{array}$$
$$\begin{array}{l}{(24)}\\ {(25)}\end{array}$$
list)), (24)
n, (25)
$$(22)$$
where p left nand p right n denote the likelihood of directing the vector to the left sub-tree and right sub-tree, respectively. Thereupon, the probability of reaching leaf node l is formulated as follows:
$$\mu_{l}=\prod_{n\in{\mathcal{P}}(l)}(p_{n}^{\mathrm{left}})^{1^{l_{n}}}\cdot(p_{n}^{\mathrm{right}})^{1^{r_{n}}},\qquad(26)$$
where 1 ln denotes the indicator function of whether leaf node l belongs to the left sub-tree of the internal node n, equivalently for 1 rn , and P(l)
the node sequence path to leaf l. For example, in Figure 2, the routing probability to leaf 6 is µ6 = p right 1p left 3p right 6.
For the score inference at leaf node l, we employ a linear layer for calculation as follows:
$$s_{l,i,j}=\operatorname{Linear}_{l}(\mathbf{z}_{i,j}^{\mathrm{list}}).$$
i,j ). (27)
where s*l,i,j* denotes the helpfulness score generated at leaf node l. Lastly, due to the probabilistic routing approach, the final helpfulness score fi,j is the average of the leaf node scores weighted by the probabilities of reaching the leaves:
$$f_{i,j}=f(p_{i},r_{i,j})=\sum_{l\in{\cal L}}s_{l,i,j}\cdot\mu_{l}\,.\tag{28}$$
Since MRHP task aims to produce helpfulness order for a list of reviews, we propose to follow a listwise approach to compare the predicted helpfulness scores with the groundtruth.
Initially, we convert two lists of prediction scores
{fi,j}
|Ri| j=1 and groundtruth labels {yi,j}
|Ri| j=1 into two probability distributions.
$$f_{i,j}^{\prime}=\frac{\exp(f_{i,j})}{|R_{i}|},\;\;y_{i,j}^{\prime}=\frac{\exp(y_{i,j})}{|R_{i}|}.\tag{29}$$
Subsequently, we conduct theoretical derivation and arrive in interesting properties of the listwise computation.
Theoretical Derivation. Our derivation demonstrates that discrimination computation of both listwise and pairwise functions (Liu et al., 2021; Han et al., 2022; Nguyen et al., 2022) satisfy the preconditions in Theorem (1).
**Lemma 1**.: _Let $\mathcal{L}$ be a finite set of $\mathcal{L}$. Then $\mathcal{L}$ is a finite set of $\mathcal{L}$._ Proof.: Let $\mathcal{L}$ be a finite set of $\mathcal{L}$. Let $\mathcal{L}$ be a finite set of $\mathcal{L}$. Let $\mathcal{L}$ be a finite set of $\mathcal{L}$.
uct set, then L
list *is convex and* γ list*-Lipschitz with* respect to f′
i,j .
Lemma 2. *Given pairwise discrimination function on the total training set as* L
pair =
P
|P| i=1
-−fi,r+ + fi,r− + α
+*, where* r
+, r− denote two random indices in Ri and yi,r+ > yi,r− ,
and α = max 1≤j≤|Ri|
(yi,j ) − min 1≤j≤|Ri|
(yi,j )*, then* L
pair *is convex and* γ pair*-Lipschitz with respect to* fi,r+ , fi,r− .
Based upon the above theoretical basis, we investigate the connection between L
list and L
pair.
Theorem 2. Let L
list and L
pair are γ list-Lipschitz and γ pair*-Lipschitz, respectively. Then, the following inequality holds:*
$$\gamma^{\rm list}\leq\gamma^{\rm pair}.\tag{30}$$ **Theorem 3**.: _Let $0\leq{\cal L}^{\rm list}\leq L^{\rm list}$ and $0\leq{\cal L}^{\rm pair}\leq L^{\rm pair}$. Then, the following inequality holds:_
(30) air $\leq$ ?
$$(27)$$
$$L^{\mathrm{list}}\leq L^{\mathrm{pair}}.$$
$$(31)$$
We combine Theorem (1), (2), and (3), to achieve the following result.
Theorem 4. *Consider two models* f list Dand f pair D*under common settings trained to minimize* L
list and L
pair*, respectively, on dataset* D =
{pi, {ri,j}
|Ri| j=1}
|P| i=1. Then, we have the following inequality:
$$E(f_{\cal D}^{\rm list})\leq E(f_{\cal D}^{\rm pair}),\tag{32}$$ _where $E(f_{\cal D})=R_{\rm true}(f_{\cal D})-R_{\rm emp}(f_{\cal D})$._
As in Theorem (4), models optimized by listwise function achieve a tighter bound on the generalization error than the ones with the pairwise function, thus upholding better generalization performance.
We provide proofs of all the lemmas and theorems in Appendix A. Indeed, empirical results in Section 4.6 also verify our theorems.
With such foundation, we propose to utilize listwise discrimination as the objective loss function to train our MRHP model:
$${\mathcal{L}}^{\mathrm{list}}=-\sum_{i=1}^{|P|}\sum_{j=1}^{|R_{i}|}y_{i,j}^{\prime}\log(f_{i,j}^{\prime}).\qquad(33)$$
## 4 Experiments 4.1 Datasets
For evaluation, we conduct experiments on two large-scale MRHP benchmark datasets: LazadaMRHP and Amazon-MRHP. We present the dataset statistics in Appendix B.
Amazon-MRHP (Liu et al., 2021) includes crawled product and review content from Amazon.com, the international e-commerce brand, between 2016 and 2018. All of the product and review texts are expressed in English.
Lazada-MRHP (Liu et al., 2021) comprises product information and user-generated reviews from Lazada.com, a popular e-commerce platform in Southeast Asia. Both product and review texts are written in Indonesian.
Both datasets are composed of 3 categories: (1)
Clothing, Shoes & Jewelry (Clothing), (2) *Electronics* (Electronics), and (3) *Home & Kitchen* (Home).
We divide the helpfulness votes of the reviews into 5 partitions, i.e. [1, 2), [2, 4), [4, 8), [8, 16), and
[16, ∞), corresponding to 5 helpfulness scores, i.e. yi,j ∈ {0, 1, 2, 3, 4}.
## 4.2 Implementation Details
For input texts, we leverage pretrained word embeddings with fastText embedding (Bojanowski et al., 2017) and 300-dimensional GloVe word vectors (Pennington et al., 2014) for Lazada-MRHP
and Amazon-MRHP datasets, respectively. Each embedded word sequence is passed into an 1-layer LSTM whose hidden dimension is 128. For input images, we extract their ROI features of 2048 dimensions and encode them into 128-dimensional vectors. Our gradient-boosted decision tree score predictor respectively exhibits a depth of 3 and 5 in Lazada-MRHP and Amazon-MRHP datasets, which are determined on the validation performance. We adopt Adam optimizer, whose batch size is 32 and learning rate 1e−3, to train our entire architecture in the end-to-end fashion.
## 4.3 Baselines
We compare our approach with an encyclopedic list of baselines:
- **BiMPM** (Wang et al., 2017): a ranking model that uses 2 BiLSTM layers to encode input sentences.
- **EG-CNN** (Chen et al., 2018): a RHP baseline which leverages character-level representations and domain discriminator to improve cross-domain RHP performance.
- **Conv-KNRM** (Dai et al., 2018): a CNNbased system which uses kernel pooling on multi-level n-gram encodings to produce ranking scores.
- **PRH-Net** (Fan et al., 2019): a RHP baseline that receives product metadata and raw review text as input.
- **SSE-Cross** (Abavisani et al., 2020): a crossmodal attention-based approach to filter nonsalient elements in both visual and textual input components.
- **DR-Net** (Xu et al., 2020): a combined model of decomposition and relation networks to learn cross-modal association.
- MCR (Liu et al., 2021): an MRHP model that infers helpfulness scores based on crossmodal attention-based encodings.
- **SANCL** (Han et al., 2022): a baseline which extracts salient multimodal entries via probebased attention and applies contrastive learning to refine cross-modal representations.
- **Contrastive-MCR** (Nguyen et al., 2022): an MRHP approach utilizing adaptive contrastive strategy to enhance cross-modal representations and performance optimization.
## 4.4 Main Results
Inspired by previous works (Liu et al., 2021; Han et al., 2022; Nguyen et al., 2022), we report Mean Average Precision (MAP) and Normalized Discounted Cumulative Gain (NDCG@N), where N = 3 and N = 5. We include the performance of baseline models and our approach in Table 1 and 2.
On Amazon dataset, we consistently outperform prior methods of both textual and multimodal settings. Particularly, our architecture improves over Contrastive-MCR on MAP of 15.2 points in Clothing, NDCG@3 of 20.4 points in Electronics, and NDCG@5 of 21.0 points in Home subset. Furthermore, we accomplish a gain in MAP of 2.2 points in Clothing over PRH-Net, NDCG@3 of 16.4 points in Electronics and NDCG@5 of 11.8 points in Home category over Conv-KNRM baseline, where PRH-Net and Conv-KNRM are the best prior text-only baselines.
For Lazada dataset, which is in Indonesian, we outperform Contrastive-MCR with a significant margin of MAP of 10.4 points in Home, NDCG@5 of 11.6 points in Electronics, and NDCG@3 of 12.4 points in Clothing domain. The text-only variant of our model also gains a considerable improvement of 4.7 points of NDCG@5 in Clothing, 5.0 points of MAP in Electronics over PRH-Net, and 1.4 points of NDCG@3 in Home over ConvKNRM model.
These outcomes demonstrate that our method is able to produce more sensible helpfulness scores to polish the review ranking process, not only being efficacious in English but also generalizing to other language as well. Over and above, it is worth pointing out in Lazada-Electronics, the textual setting of our approach even achieves higher helpfulness
| Text-only Multimodal |
|------------------------|
Setting Method Clothing Electronics **Home**
MAP N@3 N@5 MAP N@3 N@5 **MAP N@3 N@5**
BiMPM 57.7 41.8 46.0 52.3 40.5 44.1 56.6 43.6 47.6
EG-CNN 56.4 40.6 44.7 51.5 39.4 42.1 55.3 42.4 46.7
Conv-KNRM 57.2 41.2 45.6 52.6 40.5 44.2 57.4 44.5 48.4
PRH-Net 58.3 42.2 46.5 52.4 40.1 43.9 57.1 44.3 48.1 Our Model 60.5 51.7 52.8 59.8 56.9 57.9 **63.4 59.4 60.2** SSE-Cross 65.0 56.0 59.1 53.7 43.8 47.2 60.8 51.0 54.0 DR-Net 65.2 56.1 59.2 53.9 44.2 47.5 61.2 51.8 54.6
MCR 66.4 57.3 60.2 54.4 45.0 48.1 62.6 53.5 56.6
SANCL 67.3 58.6 61.5 56.2 47.0 49.9 63.4 54.3 57.4 Contrastive-MCR 67.4 58.6 61.6 56.5 47.6 50.8 63.5 54.6 57.8
Our Model 82.6 80.3 79.3 74.2 68.0 69.8 **81.7 76.5 78.8**
Table 1: Helpfulness review prediction results on the Amazon-MRHP dataset.
Setting Method Clothing Electronics **Home**
MAP N@3 N@5 MAP N@3 N@5 **MAP N@3 N@5**
BiMPM 60.0 52.4 57.7 74.4 67.3 72.2 70.6 64.7 69.1 EG-CNN 60.4 51.7 57.5 73.5 66.3 70.8 70.7 63.4 68.5 Conv-KNRM 62.1 54.3 59.9 74.1 67.1 71.9 71.4 65.7 70.5
PRH-Net 62.1 54.9 59.9 74.3 67.0 72.2 71.6 65.2 70.0
Our Model 66.4 59.6 64.6 79.3 63.8 78.0 **72.9 67.1 71.5** SSE-Cross 66.1 59.7 64.8 76.0 68.9 73.8 72.2 66.0 71.0 DR-Net 66.5 60.7 65.3 76.1 69.2 74.0 72.4 66.3 71.4
MCR 68.8 62.3 67.0 76.8 70.7 75.0 73.8 67.0 72.2 SANCL 70.2 64.6 68.8 77.8 71.5 76.1 75.1 68.4 73.3
Contrastive-MCR 70.3 64.7 69.0 78.2 72.4 76.5 75.2 68.8 73.7 Our Model 78.5 77.1 79.0 87.9 86.7 88.1 **85.6 78.8 83.1**
| Text-only Multimodal |
|------------------------|
## 4.5 Ablation Study
To verify the impact of our proposed (1) Gradientboosted decision tree regressor, (2) Listwise ranking loss, and (3) Listwise attention network, we conduct ablation experiments on the Home category of the Amazon and Lazada datasets.
GBDT Regressor. In this ablation, we substitute our tree-based score predictor with various FCNNs score regressor. Specifically, we describe each substitution with a sequence of dimensions in its fully-connected layers, and each hidden layer is furnished with a Tanh activation function.
As shown in Table 3, FCNN-based score regressors considerably hurt the MRHP performance, with a decline of NDCG@3 of 16.7 points, and MAP of 6.9 points in the Amazon and Lazada datasets, respectively. One potential explanation is that without the decision tree predictor, the model lacks the partitioning ability to segregate the features of helpful and non-helpful reviews.
Listwise Ranking Loss. As can be observed in Table 3, replacing listwise objective with the pairwise one degrades the MRHP performance substantially, with a drop of NDCG@3 of 11.8 scores in Amazon, and NDCG@5 of 7.3 scores in Lazada dataset.
Based on Theorem 4 and Table 4, we postulate that removing listwise training objective impairs model generalization, revealed in the degraded MRHP
testing performance.
Listwise Attention Network (LAN). We proceed to ablate our proposed listwise attention module and re-execute the model training. Results in Table 3 betray that inserting listwise attention brings about performance upgrade with 16.9 and 9.1 points of MAP in Amazon-MRHP and Lazada-
| Dataset | Model | MAP | N@3 | N@5 |
|------------------------------|------------------------------|-------|-------|-------|
| Our Model | 81.7 | 76.5 | 78.8 | |
| - w/ dzi,j -8-4-2-1 NN | 64.6 | 55.2 | 58.6 | |
| - w/ dzi,j -32-16-8-4-2-1 NN | 70.6 | 59.8 | 63.8 | |
| Amazon | - w/ dzi,j -32-32-32-32-1 NN | 64.9 | 57.1 | 59.9 |
| - w/o L list | 72.4 | 64.7 | 67.1 | |
| - w/o LAN | 64.8 | 55.8 | 59.3 | |
| Our Model | 85.6 | 78.8 | 83.1 | |
| - w/ dzi,j -8-4-2-1 NN | 76.2 | 69.3 | 74.3 | |
| - w/ dzi,j -32-16-8-4-2-1 NN | 78.7 | 71.9 | 77.6 | |
| - w/ dzi,j -32-32-32-32-1 NN | 77.6 | 70.9 | 75.2 | |
| - w/o L list | 78.0 | 71.3 | 75.8 | |
| - w/o LAN | 76.5 | 69.9 | 74.4 | |
| Lazada | | | | |
![7_image_0.png](7_image_0.png)
![7_image_2.png](7_image_2.png)
MRHP, respectively. We can attribute the improvement to the advantage of listwise attention, i.e. supplying the MRHP model with the context among product reviews to assist the model into inferring the reviews' ranking positions more precisely.
## 4.6 Analysis Of Generalization Error
Figures 3 and 4 illustrate the approximation of the generalization error Eˆ(f θ D) = Rval(f θ D) −
Rtrain(f θ D) of the model after every epoch, where Rval and Rtrain indicate the average loss values of the trained model f θ D on the validation and training sets, respectively. Procedurally, due to different scale of the loss values, we normalize them to the range [0, 1]. The plots demonstrate that generalization errors of our MRHP model trained with the listwise ranking loss are constantly lower than those obtained by pairwise loss training, thus exhibiting better generalization performance. Additionally, as further shown in Table 4, f θ, list Dincurs a smaller training-testing performance discrepancy
△MAP = |MAPtraining − MAPtesting| than f θ, pair D,
along with Figures 3 and 4 empirically substantiating our Theorem (4).
## 4.7 Case Study
In Figure 1, we present helpfulness prediction results predicted by our proposed MRHP model and Contrastive-MCR (Nguyen et al., 2022), the previous best baseline. While our model is capable of producing helpfulness scores that evidently sepa-
![7_image_1.png](7_image_1.png)
rate helpful with unhelpful product reviews, scores generated by Contrastive-MCR do mingle them.
Hypothetically, our method could partition product reviews according to their encoded helpfulness features to obtain inherent separation. We provide more detailed analysis of the partitioning capability of our model and individual produced helpfulness scores in Appendix D and E.
## 5 Related Work
For real-world applications, existing methods are oriented towards extracting hidden features from input samples (Kim et al., 2006; Krishnamoorthy, 2015; Liu et al., 2017; Chen et al., 2018; Nguyen et al., 2021). Modern approaches have gradually taken into account additional and useful modalities, for instance meta-data (Tuan et al., 2016; Fan et al., 2019; Qu et al., 2020), images (Liu et al.,
2021; Han et al., 2022), etc. They also depend on hand-crafted features, such as argument-based (Liu et al., 2017), lexical (Krishnamoorthy, 2015; Luu et al., 2015), and semantic features (Yang et al., 2015; Luu et al., 2016; Nguyen and Luu, 2022) to utilize automatic deep representation learning to train the helpfulness predictor. Some also utilize unsupervised learning techniques to polish the learned representations of input samples (Wu et al.,
2020, 2023a; Nguyen and Luu, 2021; Wu et al.,
2022, 2023b).
Despite performance upgrade, deep neural approaches for multimodal RHP (MRHP) problem, have been shown to still be inadept at modeling partitioned and ranking data (Qin et al., 2021), which is the crucial characteristic of MRHP reviews (Ma et al., 2021). In this work, we seek to address those issues for the MRHP system with our proposed tree-based helpfulness predictor and listwise architectural framework.
## 6 Conclusion
In this paper, for the MRHP task, we introduce a novel framework to take advantage of the partitioned structure of product review inputs and the ranking nature of the problem. Regarding the partitioned preference, we propose a gradientboosted decision tree to route review features towards proper helpfulness subtrees managed by decision nodes. For the ranking nature, we propose listwise attention network and listwise training objective to capture review list-contextualized context.
Comprehensive analysis provides both theoretical and empirical grounding of our approach in terms of model generalization. Experiments on two largescale MRHP datasets showcase the state-of-the-art performance of our proposed framework.
## Limitations
Firstly, from the technical perspective, we have advocated the advantages of our proposed listwise loss for the MRHP task in terms of generalization capacity. Nevertheless, there are other various listwise discrimination functions that may prove beneficial for the MRHP model training, for example NeuralNDCG (Pobrotyn and Białobrzeski, 2021),
ListMLE (Xia et al., 2008), etc. Moreover, despite the novelty of our proposed gradient-boosted tree in partitioning product reviews into helpful and unhelpful groups, our method does not employ prior contrastive representation learning, whose objective is also to segregate helpful and unhelpful input reviews. The contrastive technique might discriminate reviews of distinctive helpfulness features to bring further performance gain to multimodal review helpfulness prediction. At the moment, we leave the exploration of different listwise discrimination functions and contrastive learning as our prospective future research direction.
Secondly, our study can be extended to other problems which involve ranking operations. For instance, in recommendation, there is a need to rank the items according to their appropriateness to present to the customers in a rational order. Our gradient-boosted decision tree could divide items into corresponding partitions in order for us to recommend products to the customer from the highly appropriate partition to the less appropriate one.
Therefore, we will discover the applicability of our proposed architecture in such promising problem domain in our future work.
## Acknowledgements
This work was supported by Alibaba Innovative Research (AIR) programme with research grant AN-GC-2021-005.
## References
Mahdi Abavisani, Liwei Wu, Shengli Hu, Joel Tetreault, and Alejandro Jaimes. 2020. Multimodal categorization of crisis events in social media. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14679–14689.
Ali Akbari, Muhammad Awais, Manijeh Bashar, and Josef Kittler. 2021. How does loss function affect generalization performance of deep learning? application to human age estimation. In *International* Conference on Machine Learning, pages 141–151.
PMLR.
Alex Beutel, Paul Covington, Sagar Jain, Can Xu, Jia Li, Vince Gatto, and Ed H Chi. 2018. Latent cross: Making use of context in recurrent recommender systems.
In *Proceedings of the Eleventh ACM International* Conference on Web Search and Data Mining, pages 46–54.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. *Transactions of the association for computational linguistics*, 5:135–146.
Cen Chen, Yinfei Yang, Jun Zhou, Xiaolong Li, and Forrest Bao. 2018. Cross-domain review helpfulness prediction based on convolutional neural networks with auxiliary domain discriminators. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2
(Short Papers), pages 602–607.
Zhuyun Dai, Chenyan Xiong, Jamie Callan, and Zhiyuan Liu. 2018. Convolutional neural networks for soft-matching n-grams in ad-hoc search. In *Proceedings of the eleventh ACM international conference on web search and data mining*, pages 126–134.
Miao Fan, Chao Feng, Lin Guo, Mingming Sun, and Ping Li. 2019. Product-aware helpfulness prediction of online reviews. In *The World Wide Web Conference*, pages 2715–2721.
Wei Han, Hui Chen, Zhen Hai, Soujanya Poria, and Lidong Bing. 2022. Sancl: Multimodal review helpfulness prediction with selective attention and natural contrastive learning. In *Proceedings of the 29th International Conference on Computational Linguistics*,
pages 5666–5677.
Soo-Min Kim, Patrick Pantel, Timothy Chklovski, and Marco Pennacchiotti. 2006. Automatically assessing review helpfulness. In Proceedings of the 2006 Conference on empirical methods in natural language processing, pages 423–430.
Srikumar Krishnamoorthy. 2015. Linguistic features for review helpfulness prediction. Expert Systems with Applications, 42(7):3751–3759.
Jean-Samuel Leboeuf, Frédéric LeBlanc, and Mario Marchand. 2020. Decision trees as partitioning machines to characterize their generalization properties.
Advances in Neural Information Processing Systems, 33:18135–18145.
Haijing Liu, Yang Gao, Pin Lv, Mengxue Li, Shiqiang Geng, Minglan Li, and Hao Wang. 2017. Using argument-based features to predict and analyse review helpfulness. *arXiv preprint arXiv:1707.07279*.
Junhao Liu, Zhen Hai, Min Yang, and Lidong Bing.
2021. Multi-perspective coherent reasoning for helpfulness prediction of multimodal reviews. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th* International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5927–
5936.
Anh Tuan Luu, Jung-jae Kim, and See Kiong Ng.
2015. Incorporating trustiness and collective synonym/contrastive evidence into taxonomy construction. In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*,
pages 1013–1022.
Anh Tuan Luu, Yi Tay, Siu Cheung Hui, and See Kiong Ng. 2016. Learning term embeddings for taxonomic relation identification using dynamic weighting neural network. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 403–413.
Jiaqi Ma, Xinyang Yi, Weijing Tang, Zhe Zhao, Lichan Hong, Ed Chi, and Qiaozhu Mei. 2021. Learning-torank with partitioned preference: fast estimation for the plackett-luce model. In *International Conference* on Artificial Intelligence and Statistics, pages 928–
936. PMLR.
Thong Nguyen and Anh Tuan Luu. 2021. Contrastive learning for neural topic model. Advances in Neural Information Processing Systems, 34:11974–11986.
Thong Nguyen, Anh Tuan Luu, Truc Lu, and Tho Quan. 2021. Enriching and controlling global semantics for text summarization. *arXiv preprint* arXiv:2109.10616.
Thong Nguyen, Xiaobao Wu, Anh-Tuan Luu, CongDuy Nguyen, Zhen Hai, and Lidong Bing. 2022.
Adaptive contrastive learning on multimodal transformer for review helpfulness predictions. arXiv preprint arXiv:2211.03524.
Thong Thanh Nguyen and Anh Tuan Luu. 2022. Improving neural cross-lingual abstractive summarization via employing optimal transport distance for knowledge distillation. In Proceedings of the AAAI
Conference on Artificial Intelligence, volume 36, pages 11103–11111.
Rama Kumar Pasumarthi, Xuanhui Wang, Michael Bendersky, and Marc Najork. 2019. Self-attentive document interaction networks for permutation equivariant ranking. *arXiv preprint arXiv:1910.09676*.
Jeffrey Pennington, Richard Socher, and Christopher D
Manning. 2014. Glove: Global vectors for word representation. In *Proceedings of the 2014 conference* on empirical methods in natural language processing
(EMNLP), pages 1532–1543.
Przemysław Pobrotyn and Radosław Białobrzeski. 2021.
Neuralndcg: Direct optimisation of a ranking metric via differentiable relaxation of sorting. arXiv preprint arXiv:2102.07831.
Zhen Qin, Le Yan, Honglei Zhuang, Yi Tay, Rama Kumar Pasumarthi, Xuanhui Wang, Mike Bendersky, and Marc Najork. 2021. Are neural rankers still outperformed by gradient boosted decision trees?
Xiaoru Qu, Zhao Li, Jialin Wang, Zhipeng Zhang, Pengcheng Zou, Junxiao Jiang, Jiaming Huang, Rong Xiao, Ji Zhang, and Jun Gao. 2020. Category-aware graph neural networks for improving e-commerce review helpfulness prediction. In Proceedings of the 29th ACM International Conference on Information
& Knowledge Management, pages 2693–2700.
Luu Anh Tuan, Siu Cheung Hui, and See Kiong Ng.
2016. Utilizing temporal information for taxonomy construction. *Transactions of the Association for* Computational Linguistics, 4:551–564.
Zhiguo Wang, Wael Hamza, and Radu Florian. 2017.
Bilateral multi-perspective matching for natural language sentences. In *Proceedings of the 26th International Joint Conference on Artificial Intelligence*,
pages 4144–4150.
Xiaobao Wu, Xinshuai Dong, Thong Nguyen, Chaoqun Liu, Liangming Pan, and Anh Tuan Luu. 2023a. Infoctm: A mutual information maximization perspective of cross-lingual topic modeling. arXiv preprint arXiv:2304.03544.
Xiaobao Wu, Xinshuai Dong, Thong Thanh Nguyen, and Anh Tuan Luu. 2023b. Effective neural topic modeling with embedding clustering regularization.
In *International Conference on Machine Learning*.
PMLR.
Xiaobao Wu, Chunping Li, Yan Zhu, and Yishu Miao.
2020. Short text topic modeling with topic distribution quantization and negative sampling decoder. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 1772–1782, Online. Association for Computational Linguistics.
Xiaobao Wu, Anh Tuan Luu, and Xinshuai Dong. 2022.
Mitigating data sparsity for short text topic modeling by topic-semantic contrastive learning. In *Proceedings of the 2022 Conference on Empirical Methods* in Natural Language Processing, pages 2748–2760, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Fen Xia, Tie-Yan Liu, Jue Wang, Wensheng Zhang, and Hang Li. 2008. Listwise approach to learning to rank: theory and algorithm. In *Proceedings of the* 25th international conference on Machine learning, pages 1192–1199.
Nan Xu, Zhixiong Zeng, and Wenji Mao. 2020. Reasoning with multimodal sarcastic tweets via modeling cross-modality contrast and semantic association. In Proceedings of the 58th annual meeting of the association for computational linguistics, pages 3777–
3786.
Yinfei Yang, Yaowei Yan, Minghui Qiu, and Forrest Bao.
2015. Semantic analysis and helpfulness prediction of text for online product reviews. In *Proceedings* of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing
(Volume 2: Short Papers), pages 38–44.
## A Proofs
Lemma 1. *Given listwise loss on the total training set as* L
list = −P
$\sum\limits_{i=1}^{|P|}\sum\limits_{j=1}^{|R_i|}y_{i,j}^\prime\log(f_{i,j}^\prime)$, where $P$ denotes to $f_{i,j}^\prime$.
the product set, then L
list *is convex and* γ list*-Lipschitz with respect to* f′
i,j .
Proof. Taking the second derivative of Equation (33), we have
$$(34)$$
$$(35)$$
$$\nabla_{f_{i,j}^{\prime}}^{2}{\mathcal{L}}^{\mathrm{list}}=\sum_{i=1}^{|P|}\sum_{j=1}^{|R_{i}|}{\frac{y_{i,j}^{\prime}}{(f_{i,j}^{\prime})^{2}}}>0,$$
> 0, (34)
proving the convexity of L
list.
The Lipschitz property of L
list can be derived from such property of the logarithm function, which states that
$$|\log(u)-\log(v)|=\left|\log(1+{\frac{u}{v}}-1)\right|\leq\left|{\frac{u}{v}}-1\right|=\left|{\frac{1}{v}}(u-v)\right|\leq\gamma|u-v|,$$
where the first inequality stems from log(1 + x) ≤ x ∀x > −1 and γ is chosen s.t. |v| ≥ 1 γ
.
Let $x=\frac{u_{i,j}}{y_{i,j}}$, $z=\frac{u_{i,j}}{y_{i,j}}$. Applying the above result for $\mathcal{L}^{m}$, we obtain $$|\log(u_{i,j})-\log(v_{i,j})|=\left|\log\left(\frac{u_{i,j}}{y_{i,j}}\right)-\log\left(\frac{v_{i,j}}{y_{i,j}}\right)\right|\leq\gamma\left|\frac{u_{i,j}}{y_{i,j}}-\frac{v_{i,j}}{y_{i,j}}\right|,\tag{36}$$ Multiplying both sides by $y_{i,j}$, and integrating the summation on all inequalities for $i\in\{1,2,\ldots,|P|\}$
ui,j vi,j
and j ∈ {1, 2*, . . . ,* |Ri|}, we achieve
$$\sum_{i=1}^{|P|}\sum_{j=1}^{|R_{i}|}|y_{i,j}\log(u_{i,j})-y_{i,j}\log(v_{i,j})|\leq\gamma\sum_{i=1}^{|P|}\sum_{j=1}^{|R_{i}|}|u_{i,j}-v_{i,j}|\,.$$
$$(36)$$
$$(37)$$
Utimately, we obtain:
$$|{\mathcal{L}}^{\mathrm{list}}(\mathbf{u},\mathbf{y})-{\mathcal{L}}^{\mathrm{list}}(\mathbf{v},\mathbf{y})|\leq\gamma^{\mathrm{list}}|\mathbf{u}-\mathbf{v}|,$$ the $\gamma^{\mathrm{list}}$-Lipschitz property of ${\mathcal{L}}^{\mathrm{list}}$.
list|u − v|, (38)
Where γ
list = γ. This proves the γ
Lemma 2. *Given pairwise loss on the total training set as* L
**of $\mathcal{L}$**. _Let as $\mathcal{L}^{\text{pair}}=\sum\limits_{i=1}^{|P|}\left[-f_{i,r^{+}}+f_{i,r^{-}}+\alpha\right]^{+}$, where $\alpha=\max\limits_{1\leq j\leq|R_{i}|}(y_{i,j})-\min\limits_{1\leq j\leq|R_{i}|}(y_{i,j})$, then $f_{i,r^{-}}$._
r
+, r− denote two random indices in Ri and yi,r+ > yi,r− *, and* α = max L
pair *is convex and* γ pair-Lipschitz with respect to fi,r+ , fi,r− .
Proof. Let h pair i(⟨fi,r+ , fi,r− ⟩), yi) = [−fi,r+ + fi,r− + α]
+, ui = ⟨fi,u+ , fi,u− ⟩, vi = ⟨fi,rv+ , fi,rv− ⟩
be two inputs of h pair i. For θ ∈ [0, 1], we have
h pair i(θui + (1 − θ)vi, yi) = h pair i(θ⟨fi,u+ , fi,u− ⟩ + (1 − θ)⟨fi,v+ , fi,v− ⟩, yi) = h pair i(⟨θfi,u+ + (1 − θ)fi,v+ , θfi,u− + (1 − θ)fi,v− ⟩, yi) = -−(θfi,u+ + (1 − θ)fi,v+ ) + (θfi,u− + (1 − θ)fi,v− ) + α + (39) = -θ(−fi,u+ + fi,u− + α) + (1 − θ)(−fi,v+ + fi,v− + α) + ≤ θ[−fi,u+ + fi,u− + α] + + (1 − θ)[−fi,v+ + fi,v− + α] + = θhpair i(ui, yi) + (1 − θ)h pair i(v, yi).
Employing summation of the inequality on all i ∈ {1, 2*, . . . ,* |P|}, we have
$$\mathcal{L}^{\text{pair}}(\theta\mathbf{u}+(1-\theta)\mathbf{v},\mathbf{y})\leq\theta\sum_{i=1}^{|P|}h_{i}^{\text{pair}}(\mathbf{u}_{i},\mathbf{y}_{i})+(1-\theta)\sum_{i=1}^{|P|}h_{i}^{\text{pair}}(\mathbf{v}_{i},\mathbf{y}_{i})=\theta\mathcal{L}^{\text{pair}}(\mathbf{u},\mathbf{y})+(1-\theta)\mathcal{L}^{\text{pair}}(\mathbf{v},\mathbf{y}),\tag{40}$$
which proves the convexity of L
pair.
Regarding the Lipschitz property, we first show that h pair iholds the property:
$$|h_{i}^{\text{part}}(\mathbf{u}_{i},\mathbf{y}_{i})-h_{i}^{\text{part}}(\mathbf{v}_{i},\mathbf{y}_{i})|=\left[(-u_{i}^{+}+u_{i}^{-}+\alpha)-(-v_{i}^{+}+v_{i}^{-}+\alpha)\right]^{+}=\left[-u_{i}^{+}+u_{i}^{-}-v_{i}^{-}+u_{i}^{-}\right]^{+}.\tag{41}$$ Note that $y_{\text{min}}\leq u_{i}^{+},u_{i}^{-},v_{i}^{+},v_{i}^{-}\leq y_{\text{max}}$, since we take the non-negative values in (41). Thus,
$$|h_{i}^{\mathrm{pair}}({\bf u}_{i},{\bf y}_{i})-h_{i}^{\mathrm{pair}}({\bf v}_{i},{\bf y}_{i})|\leq2(y_{\mathrm{max}}-y_{\mathrm{min}}).$$
i(vi, yi)| ≤ 2(ymax − ymin). (42)
Similarly, applying the aforementioned observation, we have:
$$|{\bf u}_{i}-{\bf v}_{i}|=\left|u_{i}^{+}-v_{i}^{+}\right|+\left|u_{i}^{-}-v_{i}^{-}\right|\geq2(y_{\rm max}-y_{\rm min}).$$
Combining (42) and (43) leads to:
$$|h_{i}^{\mathrm{pair}}({\bf u}_{i},{\bf y}_{i})-h_{i}^{\mathrm{pair}}({\bf v}_{i},{\bf y}_{i})|\leq\gamma^{\mathrm{pair}}|{\bf u}_{i}-{\bf v}_{i}|,$$
pair|ui − vi|, (44)
, that $\gamma^{\mathrm{pair}}\geq1.$ Ad.
such that $\gamma^{\text{pair}}\geq1$. Adopting the summation of (44) on all $i\in\{1,2,\ldots,|P|\}$, we obtain: $$|\mathcal{L}^{\text{pair}}(\mathbf{u},\mathbf{y})-\mathcal{L}^{\text{pair}}(\mathbf{v},\mathbf{y})|=\left|\sum_{i=1}^{|P|}h_{i}^{\text{pair}}(\mathbf{u}_{i},\mathbf{y}_{i})-\sum_{i=1}^{|P|}h_{i}^{\text{pair}}(\mathbf{v}_{i},\mathbf{y}_{i})\right|\leq\gamma^{\text{pair}}\sum_{i=1}^{|P|}|\mathbf{u}_{i}-\mathbf{v}_{i}|=\gamma^{\text{pair}}|\mathbf{u}_{i}|.$$ The Lipschitz property of $\mathcal{L}_{\text{pair}}$ follows result (45).
all $i\in\{1,2,...,|P|\}$, we
pair|u − v|.
$$(42)$$
$$(43)$$
$$\mathbf{\nabla}-\mathbf{v}|\,.$$
Theorem 2. Let L
list and L
pair are γ list*-Lipschitz and* γ pair-Lipschitz, respectively. Then, the following inequality holds:
pair. (46)
$$\gamma^{\mathrm{list}}\leq\gamma^{\mathrm{pair}}.$$
Proof. In order to prove Theorem (2), we first need to find the formulation of γ list and γ pair. We leverage the following lemma:
Lemma 3. A function L is γ-Lipschitz, if γ satisfies the following condition (Akbari et al., *2021):*
$$\gamma=\operatorname*{sup}_{f_{i,j}}\left|{\mathcal{L}}_{i,j}^{\prime}(f_{i,j})\right|.$$
. (47)
With the foundation in mind, we take the derivative of L
list i,j and L
pair i,j :
$$(\mathcal{L}_{i,j}^{\text{int}}(f_{i,j}))^{\prime}=\left[y^{\prime}_{i,j}\log\frac{\sum\limits_{t=1}^{|R_{i}|}\exp(f_{i,t})}{\exp(f_{i,j})}\right]^{\prime}=y^{\prime}_{i,j}\left[\frac{\exp(f_{i,j})}{\sum\limits_{t=1}^{|R_{i}|}\exp(f_{i,t})}-1\right]=-y^{\prime}_{i,j}\left[\frac{\sum\limits_{t=1}^{|R_{i}|}\exp(f_{i,t})}{\sum\limits_{t=1}^{|R_{i}|}\exp(f_{i,t})}\right]\,\tag{48}$$ $$(\mathcal{L}_{i,j}^{\text{int}}(f_{i,j}))^{\prime}=\pm1.\tag{49}$$
(50) $\blacksquare$
(48) and (49) imply that
$$\left|\left[\mathcal{L}_{i,j}^{\mathrm{list}}(f_{i,j})\right]^{\prime}\right|\leq y_{i,j}^{\prime}\leq1=\left|\left[\mathcal{L}_{i,j}^{\mathrm{pair}}(f_{i,j})\right]^{\prime}\right|.$$ L Lemma (3), we obtain $\gamma^{\mathrm{list}}\leq\gamma^{\mathrm{pair}}$. $L^{\mathrm{list}}$_and $0\leq\mathcal{L}^{\mathrm{pair}}\leq L^{\mathrm{pair}}$. Then, the following inequality holds:_
Combining equation (50) and Lemma (3), we obtain γ Theorem 3. Let 0 ≤ Llist ≤ L
list and 0 ≤ Lpair ≤ L
$$L^{\mathrm{list}}\leq L^{\mathrm{pair}}.$$
pair. (51)
Llist = − X |P| i=1 X |Ri| j=1 y ′i,j log f ′ i,j (52) j=1 y ′i,j log |P Ri| t=1 exp(fi,t) = X |P| i=1 X |Ri| exp(fi,j ) j=1 y ′i,j t=1 exp fi,t − fi,j = X |P| i=1 X |Ri| logX |Ri| (54) j=1 y ′i,j log t=1 exp(fi,t) − fi,j + log |Ri| = X |P| i=1 X |Ri| 1 |Ri| X |Ri| (55) j=1 y ′i,j t=1 fi,t − fi,j + log |Ri| ≤ X |P| i=1 X |Ri| 1 |Ri| X |Ri| (56) ≤ X |P| i=1 X |Ri| j=1 y ′i,j (f max − f min + log |Ri|) (57) = |P|(f max − f min) + |P| log |Ri|, (58) where f min ≤ fi,j ≤ f max, ∀i, j. Now, such bounds of fi,j on L pair yields:
$$(53)$$
$$({\mathfrak{S}}{\mathfrak{2}})$$
$$({\mathsf{S}}{\mathsf{4}})$$
$$(\mathbf{55})$$
$$(56)$$
(57) $\binom{58}{5}$ .
$$(59)$$
$${\mathcal{L}}^{\mathrm{pair}}=\sum_{i=1}^{|P|}\left[-f_{i,r^{+}}+f_{i,r^{-}}+\alpha\right]^{+}\leq|P|(f^{\mathrm{max}}-f^{\mathrm{min}})+|P|(y^{\mathrm{max}}-y^{\mathrm{min}}),$$
min), (59)
where $y^{\max}=\max\limits_{1\leq i\leq|P|1\leq j\leq|R_{i}|}(y_{i,j})$, $y^{\min}=\min\limits_{1\leq i\leq|P|1\leq j\leq|R_{i}|}(y_{i,j})$. Note that Table 5 reveals that $\max|R_{i}|\leq2043$. Therefore, $\log|R_{i}|\leq3.31$, whereas $y^{\max}-y^{\min}=4$, giving rise to the conclusion $\log|R_{i}|\leq y^{\max}-y^{\min}$. Therefore, $$L^{\rm lat}\leq L^{\rm pair},\tag{60}$$
Proof. Adoption of Jensen's inequality on Llist gives:
which concludes the proof of Theorem (3).
Theorem 4. *Consider two models* f list Dand f pair Dlearned under common settings utilizing listwise and pairwise ranking losses, respectively, on dataset D = {pi, {ri,j}
|Ri| j=1}
|P| i=1. Then, we have the following inequality:
E(f list D ) ≤ E(f pair D). (61)
where E(fD) = Rtrue(fD) − Remp(fD).
The inequality immediately follows from Theorems (1), (2) and (3). From Theorems (1) and (2),
because T and N are constant, the second term of L
list is always smaller than that of L
pair. From Theorems (1) and (3), we realize that L
list ≤ L
pair, thus proving the smaller value of the first term of L
list.
## B Dataset Statistics
| Dataset | Category | Train | Dev | Test | Max #R/P |
|-----------|------------|----------|---------|--------|------------|
| CS&J | 12K/277K | 3K/71K | 4K/87K | 691 | |
| Amazon | Elec. | 10K/260K | 3K/65K | 3K/80K | 836 |
| H&K | 15K/370K | 4K/93K | 5K/111K | 2043 | |
| CS&J | 7K/104K | 2K/26K | 2K/32K | 540 | |
| Lazada | Elec. | 4K/42K | 1K/11K | 1K/13K | 346 |
| H&K | 3K/37K | 1K/10K | 1K/13K | 473 | |
In this section, we provide dataset statistics of the Amazon and Lazada datasets on the MRHP task. All of the numerical details are included in Table 5.
Table 5: Statistics of MRHP datasets. Max \#R/P denotes the maximum number of reviews associated with each product.
## C Generalization Errors Of The Models Trained With Listwise And Pairwise Ranking Losses
In this Appendix, we illustrate the empirical evolution of generalization errors of pairwise-trained and listwise-trained models on the remaining categories of the Amazon-MRHP and Lazada-MRHP datasets.
The discovered characteristics regarding generalization in Figures 5 and 6 agree with those in Section 4.6, corroborating the intensified generalizability of our proposed listwise ranking loss.
![15_image_0.png](15_image_0.png)
![15_image_1.png](15_image_1.png)
## D Analysis Of Partitioning Function Of Gradient-Boosted Decision Tree
We examine the partitioning operation of our proposed gradient-boosted decision tree for the multimodal review helpfulness prediction. In particular, we inspect the µ = [µ1, µ2, . . . , µ|L|] probabilities, which route review features to the target leaf nodes in a soft approach. Our procedure is to gather µ at the leaf nodes for all reviews, estimate their mean value with respect to each leaf, then plot the results on Clothing and Home of the Amazon and Lazada datasets, respectively, in Figures 7, 8, 9, 10, and 11.
From the figures, we can observe our proposed gradient-boosted decision tree's behavior of assigning high routing probabilities {µi}
|L| i=1 to different partitions of leaf nodes, with the partitions varying according to the helpfulness scale of the product reviews. In consequence, we can claim that our GBDT divides the product reviews into corresponding partitions to their helpfulness degrees, thus advocating the partitioned preference of the input reviews.
![16_image_0.png](16_image_0.png)
µi
![16_image_1.png](16_image_1.png)
![16_image_2.png](16_image_2.png)
![17_image_0.png](17_image_0.png)
![17_image_1.png](17_image_1.png)
![17_image_2.png](17_image_2.png)
E Examples of Product and Review Samples We articulate product and review samples in Figure 1, comprising their textual and visual content, with the helpfulness scores generated by Contrastive-MCR (Nguyen et al., 2022), whose score predictor is FCNN-based, and our GBDT-based model.
## Product B00005Mg3K
Libbey Imperial 16-Piece Tumbler and Rocks Glass Set
![18_image_0.png](18_image_0.png)
| Review Information | NN-based | Tree-based |
|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|--------------|
| Score | Score | |
| Review 1 - Label: 1 | 1.467 | -0.724 |
| These are fun, but I did learn that ice maker ice shaped like little half moon as many USA freezers have as their automatic ice maker, fit the curves of this class perfectly and will use surface water tension cohesion to slide up the glass inside to your mouth and act like a dam to block your drink believe it or not. So i have gotten used to that for personal use and know how to tilt the glass now, but when friends come, I use square tubes from an ice tray so I don't have to explain it to them or chance them spilling on themselves. Review 2 - Label: 1 | 1.147 | -0.874 |
| If I could give less than a star I would. I am very disappointed in how low quality this product is and would not recommend buying it. Review 3 - Label: 1 | 6.622 | -0.964 |
| Very cool & futuristic looking. Review 4 - Label: 1 | 1.731 | -0.868 |
| These are attractive glasses which seem a good deal more classy than the cost here would imply. They feel higher end and when you plink one with your fingernail it'll give off a fine crystal like ring. They are every bit as attractive as they look in the pictures. Review 5 - Label: 3 | 0.494 | 0.882 |
| Mixed reviews did not deviated me from getting this set. Just the add shape is a turn on. A very well packed box arrived bubble wrap with every glass intact. The glasses are beautiful and everything I expected. One thing though, It's interesting that there is only one picture on the page. This picture shows no detail. Used to many types of glass drinkware, the first thing I noticed is the "seams" on each glass (see pictures). This makes obvious the fact that these are mold made. This is the reason for 4 stars. Being using them for just a couple of weeks by the time I wrote this review. Will update as time goes on. Table 6: Generated helpfulness scores on reviews 1-5 for product B00005MG3K. | | |
| Review Information | NN-based | Tree-based |
|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|--------------|
| Score | Score | |
| Review 6 - Label: 1 | 0.044 | -0.778 |
| I hate going through the hassle of returning things but it had to be done with this purchase. Review 7 - Label: 1 | 0.684 | -0.800 |
| The short glasses are nice, but the tall ones break easily. SUPER easily. I had two of them break just by holding them. I will absolutely not be reordering this. Review 8 - Label: 1 | 0.443 | -0.897 |
| I love these. We had them in a highly stylized Japanese restaurant and were psyched to find them here. Tall glasses have a "seam". No tipping or breakage yet as mentioned by other reviewers. Review 9 - Label: 2 | 2.333 | 0.435 |
| It's true that the taller 18-oz glasses are delicate. If you're the kind of person who buys glassware expecting every glass to last 20 years, this set isn't for you. If you're the kind of person who enjoys form over function, I'd highly recommend them. Review 10 - Label: 1 | 6.074 | -0.844 |
| Quality is good. Does not hold water from the underside if you put it in the dishwasher. Review 11 - Label: 1 | 2.615 | -0.923 |
| I have owned these glasses for 20-plus years. After breaking most of the tall ones, I looked around for months to find great glasses but still thought these were the best, so I bought more. Review 12 - Label: 3 | 7.529 | 0.836 |
| I am sooooooo disappointed in these glasses. They are thin. Of course, right after opening we put in the dishwasher and upon taking them out it looked like they were washed with sand! We could even see the fingerprints. And we have a watersoftener! In the photo I have included, this is after one dishwasher washing! Table 7: Generated helpfulness scores on reviews 6-12 for product B00005MG3K. | | |
## Product B00Q82T3Xe
Dasein Frame Tote Top Handle Handbags Designer Satchel Leather Briefcase Shoulder Bags Purses
![21_image_0.png](21_image_0.png)
| Review Information | NN-based | Tree-based |
|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|--------------|
| Score | Score | |
| Review 1 - Label: 1 | 0.281 | -0.192 |
| I really loved this and used it to carry my laptop to and from work. I used the cross-body strap. However, the metal hardware of the strap broke after three months, and the stitching where the cross-body strap attached to the purse ripped off the same week. Love this ourselves but the handles are too short for me to wear comfortably without the cross body strap. Review 2 - Label: 1 | 2.938 | -0.138 |
| Hello, I am Alicia and work as a researcher in the health area. Moreover, I was looking for a feminine, classical and practical bag-briefcase for my work. I would like to begin with the way you show every product. I love when I can see the inner parts and the size of the bag, not only using measures but when you show a model using the product too. Also, the selection of colour is advantageous a big picture with the tone selected. There are many models, sizes and prices. I consider that is a right price for the quality and design of the product. The products I bought have a high-quality appearance, are professional and elegant, like in the pictures! I was not in a hurry, so I was patient, and the product arrived a couple of days before the established date. The package was made thinking in the total protection of every product I bought, using air-plastic bubbles and a hard carton box. Everything was in perfect conditions. I use them for every day- work is very resistant, even in rain time I can carry many things, folders and sheet of paper, a laptop. Their capacity is remarkable. The inner part is very soft and stands the dirty. I am enjoying my bags! All the people say they are gorgeous! Review 3 - Label: 1 | 0.460 | -0.226 |
| This purse has come apart little by little within a month of receiving it. First the thread that held on the zipper began to unravel. Then the decorative seam covering began to come off all over the purse. Yesterday I was on my way into the grocery and the handle broke as I was walking. I've only had it a few months. Poorly made. Review 4 - Label: 1 | -0.646 | -0.067 |
| I bought this because of reviews but i am extremely disappointed... This bag leather is too hard and i don't think i will use it Review 5 - Label: 2 | 5.094 | -0.493 |
| There are slight scratches on the hardware otherwise great size and it's a gorgeous bag. Got it for use while I'm in a business casual environment. Table 8: Generated helpfulness scores on reviews 1-5 for product B00Q82T3XE. | | |
| Review Information | NN-based | Tree-based |
|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|--------------|
| Score | Score | |
| Review 6 - Label: 1 | -1.794 | -0.222 |
| Tight bag, has no flexibility. stiff. But I do receive a lot of compliments. Review 7 - Label: 1 | 0.819 | -0.284 |
| I love this bag!!! I use it every day at work and it has held up to months of use with no sign of wear and tear. It holds my laptop, planner, and notebooks as well as my large wallet and pencil case. It holds so much! I've gotten so many compliments on it. It feels and looks high quality. Review 8 - Label: 3 | 0.259 | 0.939 |
| This bag is perfect! It doubles as somewhat of a "briefcase" for me, as it fits my IPad, planner, and files, while still accommodating my wallet and normal "purse" items. My only complaint was that Jre scratches already on the gold metal accents when I unwrapped it from the packaging. Otherwise- great deal for the price! Review 9 - Label: 2 | 2.695 | 0.462 |
| I believe this the most expensive looking handbag I have ever owned. When your handbag comes in its own bag, you are on to something wonderful. I also purchased a router in the same order, and I'm serious, the handbag was better wrapped and protected. Now for a review : The handbag is stiff, but I expected that from other reviews. The only reason I didn't give a five star rating is because it is not as large as I hoped. A laptop will not fit. Only a tablet. This is a regular good size purse, so don't expect to be able to carry more than usual. I probably won't be able to use it for my intented purpose, but it is so beautiful, I don't mind. Review 10 - Label: 1 | -0.235 | -0.189 |
| Look is great can fit HP EliteBook 8470p (fairly bulky laptop 15 inch), but very snug. I can only fit my thin portfolio and the laptop into bag. Review 11 - Label: 1 | 6.290 | -0.194 |
| This bag is really great for my needs for work, and is cute enough for every day. Other reviews are correct that this is a very stiff-leather bag, but I am fine with that. I love the color and the bag is super adorable. I get so many compliments on this. Also, I travelled recently and this was a perfect bag to use as your "personal item" on the airplane- it zips up so you don't have to worry about things falling out and is just right for under the seat. I love the options of having handles AND the long strap. I carry an Iphone 6+ (does not fit down in the outside pocket completely but I use the middle zipper pouch for my tech), wallet, glasses, sunglasses, small makeup bag, a soapdish sized container that I use for holding charger cords (fits perfect in the inside liner pockets), and on the other side of the zipper pouch I carry an A5-sized Filofax Domino. Table 9: Generated helpfulness scores on reviews 6-11 for product B00Q82T3XE. | | |
| Review Information | NN-based | Tree-based |
|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------|--------------|
| Score | Score | |
| Review 12 - Label: 3 | 2.262 | 0.923 |
| Absolutely stunning and expensive looking for the price. I just came back from shopping for a tote bad at Macy's and so I had the chance to look and feel at all the different bags both high end brand names and generic. This has a very distinguished character to it. A keeper. The size it rather big for an evening out as long as it is not a formal one. I like that it can accommodate a tablet plus all other things we women consider must haves. The silver metal accents are just of enough amount to give it ump but not superfluous to make it look tacky. The faux ostrich material feel so real. The whole bag is very well balance. Inside it has two zippered pockets and two open pockets for cell phone and sun glasses. Outside it has one zippered pocket by the back. I won't be using the shoulder strap too much as the the handles are long enough to be carried on the shoulders. Review 13 - Label: 4 | 7.685 | 1.969 |
| I added pictures. I hate the fact that people selling things do not give CLEAR defined pictures. This purse was well shipped. Not one scratch... and I don't think there COULD have been a scratch made in shipping. The handles and the bottom are a shiny patent leather look. The majority of the case is a faux ostrich look. It has a 'structure' to it. Not a floppy purse. There is a center divider that is soft and has a zipper to store things. One side (inside) has two pockets that do not zipper. One side (inside) has a zippered pocket. It comes with a long shoulder strap. Please see my photos. So far I really like this purse. The water bottle is a standard 16.9oz. Review 14 - Label: 2 | 2.309 | 0.584 |
| Love this purse! | When I opened the package it seemed like it was | |
| opening purse I had purchased for $450.00 it was packaged so nicely!! Every little detail of the purse was covered for shipping protection. This was/is extremely impressive to me for a purse I paid less than $40.00 for. Wow. It's roomie & has many pockets inside. And med/large purse I'd say, but I like that it's larger in length than height. It's very classic looking yet different with texturing. I always get many compliments on it. Believe me I have Many purses & currently this is one of my favorites!! I have already & will continue to purchase Dasein brand handbags. Table 10: Generated helpfulness scores on reviews 12-14 for product B00Q82T3XE. | | |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 7 A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4
✓ B1. Did you cite the creators of artifacts you used?
Section 4
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix B
## C ✓ **Did You Run Computational Experiments?** Section 4
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? Not applicable. Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4 C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Not applicable. Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
zeng-etal-2023-extract | Extract and Attend: Improving Entity Translation in Neural Machine Translation | https://aclanthology.org/2023.findings-acl.107 | While Neural Machine Translation (NMT) has achieved great progress in recent years, it still suffers from inaccurate translation of entities (e.g., person/organization name, location), due to the lack of entity training instances. When we humans encounter an unknown entity during translation, we usually first look up in a dictionary and then organize the entity translation together with the translations of other parts to form a smooth target sentence. Inspired by this translation process, we propose an Extract-and-Attend approach to enhance entity translation in NMT, where the translation candidates of source entities are first extracted from a dictionary and then attended to by the NMT model to generate the target sentence. Specifically, the translation candidates are extracted by first detecting the entities in a source sentence and then translating the entities through looking up in a dictionary. Then, the extracted candidates are added as a prefix of the decoder input to be attended to by the decoder when generating the target sentence through self-attention. Experiments conducted on En-Zh and En-Ru demonstrate that the proposed method is effective on improving both the translation accuracy of entities and the overall translation quality, with up to 35{\%} reduction on entity error rate and 0.85 gain on BLEU and 13.8 gain on COMET. | # Extract And Attend: Improving Entity Translation In Neural Machine Translation
Zixin Zeng1∗
, Rui Wang2, Yichong Leng3, Junliang Guo2, Xu Tan2,Tao Qin2,**Tie-yan Liu**2 1 Peking University, 2 Microsoft Research Asia 3 University of Science and Technology of China [email protected] 2 {ruiwa,junliangguo,xuta,taoqin,tyliu}@microsoft.com [email protected]
## Abstract
While Neural Machine Translation (NMT) has achieved great progress in recent years, it still suffers from inaccurate translation of entities
(e.g., person/organization name, location), due to the lack of entity training instances. When we humans encounter an unknown entity during translation, we usually first look up in a dictionary and then organize the entity translation together with the translations of other parts to form a smooth target sentence. Inspired by this translation process, we propose an Extract-andAttend approach to enhance entity translation in NMT, where the translation candidates of source entities are first extracted from a dictionary and then attended to by the NMT model to generate the target sentence. Specifically, the translation candidates are extracted by first detecting the entities in a source sentence and then translating the entities through looking up in a dictionary. Then, the extracted candidates are added as a prefix of the decoder input to be attended to by the decoder when generating the target sentence through self-attention. Experiments conducted on En-Zh and En-Ru demonstrate that the proposed method is effective on improving both the translation accuracy of entities and the overall translation quality, with up to 35% reduction on entity error rate and 0.85 gain on BLEU and 13.8 gain on COMET.
## 1 Introduction
Neural machine translation (NMT) automatically translates sentences between different languages, which has achieved great success (Bahdanau et al.,
2015; Sutskever et al., 2014; He et al., 2016; Song et al., 2019; Wang et al., 2021). Most current works consider to improve the overall translation quality.
However, the words in a sentence are not equally important, and the translation accuracy of named entities (e.g., person, organization, location) largely affects user experience, an illustration of which is
∗This work was completed at Microsoft. Corresponding authors: Rui Wang and Xu Tan ({ruiwa,xuta}@microsoft.com).
shown in Table 1. Unfortunately, the translation accuracy of named entities in a sentence is not quite good with current NMT systems (Hassan et al.,
2018; Läubli et al., 2020) due to the lack of training instances, and accordingly more effort is needed.
Recalling the process of human translation, when encountering an unknown entity in a sen tence, humans look up the translation of the entity in mental or external dictionaries, and organize the translation of the entity together with the translations of other parts to form a smooth target sentence based on grammar and language sense
(Gerver, 1975; Cortese, 1999). As the original intention of neural networks is to mimic the human brain, the human translation process is also an important reference when dealing with entities in NMT. However, none of the previous works on improving the entity translation in NMT consider both steps in human translation: 1) some works annotate the types and positions of the entities without using the dictionary (Li et al., 2018b; Modrzejewski et al., 2020); 2) some works first extract the entity translations from a dictionary (Wang et al., 2017)
or an entity translation model (Li et al., 2018a; Yan et al., 2019; Li et al., 2019), and then directly use them to replace the corresponding entities in the translated sentence via post-processing, which only takes the first step of human translation and may affect the fluency of the target sentence; 3) a couple of works use data augmentation or multi-task training to handle the entities in NMT (Zhao et al.,
2020a; Hu et al., 2022), which do not explicitly obtain the translation for each entity as the first step in human translation.
Inspired by the human translation process, we propose an Extract-and-Attend approach to improve the translation accuracy of named entities in NMT. Specifically, in the "Extract" step, translation candidates of named entities are extracted by first detecting each named entity in the source sentence and then translating to target language
| Source | 北岛的绘画展在巴黎地平线画廊开幕。 |
|-----------|-----------------------------------------------------------------------|
| Reference | Bei Dao's painting exhibition opens at Horizon Gallery in Paris. |
| Output 1 | North Island's painting exhibition opens at Horizon Gallery in Paris. |
| Output 2 | Bei Dao's picture exhibition opens on Horizon Gallery in Paris. |
based on the dictionary. Considering that some types of entities (e.g. person names) have relatively high diversity and low coverage in dictionaries, we also develop a transliteration1 pipeline to handle the entities uncovered by the dictionary. In the "Attend" step, the extracted candidates are added to the beginning of the decoder input as a prefix to be attended to by the decoder via self-attention. The Extract-and-Attend approach enjoys the following advantages: 1) the translation candidates of the named entities are explicitly extracted and incorporated during translation, which provides specific references for the decoder to generate the target sentence; 2) the extracted candidates are incorporated via self-attention instead of hard replacement, which considers the context of the whole sentence and leads to smooth outputs. The main contributions of this paper are summarized as follows:
- We propose to mimic the human translation process when dealing with entities in NMT, including extracting the translations of entities based on dictionary and organizing the entity translations together with the translations of other parts to form a smooth translation.
- Accordingly, we propose an Extract-and-Attend approach to improve the quality of entity translation in NMT, which effectively improves the translation quality of the named entities.
- Experiments conducted on En-Zh and EnRu demonstrate that the proposed Extract-andAttend approach significantly reduces the error rate on entity translation. Specifically, it reduces the entity error rate by up to 35% while also improving BLEU by up to 0.85 points and COMET
up to 13.8 points.
## 2 Related Work
To improve the entity translation in NMT, some works focus on annotating named entities to pro-1Transliteration is to convert between languages while keeping the same pronunciation (Karimi et al., 2011).
vide type and position information. For example, the inline annotation method (Li et al., 2018b)
inserts special tokens before and after the entities in the source sentence. The source factor method (Ugawa et al., 2018; Modrzejewski et al.,
2020) adds entity type embeddings to the tokens of the entities in the encoder. Xie et al. (2022) attach entity classifiers to the encoder and decoder. One main challenge when dealing with entities is that the entities are quite diverse while the corresponding data is limited compared to the large number of entities. Dictionaries are important supplements to the limited data on entities, which are not utilized in these works.
With the help of bilingual dictionaries, one common approach to improve the entity translation in NMT is to first extract the translation of source entities based on a dictionary (Wang et al., 2017)
or an entity translation model (Li et al., 2018a; Yan et al., 2019; Li et al., 2019), and then locate and replace the corresponding tokens in the target sentence via post-processing. However, such approach only takes the first step of human translation
(i.e., extracting the entity translations), since the entity translations are inserted to the target sentence by hard replacement, which affects the fluency of the target sentence. Moreover, this approach is sensitive to the inaccurate predictions made by NER (Modrzejewski et al., 2020).
Recently, some works take advantage of additional resources (e.g., dictionary) via data augmentation or multi-task training to improve the translation quality on entities. Zhao et al. (2020b)
augment the parallel corpus based on paired entities extracted from multilingual knowledge graphs, while DEEP (Hu et al., 2022) augments monolingual data with paired entities for a denoising pretraining task. The entity translation can also be enhanced by multi-task training with knowledge reasoning (Zhao et al., 2020a) and integrating lexical constraints (Wang et al., 2022). These methods don't look up translation candidates in bilingual
![2_image_0.png](2_image_0.png)
dictionares during inference. Considering that entities are quite diverse, providing specific translation candidates from dictionary may further improve the quality of entity translation.
Bilingual dictionaries are also utilized for improving translation quality on rare words or domain-specific terminology. One common approach is to augment training data with pseudo parallel sentences generated based on the dictionary (Zhang and Zong, 2016; Nag et al., 2020; Zhao et al., 2020b; Peng et al., 2020). Some works adjust the output probabilities over the vocabulary in the decoder according to the dictionary (Arthur et al., 2016; Zhao et al., 2018; Zhang et al., 2021).
Zhong and Chiang (2020) attach the definitions of the rare words in the dictionary to enhance the rare word translation. Similarly, Dinu et al. (2019)
and Exel et al. (2020) proposed to inject terminology by replacing or inserting translations inline in the source sentence. Though the human translation process when encountering an unknown rare word/terminology or entity is the same, we argue that the two-step human translation process is more suitable for entities. This is because rare words can be polysemous and require context-based disambiguation; on the other hand, each entity is usually linked with a single sense after controlling for entity type. Accordingly, retrieved translations of entities are less ambiguous than other words. On the contrary, domain-specific terminology always has a single sense which has little relevant to context, and thus it is usually with much higher accuracy to identify the terminologies in the domain-specific sentences than entities. Another uniqueness of entities is that some entities are translated by the same rule, which makes it possible to generalize to unseen entities. For example, when translating the names of Chinese people from Chinese to English, Pinyin2is commonly used.
## 3 Improving Entity Translation In Nmt
Inspired by the translation process of humans when encountering an unknown entity, where the translation of the entity is extracted from a dictionary and then organized with the translations of other parts to form a fluent target sentence, we propose an Extract-and-Attend approach. Specifically, we first extract the translation candidates of the entities in the source sentence, and then attend the translation candidates into the decoding process via self-attention, which helps the decoder to generate a smooth target sentence based on the specific entity translations. An overview of the proposed Extract-and-Attend approach is shown in Fig. 1, where a Transformer-based (Vaswani et al., 2017)
encoder-decoder structure is adopted. Specifically, to extract the translation candidates, entities in the source sentence are first detected based on NER
(Li et al., 2020), then the translation candidates are obtained from a bilingual dictionary. Considering that some types of named entities (e.g., person 2https://en.wikipedia.org/wiki/Pinyin
![3_image_0.png](3_image_0.png)
names) are quite diverse and the coverage in the dictionary of such entities is limited, we also develop a transliteration pipeline to handle entities uncovered by the dictionary. To make the decoder attend to the translation candidates, we add the translation candidates in order as a prefix of the decoder input. In the following sections, we will provide the details of "Extract" and "Attend".
## 3.1 Extracting Translation Candidates
Extracting the translation candidates for entities in the source sentence provides explicit references when generating the target sentence in NMT. There are two steps when extracting the entity translation candidates, where the entities in the source sentences are first detected by NER and then translated to the target language. If the entity is found in the bilingual dictionary, we retrieve its translation(s). Although there may be multiple translation candidates for one entity, the entity usually links to a single sense after disambiguating by entity type, and the multiple candidates in the dictionary for one named entity are commonly all correct. For example, "John Wilson" can be translated to "约 翰·维尔逊" or "约翰·威尔森". During training, we consider the one with shortest Levenshtein distance3compared to the ground truth translation to encourage the decoder to copy the given candidate. During inference, considering that only the source sentence is available, we select the one with highest frequency in the training set.
The coverage in the dictionary is limited for some types of entities (e.g, person names). Meanwhile, a large number of named entities (e.g.,
person names and some of locations) are translated by transliteration (i.e., translated according 3https://en.wikipedia.org/wiki/Levenshtein_distance to the pronunciations). Accordingly, we consider to use transliteration to handle such entities if they are uncovered by the dictionary. Transliteration in different countries often follow different rules. For example, names of Chinese persons are transliterated into English via Pinyin, while names of Korean persons are often transliterated via McCune-Reischauer4. Current transliteration models (Kundu et al., 2018; Karimi et al., 2011; Le et al., 2019) do not consider different nationalities for a single language pair, which is an important cause for transliteration errors. Considering this, we develop a nationality-aware transliteration pipeline, which consists of a nationality classifier and a nationality-aware transliteration model. As shown in Fig. 2, the nationality classifier takes the source entity and source sentence as input, and predicts the nationality of the entity. Then, the nationality tag is concatenated with the entity and translated by the word-level transliteration model.
## 3.2 Attending To Translation Candidates
We consider to let the decoder attend to the extracted translation candidates via self-attention, which has shown to be more effective in improving entity translation compared to alternative designs
(see Section 5.3). Accordingly, we concatenate extracted candidate translations with "[SEP]" and place it before the "<bos>" token of the decoder input. In order to identify the alignments between the translation candidates and the corresponding entities in the source sentence, we add entity type embeddings to word embeddings of the entities in the source sentence as (Modrzejewski et al., 2020),
and concatenate the corresponding translation candidates in the same order as they are in the source 4https://en.wikipedia.org/wiki/McCune-Reischauer sentence. We demonstrate that our model can correctly align the entities and the corresponding translation candidates in Appendix A.1 via case study.
We use independent position embeddings for the translation candidates and the target sentence as shown in Fig. 1. The loss on the tokens of translation candidates is ignored. In this way, the decoder can attend to the translation candidates through the attention mechanism in the decoder, which helps improve the performance of the model on translating entities.
## 4 Experimental Settings
In this section, we describe experimental settings including datasets, model configurations, the evaluation criterion and baselines.
## 4.1 Datasets
We conduct experiments on English-Chinese (EnZh) and English-Russian (En-Ru) translation. We chose language pairs so that the source and target languages come from different scripts5, because cross-script entity translation is more challenging.
Following Modrzejewski et al. (2020), three types of named entities are considered, i.e., person name, organization and location. Note that the proposed framework is not limited to the three types and can be applied to other entities (e.g., domain entities).
Entity dictionary. Entity pairs and corresponding nationality information are obtained from two multilingual knowledge graphs (i.e., DBPedia and Wikidata). For En-Ru, we extract 401K, 175K and 50K pairs of PER, LOC and ORG entities respectively. For En-Zh, we extract 338K, 200K, 38K
pairs of PER, LOC and ORG entities respectively.
Besides, we increase the coverage of the entity dictionary by mining entity pairs from parallel data.
First, we use spaCy NER models6to recognize entities from parallel sentences, then use awesomealign (Dou and Neubig, 2021) to align the source and target tokens and extract the corresponding translations. Infrequent entity pairs or empty alignment results are filtered out. Specifically, we obtain 179K person names, 51K locations, and 63K organizations for En-Ru, and 152K person names, 32K
locations, and 39K organizations for En-Zh.
Dataset for transliteration pipeline. Most person names and part of locations can be translated by transliteration. Because the dictionary has relatively high coverage for location entities, we train the transliteration pipeline based on parallel person names, and use it for both person names and unseen locations. To train the nationality classifier, we extract English biographies from DBPedia and link them to the entity dictionary, which are translated into Chinese and Russian with custom NMT models. In total, we collect 54K sentences with person names and nationalities, where 48.2K,
1.5K and 3.9K of them are used as training set, validation set and test set, respectively. We also merge countries that share the same official language (e.g. USA and UK), and regard the nationalities with fewer than 1000 examples as "Other".
For the nationality-aware transliteration model, the paired person names with nationality information from the collected entity dictionary are used. For En-Zh, 316K, 5K, and 17K are used as training set, validation set and test set respectively, and for En-Ru, 362K, 13K, 26K are used as training set, validation set and test set respectively. Besides, we also collect common monolingual person names from various databases7, and create pseudo entity pairs via back translation (Sennrich et al., 2016).
In total, 10K, 1.6M and 560K entities are collected for English, Chinese and Russian respectively.
Dataset for NMT model. The training data is obtained from UN Parallel Corpus v1.0 and News Commentary Corpus v158. The test data is constructed by concatenating test sets of the WMT
News Translation Task (2015-2021) and deduplicating samples. Dataset statistics are shown in Table 2. For En-Zh, there are 6.6K PER entities, 4.4K ORG entities and 1.9K LOC entities. For En-Ru, there are 4.9K PER entities, 2.5K ORG
entities and 1.2K LOC entities. We use Moses9to tokenize English and Russian corpus, and perform word segmentation on Chinese corpus with jieba10.
We perform joint byte-pair encoding (BPE) by subwordnmt11 with a maximum of 20K BPE tokens.
![5_image_0.png](5_image_0.png)
![5_image_1.png](5_image_1.png)
Table 2: Statistics of NMT datasets (Train/Val/Test).
## 4.2 Model Configurations And Training Pipeline
The nationality classifier is fine-tuned from pretrained BERT checkpoint (base, cased) available on HuggingFace12. Both the NMT model and the nationality-aware transliteration model use Transformer base architecture (Vaswani et al., 2017) with 6-layer encoder and decoder, hidden size as 512 and 8 attention heads.
## 4.3 Evaluation Criterion And Baselines
To evaluate the overall translation quality, we compute BLEU and COMET (Rei et al., 2020) scores13.
To evaluate the translation quality on entities, we consider using error rate of entity translation as the evaluation criterion. Following Modrzejewski et al.
(2020), we evaluate entity error rate by recognizing named entities from the reference sentence, and then checking occurrence in the output sentence, where it is regarded as error if it does not occur.
We compare our Extract-and-Attend approach with the following baselines14:
- *Transformer.* The Transformer model is directly trained on parallel corpus.
- *Transformer with Dictionary.* The entity dictionary is directly added to the parallel corpus to train a transformer model.
- *Replacement.* After identifying entities in the source sentence with NER and aligning them with target tokens, the corresponding tokens are replaced by translation candidates.
- Placeholder (Yan et al., 2019; Li et al., *2019).* It first replaces the entities in the source sentence with placeholders based on NER and then restores the placeholders in the output sentence with the extracted translation candidates.
12https://huggingface.co/bert-base-uncased 13The wmt22-comet-da model is used to calculate COMET
scores 14The entity resources used in Transformer with Dictionary, Replacement and Placeholder are obtained as is Section 3.1
- Annotation (Modrzejewski et al., *2020).* Entity type embeddings are added to the original word embeddings for the tokens of entities in the source sentence.
- Multi-task (Zhao et al., *2020a)* It improves the entity translation in NMT by multi-task learning on machine translation and knowledge reasoning.
## 5 Experimental Results
In this section, we demonstrate the effectiveness of the proposed Extract-and-Attend approach by comparing it with multiple baselines. We also conduct experiments to verify the design aspects of
"Extract" and "Attend".
## 5.1 Main Results
BLEU, COMET and entity error rates of the Extract-and-Attend approach with the baselines are shown in Table 3 and Table 4, where the proposed approach consistently performs the best on all the metrics and language pairs. From the results, it can be observed that: 1) The proposed method reduces the error rate by up to 35% and achieves a gain of up to 0.85 BLEU and 13.8 COMET compared to the standard Transformer model; 2) Compared with the annotation method (Modrzejewski et al.,
2020), which annotates the entities in the source sentence based on NER without incorporating any additional resources (e.g., dictionary), the proposed Extract-and-Attend approach takes advantage of the entity dictionary and nationality-aware transliteration pipeline, and reduces the entity error rate by up to 26% while achieving up to 0.77 points gain on BLEU and 3.0 points on COMET; 3) Compared with the replacement and placeholder (Yan et al., 2019; Li et al., 2019) methods, the Extractand-Attend approach is more robust to NER errors
(see A.3) than hard replacement and reduces the error rate by up to 16% while gaining up to 2.1 BLEU and 7.2 COMET; 4) Compared to the multitask (Zhao et al., 2020a) method, the Extract-andAttend approach explicitly provides the translation candidates when decoding, which reduces the entity error rate by up to 35% and improves BLEU
by up to 0.8 points and COMET up to 4.4 points.
We also provide the error rates for different entity types in Appendix A.2, and analyze the effect of dictionary coverage in Appendix A.4 Entity error rates calculated according to Section 4.3 may incur false negative errors, which has
| Model | En → Ru | Ru → En | En → Zh | Zh → En | | | | |
|---------------------------|-----------|-----------|-----------|-----------|-------|------|-------|------|
| BLEU | COMET | BLEU | COMET | BLEU | COMET | BLEU | COMET | |
| Transformer | 31.83 | 52.2 | 34.63 | 54.0 | 26.32 | 34.8 | 27.45 | 41.5 |
| Transformer w/ Dictionary | 31.85 | 53.6 | 34.67 | 56.1 | 26.36 | 38.1 | 27.49 | 43.2 |
| Replacement | 30.52 | 55.2 | 32.01 | 56.7 | 25.92 | 41.4 | 27.21 | 45.0 |
| Placeholder | 31.88 | 57.6 | 34.72 | 59.1 | 26.41 | 42.9 | 27.50 | 47.2 |
| Annotation | 31.91 | 59.4 | 34.84 | 60.5 | 26.44 | 45.8 | 27.73 | 48.0 |
| Multi-task | 31.88 | 57.8 | 34.76 | 60.3 | 26.38 | 45.0 | 27.64 | 47.4 |
| Extract & Attend (ours) | 32.68 | 62.2 | 35.41 | 63.5 | 26.79 | 48.6 | 27.98 | 50.1 |
Table 3: BLEU and COMET scores on WMT newstest. BLEU and COMET scores are statistically higher than baselines across all language pairs with 95% statistical significance (Koehn, 2004).
| Model | En → Ru | Ru → En | En → Zh | Zh → En |
|---------------------------|-----------|-----------|-----------|-----------|
| Transformer | 60.0 | 51.3 | 42.7 | 41.0 |
| Transformer w/ Dictionary | 59.2 | 50.4 | 42.1 | 40.6 |
| Replacement | 49.6 | 49.8 | 29.5 | 28.9 |
| Placeholder | 49.7 | 49.3 | 28.6 | 27.9 |
| Annotation | 43.2 | 44.5 | 37.4 | 30.0 |
| Multi-task | 58.9 | 50.0 | 42.4 | 40.4 |
| Extract & Attend (ours) | 42.7 | 41.6 | 27.7 | 27.5 |
Table 4: Error rates (%) on WMT newstest.
two main causes. First, as noted by Modrzejewski et al. (2020), it is common for NER models to make erroneous predictions. Second, there may be multiple correct translations for one entity, but the ones different from that in the reference sentence are regarded as errors. For example, BMA (British Medical Association) can either be copied in the target sentence, or translated into its Chinese form
"英国医学会". Therefore, we also perform human evaluation wmttest150 (see Table 5), where 150 sentence pairs with entities are randomly sampled from the En → Zh test set. Compared to automatic evaluation results in Table 4, entity error rates based on human evaluation become lower after eliminating the false negatives, while the relative performance of different models remain almost consistent. Therefore, though there are false negatives in the automatic evaluation as in Section 4.3, it is still a valid metric for evaluating entity translation.
Moreover, we observe that the Extract-and-Attend approach performs the best on all three entity types and reduces the total error rate by 32%.
## 5.2 Analysis On Extracting
To investigate the effectiveness of our transliteration pipeline, we implement a variant denoted as Extract-and-Attend (w/o Transliteration), in which we only extract translation candidates covered by the dictionary. From Table 6, we can see that the translation quality of person names is significantly
| Model | PER | ORG | LOC | Total |
|---------------------------|-------|-------|-------|---------|
| Transformer | 26.4 | 14.3 | 13.4 | 17.9 |
| Transformer w/ Dictionary | 25.8 | 13.6 | 13.4 | 16.7 |
| Replacement | 19.8 | 12.4 | 12.6 | 14.8 |
| Placeholder | 18.9 | 12.4 | 10.9 | 13.6 |
| Annotation | 17.9 | 11.4 | 10.9 | 13.3 |
| Multi-task | 21.7 | 12.4 | 12.6 | 15.5 |
| Extract & Attend | 16.0 | 11.4 | 9.2 | 12.1 |
| (ours) | | | | |
improved, reducing the error rate by 37%; transliteration is also effective for locations, reducing the error rate by 9%. Overall, the transliteration model improves BLEU by 0.33 and COMET by 4.1.
Model BLEU COMET PER LOC
| Extract & Attend | 26.79 | 48.6 | 25.6 | 31.6 |
|-----------------------------------------|---------|--------|--------|--------|
| (with Transliteration) Extract & Attend | 26.46 | 46.5 | 40.8 | 34.8 |
| (w/o Transliteration) | | | | |
Table 6: BLEU, COMET and error rates (%) for En → Zh.
Considering that different transliteration rules may be applied for different countries, we propose to incorporate nationality information during transliteration. To evaluate the effectiveness of utilizing the nationality information in the transliteration pipeline, we compare the performance of the
| Transliteration | En → Ru | Ru → En | En → Zh | Zh → En |
|-------------------|-----------|-----------|-----------|-----------|
| Nationality-aware | 79 | 85 | 95 | 97 |
| w/o Nationality | 74 | 82 | 90 | 88 |
proposed nationality-aware transliteration pipeline with the transliteration model trained on paired entities without nationality information. As shown in Table 7, adding nationality information during transliteration consistently improves transliteration quality across all language pairs, and is most helpful for Zh → En, where the transliteration accuracy is improved by 9%.
Table 7: Accuracy of transliteration (%).
| Model | BLEU | COMET | Error rate |
|------------------|--------|---------|--------------|
| Extract & Attend | 26.79 | 48.6 | 27.7 |
| (Decoder) | | | |
| Extract & Attend | 26.56 | 46.2 | 29.8 |
| (Encoder) | | | |
We also conduct experiments to evaluate the effect of attending translation candidates in the encoder compared to the decoder. Similar to Zhong and Chiang (2020), we append translation candidates to the source tokens, where the position embeddings of the translation candidates are shared with the first token of the corresponding entities in the source sentence. Relative position embeddings denoting token order within the translation candidate are also added. As shown in Table 8, adding the translation candidates to the decoder is better than adding to the encoder. Intuitively, attending to translation candidates in the encoder may incur additional burden to the encoder to handle multiple languages.
Table 8: BLEU, COMET and error rates (%) for En → Zh.
Some entities have multiple translation candidates in the entity dictionary. To study whether to provide multiple candidates for each named entity, we extract up to three candidates from the entity dictionary. To help the model distinguish different candidates, we use a separator between candidates of the same entity, which is different from the one used to separate the candidates for different entities.
Table 9 shows that adding multiple translation candidates slightly reduces the translation quality in terms of BLEU, COMET and entity error rate. Intuitively, all the retrieved translation candidates for an entity are typically correct, and using one translation candidate for each entity provides sufficient information.
Table 9: BLEU, COMET and error rates (%) for En → Zh.
## 5.4 Inference Time
Extracting translation candidates requires additional inference time, including the delays from NER and transliteration pipeline. Specifically, the average inference time for standard Transformer, Replacement, Placeholder, Annotation, Multi-task and our method are 389ms, 552ms, 470ms, 416ms, 395ms, 624ms15.
## 5.3 Analysis On Attending
| Model | BLEU | COMET | Error rate |
|-------------------------------------|--------|---------|--------------|
| Extract & Attend | 26.79 | 48.6 | 27.7 |
| (single candidate) Extract & Attend | 26.75 | 47.9 | 27.8 |
| (multiple candidates) | | | |
## 6 Conclusion
In this paper, we propose an Extract-and-Attend approach to improve the translation quality in NMT
systems. Specifically, translation candidates for entities in the source sentence are first extracted, and then attended to by the decoder via self-attention.
Experimental results demonstrate the effectiveness of the proposed approach and design aspects.
Knowledge is an important resource to enhance the entity translation in NMT, while we only take advantage of the paired entities, nationality and biography information. In the future work, it is interesting to investigate how to make better use of the knowledge, which can be obtained from knowledge graphs and large-scale pre-trained models. Besides, the proposed Extract-and-Attend approach also has some limitations. First, our method requires additional entity resources, which may be difficult to obtain for certain language pairs. With the development of multilingual entity datasets like Paranames (Sälevä and Lignos, 2022), we are optimistic such resources will be more accessible in the near future. Second, as demonstrated in Section 5.4, extracting translation candidates increases inference time. Due to space limitation, more limitations are discussed in Appendix A.6.
## References
Philip Arthur, Graham Neubig, and Satoshi Nakamura.
2016. Incorporating discrete translation lexicons into neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1557–1567, Austin, Texas. Association for Computational Linguistics.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. *CoRR*, abs/1409.0473.
Giuseppina Cortese. 1999. Cognitive processes in translation and interpreting. joseph h. danks, gregory m.
shreve, stephen b. fountain, and michael k. mcbeath
(eds.). london: Sage, 1997. pp. 294. *Applied Psycholinguistics*, 20(2):318–327.
Georgiana Dinu, Prashant Mathur, Marcello Federico, and Yaser Al-Onaizan. 2019. Training neural machine translation to apply terminology constraints. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3063–
3068, Florence, Italy. Association for Computational Linguistics.
Zi-Yi Dou and Graham Neubig. 2021. Word alignment by fine-tuning embeddings on parallel corpora. In Conference of the European Chapter of the Association for Computational Linguistics (EACL).
Miriam Exel, Bianka Buschbeck, Lauritz Brandt, and Simona Doneva. 2020. Terminology-constrained neural machine translation at SAP. In *Proceedings of* the 22nd Annual Conference of the European Association for Machine Translation, pages 271–280, Lisboa, Portugal. European Association for Machine Translation.
David Gerver. 1975. A psychological approach to simultaneous interpretation. *Meta: Translators' Journal*,
20:119–128.
Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Federmann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, et al. 2018. Achieving human parity on automatic chinese to english news translation.
arXiv preprint arXiv:1803.05567.
Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. *Advances in neural* information processing systems, 29.
Junjie Hu, Hiroaki Hayashi, Kyunghyun Cho, and Graham Neubig. 2022. DEEP: DEnoising entity pretraining for neural machine translation. In *Proceedings of the 60th Annual Meeting of the Association* for Computational Linguistics (Volume 1: Long Papers), pages 1753–1766, Dublin, Ireland. Association for Computational Linguistics.
Sarvnaz Karimi, Falk Scholer, and Andrew Turpin. 2011.
Machine transliteration survey. *ACM Computing Surveys (CSUR)*, 43(3):1–46.
Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In *Proceedings of the* 2004 Conference on Empirical Methods in Natural Language Processing, pages 388–395, Barcelona, Spain. Association for Computational Linguistics.
Soumyadeep Kundu, Sayantan Paul, and Santanu Pal.
2018. A deep learning based approach to transliteration. In Proceedings of the seventh named entities workshop, pages 79–83.
Samuel Läubli, Sheila Castilho, Graham Neubig, Rico Sennrich, Qinlan Shen, and Antonio Toral. 2020.
A set of recommendations for assessing humanmachine parity in language translation. *J. Artif. Intell.*
Res., 67:653–672.
Ngoc Tan Le, Fatiha Sadat, Lucie Menard, and Dien Dinh. 2019. Low-resource machine transliteration using recurrent neural networks. *ACM Transactions* on Asian and Low-Resource Language Information Processing (TALLIP), 18(2):1–14.
Jing Li, Aixin Sun, Jianglei Han, and Chenliang Li.
2020. A survey on deep learning for named entity recognition. IEEE Transactions on Knowledge and Data Engineering, 34(1):50–70.
Xiaoqing Li, Jinghui Yan, Jiajun Zhang, and Chengqing Zong. 2018a. Neural name translation improves neural machine translation. In *China Workshop on Machine Translation*, pages 93–100. Springer.
Xiaoqing Li, Jinghui Yan, Jiajun Zhang, and Chengqing Zong. 2019. Neural name translation improves neural machine translation. In *Machine Translation*,
pages 93–100, Singapore. Springer Singapore.
Zhongwei Li, Xuancong Wang, Ai Ti Aw, Eng Siong Chng, and Haizhou Li. 2018b. Named-entity tagging and domain adaptation for better customized translation. In *Proceedings of the Seventh Named Entities Workshop*, pages 41–46, Melbourne, Australia.
Association for Computational Linguistics.
Pierre Lison and Jörg Tiedemann. 2016. Opensubtitles2016: Extracting large parallel corpora from movie and tv subtitles. In International Conference on Language Resources and Evaluation.
Maciej Modrzejewski, Miriam Exel, Bianka Buschbeck, Thanh-Le Ha, and Alexander Waibel. 2020. Incorporating external annotation to improve named entity translation in NMT. In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation, pages 45–51, Lisboa, Portugal.
European Association for Machine Translation.
Sreyashi Nag, Mihir Kale, Varun Lakshminarasimhan, and Swapnil Singhavi. 2020. Incorporating bilingual dictionaries for low resource semi-supervised neural machine translation.
Wei Peng, Chongxuan Huang, Tianhao Li, Yun Chen, and Qun Liu. 2020. Dictionary-based data augmentation for cross-domain neural machine translation.
Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT
evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics.
Jonne Sälevä and Constantine Lignos. 2022. Paranames:
A massively multilingual entity name corpus. *arXiv* preprint arXiv:2202.14035.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96, Berlin, Germany. Association for Computational Linguistics.
Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2019. Mass: Masked sequence to sequence pretraining for language generation. In *International* Conference on Machine Learning, pages 5926–5936.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014.
Sequence to sequence learning with neural networks.
In *NIPS*.
Arata Ugawa, Akihiro Tamura, Takashi Ninomiya, Hiroya Takamura, and Manabu Okumura. 2018. Neural machine translation incorporating named entity. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3240–3250, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing* systems, 30.
Rui Wang, Xu Tan, Renqian Luo, Tao Qin, and Tie-Yan Liu. 2021. A survey on low-resource neural machine translation. *arXiv preprint arXiv:2107.04239*.
Shuo Wang, Zhixing Tan, and Yang Liu. 2022. Integrating vectorized lexical constraints for neural machine translation. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics*
(Volume 1: Long Papers), pages 7063–7073, Dublin, Ireland. Association for Computational Linguistics.
Yuguang Wang, Shanbo Cheng, Liyang Jiang, Jiajun Yang, Wei Chen, Muze Li, Lin Shi, Yanfeng Wang, and Hongtao Yang. 2017. Sogou neural machine translation systems for wmt17. In *Proceedings of the* Second Conference on Machine Translation, pages 410–415.
Shufang Xie, Yingce Xia, Lijun Wu, Yiqing Huang, Yang Fan, and Tao Qin. 2022. End-to-end entityaware neural machine translation. *Mach. Learn.*,
111(3):1181–1203.
Jinghui Yan, Jiajun Zhang, JinAn Xu, and Chengqing Zong. 2019. The impact of named entity translation for neural machine translation. In *Machine Translation*, pages 63–73, Singapore. Springer Singapore.
Jiajun Zhang and Chengqing Zong. 2016. Bridging neural machine translation and bilingual dictionaries.
Tong Zhang, Long Zhang, Wei Ye, Bo Li, Jinan Sun, Xiaoyu Zhu, Wen Zhao, and Shikun Zhang. 2021.
Point, disambiguate and copy: Incorporating bilingual dictionaries for neural machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3970–
3979, Online. Association for Computational Linguistics.
Yang Zhao, Yining Wang, Jiajun Zhang, and Chengqing Zong. 2018. Phrase table as recommendation memory for neural machine translation. In *Proceedings* of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18, pages 4609–
4615. International Joint Conferences on Artificial Intelligence Organization.
Yang Zhao, Lu Xiang, Junnan Zhu, Jiajun Zhang, Yu Zhou, and Chengqing Zong. 2020a. Knowledge graph enhanced neural machine translation via multitask learning on sub-entity granularity. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 4495–4505.
Yang Zhao, Jiajun Zhang, Yu Zhou, and Chengqing Zong. 2020b. Knowledge graphs enhanced neural machine translation. In *IJCAI*, pages 4039–4045.
Xing Jie Zhong and David Chiang. 2020. Look it up: Bilingual and monolingual dictionaries improve neural machine translation. *arXiv preprint* arXiv:2010.05997v2.
## A Appendix A.1 Case Study
We also conduct a case study on the En → Zh test set to demonstrate the capability of our model when handing multiple entities in a sentence. As shown in Table 10, the outputs of our model normally has correct alignments between the translations and the corresponding entities in the source sentence.
Besides, the baseline model has a strong tendency to copy unfamiliar entities in the source sentence, while our model can alleviate this problem and encourage the translation model to incorporate proper transliteration.
| Source | Simone , Gabby and Laurie all took the same path as Aly and Madison to make the Olympic team . |
|-----------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Reference | 西蒙、加布丽埃勒和劳瑞进入奥运代表队的途径跟阿里及麦迪逊一样。 |
| Baseline | Simone、Gabby和劳瑞进入奥运代表队的途径跟Aly及麦迪逊一样。 |
| Ours | 西蒙、加比和劳瑞进入奥运代表队的途径跟阿里及麦迪逊一样。 |
| Source | Lomachenko defends his belt against Miguel Marriaga on Saturday night at 7 on ESPN . |
| Reference | 在周六晚上7点的ESPN比赛中,洛马琴科战胜了米格尔·马里亚加,保全 了他的地位。 |
| Baseline | Lomachenko 在周六晚上7点在ESPN上为Miguel Marriaga辩护。 |
| Ours | 洛马琴科周六晚7点在ESPN对阵米格尔-玛利亚加的比赛中卫冕他的腰 带。 |
| Source | iCloud ' s main data center at Gui-An New Area will be the first data center Apple has set up in China . On completion , it will be used to store the data of Apple users in China . |
| Reference | iCloud贵安新区主数据中心也将是苹果公司在中国设立的第一个数据中 心项目,项目落成后,将用于存储中国苹果 用户的数据。 |
| Baseline | iCloud在桂安新区的主要数据中心将是苹果在中国建立的第一个数据中 心。完成后,它将用于存储中国苹果用户的数据。 |
| Ours | iCloud在贵安新区的主要数据中心将是苹果在中国建立的第一个数据中 心。完成后,它将用于存储中国苹果用户的数据。 |
## A.2 Error Rates By Entity Type
To alleviate the problem of false errors caused by NER, We aggregate across all language pairs and calculated the average error rate for each type of entity. From Table 11, it is shown that our method outperforms all baselines for PER, ORG and LOC
entities.
| Model | PER | ORG | LOC |
|---------------------------|-------|-------|-------|
| Transformer | 50.4 | 42.4 | 37.5 |
| Transformer w/ Dictionary | 49.8 | 41.7 | 37.2 |
| Replacement | 35.2 | 38.9 | 35.2 |
| Placeholder | 34.5 | 39.0 | 33.9 |
| Annotation | 35.7 | 40.2 | 34.1 |
| Multi-task | 49.2 | 41.4 | 37.6 |
| ours | 29.9 | 38.1 | 33.4 |
Table 11: Error rates (%) on WMT newstest by entity type.
## A.3 Robustness Against Ner Errors
To test the robustness against NER errors, we filter the samples in which incorrect candidates are collected, which can result from NER errors and transliteration errors. Compared to the Transformer baseline, in 32% of the cases, the extract and attend method is misguided by the incorrect candidates, while for the replacement and placeholder approaches 100% of the cases is misguided. Accordingly, our method is arguably more robust against NER errors.
## A.4 Analysis Of Dictionary Coverage
To analyze the performance of our approach on domains not well covered by the dictionary, we evaluate our approach and baselines on OpenSubtitles dataset (Lison and Tiedemann, 2016). Because there is no official test set for this dataset, we randomly sample 10K En-Zh sentence pairs. There are 3.6K PER entities, 1.1K ORG entities and 1.1K LOC entities in this test set. Compared to the dictionary coverage of 32.4% for WMT newstest, the dictionary coverage is only 15.2% for the Opensubtitles test set. The overall entity error rates are shown in Table 12. Our results show that even when the coverage of the entity dictionary is relatively low, the proposed Extract-and-Attend framework achieves consistent improvement in entity error rates compared to alternative methods.
| Model | Error Rate(%) |
|---------------------------|-----------------|
| Transformer | 29.6 |
| Transformer w/ Dictionary | 29.2 |
| Replacement | 26.8 |
| Placeholder | 26.3 |
| Annotation | 27.9 |
| Multi-task | 28.2 |
| ours | 24.9 |
uniqueness, experiments on other language pairs are still needed.
Table 12: Entity error rates (%) on OpenSubtitles test set for En → Zh.
## A.5 Comparison With Vecconstnmt
Some researchers have proposed VecConstNMT to mine and integrate lexical constraints from parallel corpora, which can potentially improve entity translation quality (Wang et al., 2022). We compare our method with VecConstNMT on En → Zh and Zh → En. For En → Zh, and the results are shown in Table 13. Possible reasons that our method outperforms their method include: (1) our method uses additional resources such as dictionaries (2) a relatively small portion of lexical constraints are related to entity translation.
| Model | En → Zh | Zh → En |
|-------------|-----------|-----------|
| VecConstNMT | 31.8 | 28.1 |
| ours | 27.7 | 27.5 |
Table 13: Error rates (%) on WMT newstest.
## A.6 Extended Discussion Of Limitations
Though errors caused by NER are alleviated by attending to the translation candidates via selfattention, the quality of the extracted translation candidates is still affected by NER accuracy and dictionary coverage, and higher quality of translation candidates normally leads to better performance. Another issue worth noting is the evaluation criterion for entity translation. As mentioned in Section 5.1, automatically calculating the error rate on entities based on NER and the reference sentence incurs false negative errors, and better criteria to evaluate the translation quality of entities are needed. What's more, in this paper we assume transliteration rules are the same for regions using the same language and assume that nationality is the same as language of origin, which may be inappropriate in some rare cases. Last but not least, considering that languages may have their own
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Left blank.
A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Left blank. |
zheng-lapata-2023-real | Real-World Compositional Generalization with Disentangled Sequence-to-Sequence Learning | https://aclanthology.org/2023.findings-acl.108 | Compositional generalization is a basic mechanism in human language learning, which current neural networks struggle with. A recently proposed Disentangled sequence-to-sequence model (Dangle) shows promising generalization capability by learning specialized encodings for each decoding step. We introduce two key modifications to this model which encourage more disentangled representations and improve its compute and memory efficiency, allowing us to tackle compositional generalization in a more realistic setting. Specifically, instead of adaptively re-encoding source keys and values at each time step, we disentangle their representations and only re-encode keys periodically, at some interval. Our new architecture leads to better generalization performance across existing tasks and datasets, and a new machine translation benchmark which we create by detecting naturally occurring compositional patterns in relation to a training set. We show this methodology better emulates real-world requirements than artificial challenges. |
## Real-World Compositional Generalization With Disentangled Sequence-To-Sequence Learning
Hao Zheng and **Mirella Lapata**
Institute for Language, Cognition and Computation School of Informatics, University of Edinburgh 10 Crichton Street, Edinburgh EH8 9AB
[email protected] [email protected]
## Abstract
Compositional generalization is a basic mechanism in human language learning, which current neural networks struggle with. A recently proposed Disent**angle**d sequence-to-sequence model (Dangle) shows promising generalization capability by learning specialized encodings for each decoding step. We introduce two key modifications to this model which encourage more disentangled representations and improve its compute and memory efficiency, allowing us to tackle compositional generalization in a more realistic setting. Specifically, instead of adaptively re-encoding source keys and values at each time step, we disentangle their representations and only re-encode keys periodically, at some interval. Our new architecture leads to better generalization performance across *existing* tasks and datasets, and a new machine translation benchmark which we create by detecting *naturally occurring* compositional patterns in relation to a training set. We show this methodology better emulates realworld requirements than artificial challenges.1
## 1 Introduction
The Transformer architecture (Vaswani et al., 2017)
and variants thereof have become ubiquitous in natural language processing. Despite widespread adoption, there is mounting evidence that Transformers as sequence transduction models struggle with *compositional generalization* (Kim and Linzen, 2020; Keysers et al., 2020; Li et al., 2021).
It is basically the ability to produce and understand a potentially infinite number of novel linguistic expressions by systematically combining known atomic components (Chomsky, 2014; Montague, 1970). Attempts to overcome this limitation have explored various ways to explicitly inject compositional bias through data augmentation (Jia and Liang, 2016; Akyürek et al., 2021; Andreas, 2020; 1Our code and dataset will be available at https://
github.com/mswellhao/Dangle.
Wang et al., 2021) or new training objectives (Conklin et al., 2021; Oren et al., 2020; Yin et al., 2021).
The majority of existing approaches have been designed with semantic parsing in mind, and as a result adopt domain- and task-specific grammars or rules which do not extend to other tasks (e.g., machine translation).
In this work we aim to improve generalization via general architectural modifications which are applicable to a wide range of tasks. Our starting point are Zheng and Lapata (2022) who unveil that one of the reasons hindering compositional generalization in Transformers relates to their representations being entangled. They introduce Dangle, a sequence-to-sequence model, which learns more Disent**angle**d representations by adaptively re-encoding (at each time step) the source input.
For each decoding step, Dangle learns specialized source encodings by conditioning on the newly decoded target which leads to better compositional generalization compared to vanilla Transformers where source encodings are shared throughout decoding. Although promising, their results are based on synthetic datasets, leaving open the question of whether Dangle is effective in real-world settings involving both complex natural language and compositional generalization.
We present two key modifications to Dangle which encourage learning *more disentangled* representations *more efficiently*. The need to perform re-encoding at each time step substantially affects Dangle's training time and memory footprint. It becomes prohibitively expensive on datasets with long target sequences, e.g., programs with 400+
tokens in datasets like SMCalFlow (Andreas et al.,
2020). To alleviate this problem, instead of adaptively re-encoding at each time step, we only reencode periodically, at some interval. Our decoder is no different from a vanilla Transformer decoder except that it just re-encodes once in a while in order to update its history information. Our second modification concerns disentangling the representations of source keys and values, based on which the encoder in Dangle (and also in Transformers)
passes source information to the decoder. Instead of computing keys and values using shared source encodings, we disassociate their representations:
we encode source *values once* and re-encode keys periodically.
We evaluate the proposed model on existing benchmarks (Andreas et al., 2020; Li et al., 2021) and a new dataset which we create to better emulate a real-world setting. We develop a new methodology for *detecting* examples representative of compositional generalization in naturally occurring text.
Given a training and test set: (a) we discard examples from the test set that contain out-of-vocabulary
(OOV) or rare words (in relation to training) to exclude novel atoms which are out of scope for compositional generalization; (b) we then measure how compositional a certain test example is with respect to the training corpus; we introduce a metric which allows us to identify a candidate pool of highly compositional examples; (c) using uncertainty estimation, we further select examples from the pool that are both compositional in terms of surface form and challenging in terms of generalization difficulty. Following these three steps, we create a *machine translation* benchmark using the IWSLT 2014 German-English dataset as our training corpus and the WMT 2014 German-English shared task as our test corpus.
Experimental results demonstrate that our new architecture achieves better generalization performance across tasks and datasets and is adept at handling real-world challenges. Machine translation experiments on a diverse corpus of 1.3M WMT examples show it is particularly effective for long-tail compositional patterns.
## 2 Background: The Dangle Model
We first describe Dangle, the Disent**angle**d Transformer model introduced in Zheng and Lapata
(2022) focusing on their encoder-decoder architecture which they show delivers better performance on complex tasks like machine translation.
Let X = [x1, x2*, ..., x*n] denote a source sequence; let fEncoder and fDecoder denote a Transformer encoder and decoder, respectively. X is first encoded into a sequence of N contextualized representations:
$$\begin{array}{r c l}{{N}}&{{=}}&{{f_{\mathrm{Encoder}}(X)}}&{{\qquad}}&{{\mathrm{(1)}}}\\ {{}}&{{}}&{{}}&{{}}\\ {{}}&{{}}&{{}}&{{1712}}\end{array}$$
which are then used to decode target tokens
[y1, y2*, ..., y*m] one by one. At the t-th decoding step, the Transformer takes yt as input, reusing the source encodings N and target memory Mt−1 which contains the history hidden states of all decoder layers corresponding to past tokens [y1, y2*, ..., y*t−1]:
$$y_{t+1},M_{t}=f_{\mathrm{Decoder}}(y_{t},M_{t-1},N)$$
$$(2)$$
This step not only generates a new token yt+1, but also updates the internal target memory Mt by concatenating Mt−1 with the newly calculated hidden states corresponding to yt.
Dangle differs from vanilla Transformers in that it concatenates the source input with the previously decoded target to construct target-dependent input for *adaptive* decoding:
$$\begin{array}{r c l}{{C_{t}}}&{{=}}&{{[x_{1},x_{2},...,x_{n},y_{1},...,y_{t}]}}\\ {{H_{t}}}&{{=}}&{{f_{\mathrm{Adaptive\_Encoder}}(C_{t})}}\end{array}$$
$$\begin{array}{c}{{(3)}}\\ {{(4)}}\end{array}$$
The adaptive encoder consists of two components.
Ctis first fed to k1 Transformer encoder layers to fuse the target information:
$${\bar{H}}_{t}=f_{\mathrm{Adaptive\_Encoder}_{1}}(C_{t})$$
$$(5)$$
(Ct) (5)
where H¯tis a sequence of contextualized representations [h¯t,1, h¯t,2, ..., h¯t,n, h¯t,n+1, ..., h¯t,n+t].
Then, the first n vectors corresponding to source tokens are extracted and fed to another k2 Transformer encoder layers for further processing:
$$H_{t}\quad=\quad f_{\mathrm{Adaptive\_Encoder_{2}}}(\bar{H}_{t}[:n])$$
$$(6)$$
(H¯t[: n]) (6)
Finally, the adaptive source encodings Httogether with the target context [y1, y2*, ..., y*t] are fed to a Transformer decoder to predict yt+1:
$$y_{t+1},M_{t}=f_{\mathrm{Decoder}}(y_{<t+1},\{\},H_{t})$$
$$\left(T\right)$$
In a departure from vanilla Transformers, Dangle does not reuse the target memory from previous steps, but instead re-computes all target-side hidden states based on new source encodings Ht.
Similarly to Transformers, Dangle accesses source information at each decoding step via encoder-decoder attention layers where the same encodings Ht are used to compute both keys Kt and values Vt:
$$\begin{array}{r l r l}{=}&{{}H_{t}W^{K}}&{}&{{}}&{{}}\\ {=}&{{}H_{t}W^{V}}&{}&{{}}&{{}}\\ {=}&{{}}&{{}\mathrm{Attention}(Q_{t},K_{t},V_{t})}&{}&{{}}\end{array}$$
$$\begin{array}{c}{{K_{t}}}\\ {{V_{t}}}\\ {{O_{t}}}\end{array}$$
where key and value projections WK and WVare parameter matrices; and Qt, Kt, Vt, and Ot are respectively query, key, value, and output matrices, at time step t.
## 3 The R-Dangle Model
In this section, we describe the proposed model, which we call R-Dangle as a shorthand for Realworld Disent**angle**d Transformer.
## 3.1 Re-Encoding At Intervals
The need to perform re-encoding (and also redecoding) at each time step substantially increases Dangle's training cost and memory footprint, so that it becomes computationally infeasible for real-world language tasks with very long target sequences (e.g., in the region of hundreds of tokens). Adaptively re-encoding at every time step essentially means separating out relevant source concepts for each prediction. However, the Transformer is largely capable of encoding source phrases and decoding corresponding target phrases
(or logical form fragments in semantic parsing),
as evidenced by its remarkable success in many machine translation and semantic parsing benchmarks (Vaswani et al., 2017; Keysers et al., 2020; Zheng and Lapata, 2021). This entails that the entanglement problem (i.e., not being able to disassociate the representations of different concepts for a sequence of predictions) does not occur very frequently. We therefore relax the strict constraint of re-encoding at every step in favor of the more flexible strategy of re-encoding at intervals.
Given source sequence X = [x1, x2*, ..., x*n], we specify P = [t1, t2*, ..., t*l](ti+1 − ti = o) in advance, i.e., a sequence of re-encoding points with interval o. Then, during decoding, when reaching a re-encoding point t(t = ti), we update source encodings Ht and target memory Mt:
$$\begin{array}{r c l}{{H_{t}}}&{{=}}&{{f_{\mathrm{Adaptive\_Encoder}}(C_{t})}}\\ {{y_{t+1},M_{t}}}&{{=}}&{{f_{\mathrm{Decoder}}(y_{<t+1},\{\},H_{t})}}\end{array}$$
where fAdaptive_Encoder denotes the adaptive encoder described in Section 2. For the next time step t(ti *< t < t*i+1), we fall back to the vanilla Transformer decoder using the source encodings Hti computed at time step ti:
$$y_{t+1},M_{t}=f_{\mathrm{Decoder}}(y_{t},M_{t-1},H_{t_{i}})$$
) (13)
Note that we always set t1 to 1 to perform adaptive encoding at the first time step.
## 3.2 Disentangling Keys And Values
During decoding, Dangle accesses source information via cross-attention (also known as encoderdecoder attention) layers where the same source encodings are used to compute both keys and values. The core design principle underlying Dangle is that learning specialized representations for different purposes will encourage the model to zero in on relevant concepts, thereby disentangling their representations. Based on the same philosophy, we assume that source keys and values encapsulate different aspects of source information, and that learning more specialized representations for them would further improve disentanglement, through the separation of the concepts involved.
A straightforward way to implement this idea is using two separately parameterized encoders to calculate two groups of source encodings (i.e., corresponding to keys and values, respectively) during re-encoding. However, in our preliminary experiments, we observed this leads to serious overfitting and performance degradation. Instead, we propose to encode values once and update keys only during adaptive encoding. We compute source *values* via the standard Transformer encoder:
$$H^{v}\quad=\quad f_{\mathrm{Encoder}}(X)$$
$$(14)$$
$$\begin{array}{l}{(15)}\\ {(16)}\end{array}$$
and adaptively re-encode source *keys* at an interval:
$$H_{t}^{k}=f_{\text{Adaptive\_Encoder}}(C_{t})\tag{15}$$ $y_{t+1},M_{t}=f_{\text{Kv\_Decoder}}(y_{<t+1},\{\},H^{v},H_{t}^{k})$ (16)
where fKV_Decoder denotes a slightly modified Transformer decoder where source keys and values in each cross-attention layer are calculated based on different source encodings:
$$\begin{array}{r c l}{{K_{t}}}&{{=}}&{{H_{t}^{k}W^{K}}}\\ {{V}}&{{=}}&{{H^{v}W^{V}}}\\ {{O_{t}}}&{{=}}&{{\mathrm{Attention}(Q_{t},K_{t},V)}}\end{array}$$
$\left(17\right)$ $\left(18\right)$ $\left(19\right)$ ...
$$\begin{array}{l}{(11)}\\ {(12)}\end{array}$$
At time step t (where ti *< t < t*i+1), we perform vanilla Transformer decoding:
$$y_{t+1},M_{t}=f_{\rm KV\_Decoder}}(y_{t},M_{t-1},H^{v},H^{k}_{t_{i}})\tag{20}$$
$$(13)$$
Note that fully sharing values could potentially cause some entanglement, however, we we did not observe this in practice. We also experimented with a variant where keys are shared and values are repeatedly re-computed but empirically observed
$\sigma_{\rm max}=\sigma_{\rm max}$.
| Selected | Examples | Compositional Degree | Uncertainty |
|------------|-------------------------------------|------------------------|---------------|
| ✗ | but what can we do about this ? | 2 / 8 = 0.25 | - |
| ✗ | please report all changes here . | 5 / 6 = 0.83 | 0.054 |
| ✔ | you have disabled your javascript ! | 5 / 6 = 0.83 | 0.274 |
it obtains significantly worse generalization performance than the value-sharing architecture described above. This indicates that entanglement is more likely to occur when sharing keys.
## 4 A Real-World Compositional Generalization Challenge
Models of compositional generalization are as good as the benchmarks they are evaluated on. A few existing benchmarks are made of artificially synthesized examples using a grammar or rules to systematically control for different types of generalization
(Lake and Baroni, 2018a; Kim and Linzen, 2020; Keysers et al., 2020; Li et al., 2021). Unfortunately, synthetic datasets lack the complexity of real natural language and may lead to simplistic modeling solutions that do not generalize to real world settings (Dankers et al., 2022). Other benchmarks (Finegan-Dollak et al., 2018; Shaw et al.,
2021) focus on naturally occurring examples but create train-test splits based on the properties of their formal meaning representations (e.g., logical forms). However, formal annotations of meaning are not readily available for tasks other than semantic parsing. Since compositional generalization is a general problem, it is desirable to define it on the basis of natural language alone rather than by means of semantic parsing and the availability of formal annotations.
It is fair to assume that a SOTA model deployed in the wild, e.g., a Transformer-based machine translation system, will be constantly presented with new test examples. Many of them could be similar to seen training instances or compositionally different but in a way that does not pose serious generalization challenges. An ideal benchmark for evaluating compositional generalization should therefore consist of phenomena that are of practical interest while challenging for SOTA models. To this end, we create ReaCT, a new REAlworld dataset for Compositional generalization in machine Translation. Our key idea is to obtain a generalization test set by *detecting* compositional patterns in relation to an existing training set from a large and diverse pool of candidates. Specifically, we use the IWSLT 2014 German → English dataset as our training corpus and the WMT 2014 German → English shared task as our test corpus
(see Section 5 for details) and detect from the pool of WMT instances those that exemplify compositional generalization with respect to IWSLT. This procedure identifies naturally occurring compositional patterns which we hope better represent practical generalization requirements than artificially constructed challenges.
In the following, we describe how we identify examples that demand compositional generalization. While we create our new benchmark with machine translation in mind, our methodology is general and applicable to other settings such as semantic parsing. For instance, we could take a relatively small set of annotated user queries as our training set and create a generalization challenge from a large pool of unlabeled user queries.
Filtering Out-of-Vocabulary Atoms Compositional generalization involves generalizing to new compositions of *known* atoms. The WMT corpus includes many new semantic and syntactic atoms that are not attested in IWSLT. A large number of these are out-of-vocabulary (OOV) words which are by definition unknown and out of scope for compositional generalization. We thus discard WMT
examples with words occurring less than 3 times in the IWSLT training set which gives us approximately a pool of 1.3M examples. For simplicity, we do not consider any other types of new atoms such as unseen word senses or syntactic patterns.
Measuring Compositionality How to define the notion of compositional generalization is a central question in creating a benchmark. Previous definitions have mostly centered around linguistic notions such as constituent or context-free grammars (Kim and Linzen, 2020; Keysers et al., 2020; Li et al., 2021). These notions are appropriate for synthetic examples or logical forms as their underlying hierarchical structures are well-defined and can be obtained with ease.
Since we do not wish to synthesize artificial examples but rather detect them in real-world utterances, relying on the notion of constituent might be problematic. Sentences in the wild are often noisy and ungrammatical and it is far from trivial to analyze their syntactic structure so as to reliably identify new compositions of known constituents.
We overcome this problem by devising a metric based on n-gram matching which assesses how compositional a certain example is with respect to a training corpus.
Specifically, we first create a lookup dictionary of atomic units by extracting all n-grams that occur more than 3 times in the training corpus. Given a candidate sentence, we search the dictionary for the minimum number of n-grams that can be composed to form the sentence. For example, for sentence "x1x2x3x4x5" and dictionary (x1, x2, x3x4, x5, x1x2, x3x4x5,), the minimum set of such n-grams is (x1x2, x3x4x5). A
sentence's *compositional degree* with respect to the training corpus is defined as the ratio of the minimum number of n-grams to its length
(e.g., 2/5 = 0.4 for the above example). We select the top 60,000 non-overlapping examples with the highest compositional degree as our *candidate* pool. As we discuss in Section 6, compositional degree further allows us to examine at a finer level of granularity how model performance changes as test examples become increasingly compositional.
Estimating Uncertainty Examples with the same compositional degree could pose more or less difficulty to neural sequence models (see last two utterances in Table 1). Ideally, we would like to identify instances that are compositional in terms of surface form and hard in terms of the underlying generalization (see third example in Table 1).
We detect such examples using a metric based on uncertainty estimation and orthogonal to compositional degree. We quantify predictive uncertainty based on model ensembles, a method which has been successfully applied to detecting misclassifications and out-of-distribution examples (Lakshminarayanan et al., 2017; Malinin and Gales, 2021).
We follow the uncertainty estimation framework introduced in Malinin and Gales (2021) for se-
| Comp | Word n-gram | POS n-gram | | | | |
|-----------|-------------------|------------------------|-----------------|-------|----|----|
| Dataset | # examples Degree | 2 | 3 | 2 | 3 | |
| COGS | 21,000 | 0.392 | 6,097 24,275 12 | 27 | | |
| CoGnition | 10,800 | 0.502 | 1,865 13,344 | 1 | 38 | |
| CFQ | 11,968 | 0.268 | 168 | 2,736 | 8 | 30 |
| ReaCT | 3,000 | 0.811 19,315 33,652 76 | 638 | | | |
quence prediction tasks. Specifically, we train 10 Transformer models with different random initializations on IWSLT (our training corpus), and run inference over the candidate pool created in the previous stage; for each example in this pool, we measure the disagreement between ensemble models using *reverse mutual information*, a novel measure (Malinin, 2019; Malinin and Gales, 2021)
which quantifies *knowledge uncertainty*, i.e., a model's uncertainty in its prediction due to lack of understanding of the data rather than any intrinsic uncertainty associated with the task (e.g., a word could have multiple correct translations). We use the token-level approximation of knowledge uncertainty.
We empirically find that the most uncertain examples are extremely noisy and barely legible
(e.g., they include abbreviations, typos, and nonstandard spelling). We therefore throw away the top 2,000 uncertain examples and randomly sample 3,000 instances from the next 18,000 most uncertain examples in an attempt to create a generalization test set with diverse language patterns and different levels of uncertainty.
Analysis We analyze the compositional nature of ReaCT by comparing it to several popular benchmarks. Specifically, for all datasets, we count the number of novel test set n-grams that have not been seen in the training. We extract n-grams over words and parts of speech (POS); word-based n-grams represent more superficial lexical composition while n-grams based on POS tags reflect more of syntactic composition.
As shown in Table 2, despite being considerably smaller compared to other benchmarks (see \# examples column), ReaCT presents substantially more diverse patterns in terms of lexical and syntactic composition. It displays a much bigger number of novel word n-grams, which is perhaps not surprising. Being a real-world dataset, ReaCT has a larger vocabulary and more linguistic variation. While
Train Test
![5_image_0.png](5_image_0.png)
| ( PRP RB IN NN . ) | |
|--------------------------------------------------------------------------------------|---------------------------------------------------------------------------|
| - you see , this is what india is today . the ground reality is based on | ( DT |
| NN NN VBZ VBN IN ) a cyclical world view . | the account data is provided to you directly via e-mail . |
| - | a couple of hours ( DT NN IN NNS ) later , the sun will shine on the next |
| magnifying glass . - but this could also be used for good . ( MD RB VB VBN IN NN . ) | both setting of tasks must successfully be mastered under supervision . |
| - | the national science foundation , other countries ( JJ NN NN , JJ NN ) |
| are very interested in doing this - no , they are full of misery . ( VBP JJ IN NN .) | its warm water temperature , small depth are convenient for bathing . |
Table 3: Novel syntactic compositions in ReaCT test set (syntactic atoms of same type are color coded). POS-tag sequences for these atoms are shown in parentheses (PRP:pronoun, RB:adverb, IN: preposition, NN/S:noun singular/plural, DT: determiner, JJ: adjective, MD:modal, VBZ: verb, 3rd person singular, present tense, VBP: verb, present tense, other than third person singular, VBN: verb past participle.)
our dataset creation process does not explicitly target novel syntactic patterns (approximated by POS
n-grams), ReaCT still includes substantially more compared to other benchmarks. This suggests that it captures the complexity of real-world compositional generalization to a greater extent than what is achieved when examples are synthesized artificially. We show examples with novel POS n-gram compositions in Table 3).
## 5 Experimental Setup
Datasets We evaluated R-Dangle on two machine translation datasets and one semantic parsing benchmark which we selected to maximally reflect natural language variations and real-world generalization challenges. These include: (a) **ReaCT**,
the machine translation benchmark developed in this paper; we used the IWSLT 2014 De→En test set as an in-domain test set and created an outof-distribution test set from the WMT'14 De→En training corpus; (b) **CoGnition** (Li et al., 2021) is a semi-natural machine translation benchmark focusing on English-Chinese sentence pairs; source sentences were taken from the Story Cloze Test and ROCStories Corpora (Mostafazadeh et al., 2016, 2017), and target sentences were constructed by post-editing the output of a machine translation engine; (c) **SMCalFlow-CS** (Andreas et al., 2020)
is a semantic parsing dataset for task-oriented dialogue, featuring real-world human-generated utterances about calendar management; following previous work (Yin et al., 2021; Qiu et al., 2022), we report experiments on the compositional skills split, considering a few-shot learning scenario (with 6, 16, and 32 training examples). See Appendix A for more details on these datasets.
Models On machine translation, our experiments evaluated two variants of R-Dangle depending on whether keys and values are shared (R-Dangleshr)
or separate (R-Danglesep). We implemented all machine translation models with fairseq (Ott et al., 2019). We compared R-Dangle against a vanilla Transformer (Vaswani et al., 2017) and the original Dangle model (Zheng and Lapata, 2022) which used the popular fairseq configuration transformer_iwslt_de_en. We also implemented bigger variants of these models using 12 encoder layers and 12 decoder layers which empirically led to better performance.
R-Dangleshr and R-Danglesep also use a 12-layer decoder. We tuned the number of layers of the adaptive components (k1 = 2 and k2 = 10) on the development set. For R-Danglesep, we adopted a 10-layer value encoder and a 10-layer key encoder (k1 = 2 and k2 = 8), with the top 8 layers in the two encoders being shared. This configuration produced 12 differently parametrized transformer encoder layers, maintaining identical model size to comparison systems.
Previous work (Qiu et al., 2022) has shown the advantage of pre-trained models on the SMCalFlow-CS dataset. For our semantic parsing experiments, we therefore built R-Dangle on top of BART-large (Lewis et al., 2020). We only report results with R-Dangleshr as the R-Danglesep architecture is not compatible with BART. We again set k1 = 2 and k2 = 10. We provide more detail on model configurations in Appendix B.
## 6 Results
Disentangling Keys and Values Improves Generalization Table 4 reports the BLEU score (Papineni et al., 2002) achieved by the two R-Dangle
| CoGnition | 1 | 2 | 4 | 8 |
|-------------|------|------|------|------|
| R-Dangleshr | 62.5 | 62.3 | 62.3 | 61.9 |
| R-Danglesep | 63.4 | 63.1 | 62.3 | 62.1 |
| ReaCT | 1 | 2 | 4 | 8 |
| R-Dangleshr | 11.8 | 11.9 | 11.8 | 11.6 |
| R-Danglesep | 12.3 | 12.2 | 11.9 | 11.7 |
variants on ReaCT and CoGnition, across different re-encoding intervals. R-Danglesep is consistently better than R-Dangleshr which confirms that representing keys and values separately is beneficial.
We also observe that smaller intervals lead to better performance (we discuss this further later).
Table 5 compares R-Danglesep (with interval 1)
against baseline models. In addition to BLEU, we report novel compound translation error rate, a metric introduced in Li et al. (2021) to quantify the extent to which novel compounds are mistranslated.
We compute error rate over instances and an aggregate score over contexts. R-Danglesep delivers compositional generalization gains over Dangle and vanilla Transformer models (both in terms of BLEU and compound translation error rate), even though their performance improves when adopting a larger 12-layer network. R-Danglesep achieves a new state of the art on CoGnition (a gain of 0.9 BLEU points over Dangle and 1.5 BLEU points over the Transformer baseline). R-Danglesep fares similarly on ReaCT; it is significantly superior to the Transformer model by 0.9 BLEU points, and Dangle by 0.5 BLEU points. Moreover, improvements on compositional generalisation are not at the expense of in-domain performance (R-Dangle obtains similar performance to the Transformer and Dangle on the IWSLT2014 in-domain test set).
## R-Dangle Can Handle Long-Tail Compositional
Patterns Better We next examine model performance on real-world examples with diverse language and different levels of composition. Specifically, we train R-Danglesep (interval=1) and a Transformer on the IWSLT14 corpus and test on the pool of 1.3M WMT examples obtained after filtering OOV words. Figure 1a plots the difference in BLEU between the two models against compositional degree. This fine-grained evaluation reveals that they perform similarly on the major-
![6_image_0.png](6_image_0.png)
ity of less compositional examples (BLEU difference is around zero), however, the performance gap becomes larger with more compositional examples (higher difference means higher BLEU for R-Danglesep). This indicates that R-Dangle is particularly effective for handling long-tail compositional patterns.
## R-Dangle **Boosts The Performance Of Pretrained**
Models The "pre-train and fine-tune" paradigm
(Peters et al., 2018; Devlin et al., 2019; Raffel et al.,
2020; Lewis et al., 2020) has been widely adopted in NLP, and semantic parsing is no exception (Shin et al., 2021; Qiu et al., 2022). We further investigate R-Dangle's performance when combined with a pre-trained model on the SMCalFlow-CS dataset
(across the three cross-domain settings). Table 6 shows that R-Dangleshr boosts the performance of BART-large, which suggests that generalization improvements brought by R-Dangle are complementary to generalization benefits afforded by largescale pre-training (see Zheng and Lapata 2022 for a similar conclusion). The proposed model effectively marries pre-training with disentangled representation learning to achieve better generalization.
In Table 6, we also compare R-Dangle with other top-performing models on SMCalFlow-CS. These include: (a) a sequence-to-sequence model with a BERT encoder and an LSTM decoder using a copy mechanism (BERT2SEQ; Yin et al. 2021); (b) the coarse-to-fine (C2F) model of Dong and Lapata
(2018) which uses a BERT encoder and a structured decoder that factorizes the generation of a program into sketch and value predictions; (c) and combi-
| Models | CoGnition | ReaCT | | | | |
|--------------------------------------|-------------|------------|----------|----------|----------|------|
| ↓ ErrRInst | ↓ ErrRAggr | ↑ ind-test | ↑cg-test | ↑IWSLT14 | ↑cg-test | |
| Transformer (Zheng and Lapata, 2022) | 30.5 | 63.8 | 69.2 | 59.4 | 34.4 | 9.5 |
| Dangle (Zheng and Lapata, 2022) | 22.8 | 50.6 | 69.1 | 60.6 | - | - |
| Transformer (our implementation) | 23.4 | 53.7 | 70.8 | 61.9 | 36.0 | 11.4 |
| Dangle (our implementation) | 19.7 | 47.0 | 70.6 | 62.5 | 36.1 | 11.8 |
| R-Danglesep (interval = 1 ) | 16.0 | 42.1 | 70.7 | 63.4 | 36.0 | 12.3 |
System 8-C 16-C 32-C
BERT2SEQ - 33.6 53.5 BERT2SEQ+SS - 46.8 61.7 C2F - 40.6 54.6
C2F+SS - 47.4 61.9
T5 34.7 44.7 59.0
T5+CSL 51.6 61.4 70.4 BART 32.1 47.2 61.9
+R-Dangleshr (interval = 6) 36.3 50.6 64.1
nations of these two models with span-supervised attention (+SS; Yin et al. 2021). We also include a T5 model and variant thereof trained on additional data using a model called Compositional Structure Learner (CSL) to generate examples for data augmentation (T5+CSL; Qiu et al. 2022). R-Dangle with BART performs best among models that do not use data augmentation across compositional settings. Note that our proposal is orthogonal to CSL
and could also benefit from data augmentation.
Larger Re-encoding Intervals Reduce Training Cost The results in Table 4 indicate that reencoding correlates with R-Dangle's generalization ability, at least for machine translation. Both model variants experience a drop in BLEU points when increasing the re-encoding interval to 8. We hypothesize that this sensitivity to interval length is task-related; target sequences in machine translation are relatively short and representative of real language, whereas in SMCalFlow-CS, the average length of target sequences (in formal language)
is 99.5 and the maximum length is 411. It is computationally infeasible to train R-Dangle with small intervals on this dataset, however, larger intervals still produce significant performance gains.
Figure 1b shows how accuracy and training time vary with interval length on SMCalFlow-CS with the 16-C setting. Larger intervals substantially reduce training cost with an optimal speed-accuracy trade off in between 10 and 50. For instance, interval 40 yields a 4x speed-up compared to interval 10 while achieving 50.3% accuracy. Finding a tradeoff between generalization and efficiency is an open research problem which we leave to future work.
## 7 Related Work
The realization that neural sequence-to-sequence models struggle with compositional generalization has led to numerous research efforts aiming to precisely define this problem and explore possible solutions to it. A line of research focuses on benchmarks which capture different aspects of compositional generalization. Finegan-Dollak et al. (2018)
repurpose existing semantic parsing benchmarks for compositional generalization by creating more challenging splits based on logical form patterns.
In SCAN (Lake and Baroni, 2018b) compositional generalization is represented by unseen combinations of seen actions (e.g., JUMP LTURN). Keysers et al. (2020) define compositional generalization as generalizing to examples with maximum compound divergence (e.g., combinations of entities and relations) while guaranteeing similar atom distribution to the training set. Kim and Linzen (2020)
design five linguistic types of compositional generalization such as generalizing phrase nesting to unseen depths. In ReaCT, our definition of compositional generalization is dependent on the data distribution of the candidate corpus, which determines what compositional patterns are of practical interest and how frequently they occur.
Another line of work focuses on modeling solutions, mostly ways to explicitly instil compositional bias into neural models. This can be achieved by adopting a more conventional grammar-based approach (Herzig and Berant, 2021) or incorporating a lexicon or lexicon-style alignments into sequence models (Akyurek and Andreas, 2021; Zheng and Lapata, 2021). Other work employs heuristics, grammars, and generative models to synthesize examples for data augmentation (Jia and Liang, 2016; Akyürek et al., 2021; Andreas, 2020; Wang et al., 2021; Qiu et al., 2022) or modifies standard training objectives with new supervision signals like attention supervision or meta-learning (Oren et al., 2020; Conklin et al., 2021; Yin et al., 2021).
Our work builds on Dangle (Zheng and Lapata, 2022), a disentangled sequence-to-sequence model, which tries to tackle compositional generalization with architectural innovations. While Dangle is conceptually general, our proposal is tailored to the Transformer and features two key modifications to encourage more disentangled representations and better computational efficiency.
## 8 Conclusions
In this paper we focused on two issues related to compositional generalization. Firstly, we improve upon Dangle, an existing sequence-to-sequence architecture that generalizes to unseen compositions by learning specialized encodings for each decoding step. We show that re-encoding keys periodically, at some interval, improves both efficiency and accuracy. Secondly, we propose a methodology for identifying compositional patterns in real-world data and create a new dataset which better represents practical generalization requirements. Experimental results show that our modifications improve generalization across tasks, metrics, and datasets and our new benchmark provides a challenging testbed for evaluating new modeling efforts.
## Limitations
Our machine translation experiments revealed that optimal generalization performance is obtained with small interval values. However, R-Dangle with small intervals still runs much slower than an equivalent Transformer model. Despite our modifications, large R-Dangle models with small intervals on large datasets remain computationally expensive. In this paper, we only explored a simple periodic re-encoding strategy. However, more complex and flexible ways of re-encoding could be used to further improve computational efficiency.
For instance, we could adopt a dynamic strategy which *learns* when re-encoding is necessary.
## Acknowledgments
We thank our anonymous reviewers for their feedback. We also wish to thank Tom Hosking and Biao Zhang for helpful discussions. Finally, we gratefully acknowledge the support of the UK Engineering and Physical Sciences Research Council
(grant EP/W002876/1).
## References
Ekin Akyürek, Afra Feyza Akyurek, and Jacob Andreas.
2021. Learning to recombine and resample data for compositional generalization. In *Proceedings of the* 9th International Conference on Learning Representations, Online.
Ekin Akyurek and Jacob Andreas. 2021. Lexicon learning for few shot sequence modeling. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 4934–4946, Online.
Association for Computational Linguistics.
Jacob Andreas. 2020. Good-enough compositional data augmentation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7556–7566, Online. Association for Computational Linguistics.
Jacob Andreas, John Bufe, David Burkett, Charles Chen, Josh Clausman, Jean Crawford, Kate Crim, Jordan DeLoach, Leah Dorner, Jason Eisner, Hao Fang, Alan Guo, David Hall, Kristin Hayes, Kellie Hill, Diana Ho, Wendy Iwaszuk, Smriti Jha, Dan Klein, Jayant Krishnamurthy, Theo Lanman, Percy Liang, Christopher H. Lin, Ilya Lintsbakh, Andy McGovern, Aleksandr Nisnevich, Adam Pauls, Dmitrij Petters, Brent Read, Dan Roth, Subhro Roy, Jesse Rusak, Beth Short, Div Slomin, Ben Snyder, Stephon Striplin, Yu Su, Zachary Tellman, Sam Thomson, Andrei Vorobev, Izabela Witoszko, Jason Wolfe, Abby Wray, Yuchen Zhang, and Alexander Zotov. 2020. TaskOriented Dialogue as Dataflow Synthesis. *Transactions of the Association for Computational Linguistics*, 8:556–571.
Noam Chomsky. 2014. *Aspects of the Theory of Syntax*,
volume 11. MIT press.
Henry Conklin, Bailin Wang, Kenny Smith, and Ivan Titov. 2021. Meta-learning to compositionally generalize. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics*
and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 3322–3335, Online. Association for Computational Linguistics.
Verna Dankers, Elia Bruni, and Dieuwke Hupkes. 2022.
The paradox of the compositionality of natural language: A neural machine translation case study. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 4154–4175, Dublin, Ireland. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Li Dong and Mirella Lapata. 2018. Coarse-to-fine decoding for neural semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 731–742, Melbourne, Australia. Association for Computational Linguistics.
Catherine Finegan-Dollak, Jonathan K. Kummerfeld, Li Zhang, Karthik Ramanathan, Sesh Sadasivam, Rui Zhang, and Dragomir Radev. 2018. Improving textto-SQL evaluation methodology. In *Proceedings* of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 351–360, Melbourne, Australia. Association for Computational Linguistics.
Jonathan Herzig and Jonathan Berant. 2021. Spanbased semantic parsing for compositional generalization. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 908–921, Online. Association for Computational Linguistics.
Zhiheng Huang, Davis Liang, Peng Xu, and Bing Xiang. 2020. Improve transformer models with better relative position embeddings. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 3327–3335, Online. Association for Computational Linguistics.
Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12–22, Berlin, Germany. Association for Computational Linguistics.
Daniel Keysers, Nathanael Schärli, Nathan Scales, Hylke Buisman, Daniel Furrer, Sergii Kashubin, Nikola Momchev, Danila Sinopalnikov, Lukasz
Stafiniak, Tibor Tihon, Dmitry Tsarkov, Xiao Wang, Marc van Zee, and Olivier Bousquet. 2020. Measuring compositional generalization: A comprehensive method on realistic data. In *Proceedings of the 8th International Conference on Learning Representations*,
Addis Ababa, Ethiopia.
Najoung Kim and Tal Linzen. 2020. COGS: A compositional generalization challenge based on semantic interpretation. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 9087–9105, Online. Association for Computational Linguistics.
Brenden M. Lake and Marco Baroni. 2018a. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks.
In *Proceedings of the 35th International Conference* on Machine Learning, volume 80 of *Proceedings* of Machine Learning Research, pages 2879–2888.
PMLR.
Brenden M. Lake and Marco Baroni. 2018b. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks.
In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, volume 80 of Proceedings of Machine Learning Research, pages 2879–2888, Stockholm, Sweden. PMLR.
Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. 2017. Simple and scalable predictive uncertainty estimation using deep ensembles.
In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Yafu Li, Yongjing Yin, Yulong Chen, and Yue Zhang.
2021. On compositional generalization of neural machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4767–4780, Online. Association for Computational Linguistics.
Andrey Malinin. 2019. Uncertainty Estimation in Deep Learning with Application to Spoken Language Assessment. Ph.D. thesis, University of Cambridge.
Andrey Malinin and Mark Gales. 2021. Uncertainty estimation in autoregressive structured prediction. In Proceedings of the 9th International Conference on Learning Representations, Online.
Richard Montague. 1970. Universal grammar. *Theoria*,
36(3):373–398.
Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and cloze evaluation for deeper understanding of commonsense stories. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 839–849, San Diego, California. Association for Computational Linguistics.
Nasrin Mostafazadeh, Michael Roth, Annie Louis, Nathanael Chambers, and James Allen. 2017. LSDSem 2017 shared task: The story cloze test. In Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics, pages 46–51, Valencia, Spain. Association for Computational Linguistics.
Inbar Oren, Jonathan Herzig, Nitish Gupta, Matt Gardner, and Jonathan Berant. 2020. Improving compositional generalization in semantic parsing. In *Findings of the Association for Computational Linguistics:*
EMNLP 2020, pages 2482–2495, Online. Association for Computational Linguistics.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)*,
pages 48–53, Minneapolis, Minnesota. Association for Computational Linguistics.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In *Proceedings of the 2018 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics.
Linlu Qiu, Peter Shaw, Panupong Pasupat, Pawel Nowak, Tal Linzen, Fei Sha, and Kristina Toutanova.
2022. Improving compositional generalization with latent structure and data augmentation. In *Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 4341–4362, Seattle, United States. Association for Computational Linguistics.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*,
21(140):1–67.
Peter Shaw, Ming-Wei Chang, Panupong Pasupat, and Kristina Toutanova. 2021. Compositional generalization and natural language variation: Can a semantic parsing approach handle both? In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 922–938, Online. Association for Computational Linguistics.
Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018.
Self-attention with relative position representations.
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464–468, New Orleans, Louisiana. Association for Computational Linguistics.
Richard Shin, Christopher Lin, Sam Thomson, Charles Chen, Subhro Roy, Emmanouil Antonios Platanios, Adam Pauls, Dan Klein, Jason Eisner, and Benjamin Van Durme. 2021. Constrained language models yield few-shot semantic parsers. In *Proceedings of* the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7699–7715, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc.
Bailin Wang, Wenpeng Yin, Xi Victoria Lin, and Caiming Xiong. 2021. Learning to synthesize data for semantic parsing. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 2760–2766, Online. Association for Computational Linguistics.
Pengcheng Yin, Hao Fang, Graham Neubig, Adam Pauls, Emmanouil Antonios Platanios, Yu Su, Sam Thomson, and Jacob Andreas. 2021. Compositional generalization for neural semantic parsing via spanlevel supervised attention. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2810–2823, Online.
Association for Computational Linguistics.
Hao Zheng and Mirella Lapata. 2021. Compositional generalization via semantic tagging. In Findings of the Association for Computational Linguistics:
EMNLP 2021, pages 1022–1032, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Hao Zheng and Mirella Lapata. 2022. Disentangled sequence to sequence learning for compositional generalization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics
(Volume 1: Long Papers), pages 4256–4268, Dublin, Ireland. Association for Computational Linguistics.
## A Dataset Details
We evaluated our model on two machine translation datasets, and one semantic parsing benchmark which we selected to maximally reflect natural language variations and real-world generalization challenges. We describe these in detail below.
ReaCT is the real-world machine translation benchmark developed in this paper for compositional generalization. The IWSLT 2014 De→En dataset consists of approximately 170K sequence pairs. We used the fairseq script prepare-iwslt14.sh to randomly sample approximately 4% of this dataset as validation set and kept the rest as training set. Following standard practice, we created an in-domain test set, the concatenation of files dev2010, dev2012, tst2010, tst2011, and tst2012. We created an outof-distribution test sets from the WMT'14 De→En training corpus following the uncertainty selection method based on sequences.
CoGnition is another machine translation benchmark targeting compositional generalization (Li et al., 2021). It also contains a synthetic test set to quantify and analyze compositional generalization of neural MT models. This test set was constructed by embedding synthesized novel compounds into training sentence templates. Each compound was combined with 5 different sentence templates, so that every compound can be evaluated under 5 different contexts. A major difference between RE-ACT and CoGnition is the fact that test sentences for the latter are not naturally occurring. Despite being somewhat artificial, CoGnition overall constitutes a realistic benchmark which can help distinguish subtle model differences compared to purely synthetic benchmarks. For example, Zheng and Lapata (2022) showed that their encoder-only Dangle variant performed badly on this dataset in spite of impressive performance on synthetic semantic parsing benchmarks (Kim and Linzen, 2020; Keysers et al., 2020).
SMCalFlow-CS (Andreas et al., 2020) is a largescale semantic parsing dataset for task-oriented dialogue, featuring real-world human-generated utterances about calendar management. Yin et al.
(2021) proposed a compositional skills split of SMCalFlow (SMCalFlow-CS) that contains singleturn sentences from one of two domains related to creating calendar events (e.g., *Set up a meeting* with Adam) or querying an org chart (e.g., Who are in Adam's team? ), paired with LISP programs.
The training set S consists of samples from single domains while the test set C contains compositions thereof (e.g., create a meeting with Adam and his team). Since zero-shot compositional generalization is highly non-trivial due to novel language patterns and program structures, we follow previous work (Yin et al., 2021; Qiu et al., 2022) and consider a few-shot learning scenario, where a small number of cross-domain examples are included in the training set. We report experiments with 6, 16, and 32 examples.
## B Implementation Details
Machine Translation Models We implemented all translation models with fairseq (Ott et al., 2019).
Following previous work (Li et al., 2021; Zheng and Lapata, 2022), we compared with the baseline machine translation models Dangle and Transformer using the popular fairseq configuration transformer_iwslt_de_en. We also implemented a bigger variant of these models using a new configuration, which empirically obtained better performance. We used 12 encoder layers and 12 decoder layers. We set the dropout to 0.3 for attention weights and 0.4 after activations in the feedforward network. We also used pre-normalization
(i.e., we added layer normalization before each block) to ease optimization. Following Zheng and Lapata (2022), we used relative position embeddings (Shaw et al., 2018; Huang et al., 2020)
which have demonstrated better generalization performance.
Hyperparameters for R-Dangle were tuned on the respective validation sets of CoGnition and ReaCT. Both R-Dangleshr and R-Danglesep used a 12-layer decoder. For R-Dangleshr, we tuned the number of layers of the two adaptive components k1 and k2, and set k1 and k2 to 2 and 10, respectively. For R-Danglesep, we shared some layers of parameters between the value encoder and the adaptive key decoder and experimented with different sharing strategies. Finally, we adopted a 10-layer value encoder and a 10-layer key encoder (k1 = 2 and k2 = 8). The top 8 layers in the two encoders were shared. This configuration produced 12 differently parametrized transformer encoder layers, thus maintaining identical model size to the baseline.
Semantic Parsing Models Qiu et al. (2022)
showed the advantage of pre-trained sequence-tosequence models on SMCalFlow-CS. We therefore built R-Dangle on top of BART-large (Lewis et al.,
2020), which is well supported by fairseq. We used BART's encoder and decoder to instantiate the adaptive encoder and decoder in our model. For compatibility, we only employ the R-Dangleshr architecture. We also set k1 and k2 to 2 and 10, respectively.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
the limitations section
✓ A2. Did you discuss any potential risks of your work?
the limitations section
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** The Section 4 And 5
✓ B1. Did you cite the creators of artifacts you used?
The section 5
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
We were unable to find the license for the dataset we used
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We reuse existing datasets to create our benchmark.
✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. the section 4
## C ✓ **Did You Run Computational Experiments?** The Section 5 And 6
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
the section 6 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? the section 5 and appedix
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
the section 6
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)? the section 5 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
martinez-lorenzo-etal-2023-cross | Cross-lingual {AMR} Aligner: Paying Attention to Cross-Attention | https://aclanthology.org/2023.findings-acl.109 | This paper introduces a novel aligner for Abstract Meaning Representation (AMR) graphs that can scale cross-lingually, and is thus capable of aligning units and spans in sentences of different languages. Our approach leverages modern Transformer-based parsers, which inherently encode alignment information in their cross-attention weights, allowing us to extract this information during parsing. This eliminates the need for English-specific rules or the Expectation Maximization (EM) algorithm that have been used in previous approaches. In addition, we propose a guided supervised method using alignment to further enhance the performance of our aligner. We achieve state-of-the-art results in the benchmarks for AMR alignment and demonstrate our aligner{'}s ability to obtain them across multiple languages. Our code will be available at [\url{https://www.github.com/babelscape/AMR-alignment}](\url{https://www.github.com/babelscape/AMR-alignment}). | # Cross-Lingual Amr Aligner: Paying Attention To Cross-Attention
Abelardo Carlos Martínez Lorenzo1,2∗ **Pere-Lluís Huguet Cabot**1,2∗
Roberto Navigli2 1 Babelscape, Italy 2 Sapienza NLP Group, Sapienza University of Rome
{martinez,huguetcabot}@babelscape.com [email protected]
## Abstract
This paper introduces a novel aligner for Abstract Meaning Representation (AMR) graphs that can scale cross-lingually, and is thus capable of aligning units and spans in sentences of different languages. Our approach leverages modern Transformer-based parsers, which inherently encode alignment information in their cross-attention weights, allowing us to extract this information during parsing. This eliminates the need for English-specific rules or the Expectation Maximization (EM) algorithm that have been used in previous approaches. In addition, we propose a guided supervised method using alignment to further enhance the performance of our aligner. We achieve state-of-the-art results in the benchmarks for AMR alignment and demonstrate our aligner's ability to obtain them across multiple languages. Our code will be available at github.com/Babelscape/AMR-alignment.
## 1 Introduction
At the core of Natural Language Understanding lies the task of Semantic Parsing, aimed at translating natural language text into machine-interpretable representations. One of the most popular semantic formalisms is the Abstract Meaning Representation
(Banarescu et al., 2013, AMR), which embeds the semantics of a sentence in a directed acyclic graph, where concepts are represented by nodes, such as time, semantic relations between concepts by edges, such as *:beneficiary*, and the co-references by reentrant nodes, such as r representing *rose*. In crosslingual AMR, the English AMR graph represents the sentence in different languages (see Figure 1).
To date, AMR has been widely used in Machine Translation (Song et al., 2019), Question Answering (Lim et al., 2020; Kapanipathi et al., 2021),
Human-Robot Interaction (Bonial et al., 2020),
Text Summarization (Hardy and Vlachos, 2018;
∗ Equal contributions.
Liao et al., 2018) and Information Extraction (Rao et al., 2017), among other areas.
The alignment between spans in text and semantic units in AMR graphs is an essential requirement for a variety of purposes, including training AMR parsers (Zhou et al., 2021), cross-lingual AMR parsing (Blloshmi et al., 2020), downstream task application (Song et al., 2019), or the creation of new semantic parsing formalisms (Navigli et al., 2022; Martínez Lorenzo et al., 2022).
Despite the emergence of various alignment generation approaches, such as rule-based methods (Liu et al., 2018) and statistical strategies utilizing Expectation Maximization (EM) (Pourdamghani et al.,
2014; Blodgett and Schneider, 2021), these methods rely heavily on English-specific rules, making them incompatible with cross-lingual alignment. Furthermore, even though several attempts extend the alignment to non-English sentences and graphs (Damonte and Cohen, 2018; Uhrig et al.,
2021), these efforts are inherently monolingual and therefore lack the connection to the richer AMR
graph bank available in English, which can be exploited as a source of interlingual representations.
On the other hand, current state-of-the-art AMR
parsers are auto-regressive neural models (Bevilacqua et al., 2021; Bai et al., 2022) that do not generate alignment when parsing the sentence to produce the graph. Therefore, to obtain both, one needs to i) predict the graph and then ii) generate the alignment using an aligner system that is based on language-specific rules.
This paper presents the first AMR aligner that can scale cross-lingually by leveraging the implicit information acquired in Transformer-based parsers (Bai et al., 2022). We propose an approach for extracting alignment information from crossattention, and a guided supervised method to enhance the performance of our aligner. We eliminate the need for language-specific rules and enable simultaneous generation of the AMR graph and
![1_image_0.png](1_image_0.png)
alignment. Our approach is efficient and robust, and is suitable for cross-lingual alignment of AMR graphs.
Our main contributions are: (i) we explore how Transformer-based AMR parsers preserve implicit alignment knowledge and how we can extract it;
(ii) we propose a supervised method using crossattention to enhance the performance of our aligner, (iii) we achieve state-of-the-art results along different alignment standards and demonstrate the effectiveness of our aligner across languages.
## 2 Related Work
AMR alignment Since the appearance of AMR
as a Semantic Parsing formalism, several aligner systems have surfaced that provide a link between the sentence and graph units. JAMR (Flanigan et al., 2014) is a widely used aligner system that employs an ordered list of 14 criteria, including exact and fuzzy matching, to align spans to subgraphs.
However, this approach has limitations as it is unable to resolve ambiguities or learn novel alignment patterns. TAMR (Liu et al., 2018) extends JAMR
by incorporating an oracle parser that selects the alignment corresponding to the highest-scored candidate AMR graph. ISI (Pourdamghani et al., 2014)
aligner utilizes an EM algorithm to establish alignment between words and graphs' semantic units.
First, the graph is linearized, and then the EM
algorithm is employed with a symmetrized scoring function to establish alignments. This method leads to more diversity in terms of alignment patterns, but fails to align easy-to-recognize patterns that could be aligned using rules. LEAMR (Blodgett and Schneider, 2021) is another aligner system that combines rules and EM. This approach aligns all the subgraph structures to any span in the sentence. However, it is based on language-specific rules, making it unsuitable for cross-lingual settings. Moreover, despite several attempts to extend the alignment to non-English languages Anchiêta and Pardo (2020); Oral and Eryigit ˘ (2022),
these efforts are still monolingual since they rely on language-specific strategies. Consequently, in this paper we present an approach that fills this gap.
Cross-attention Most state-of-the-art systems for AMR parsing are based on Encoder-Decoder Transformers, specifically on BART (Lewis et al.,
2020). These models consist of two stacks of Transformer layers, which utilize self- and crossattention as their backbone. The popularity of Transformer models has led to increased interest in understanding how attention encodes information in text and relates to human intuition (Vashishth et al., 2019) and definitions of explainability (Bastings and Filippova, 2020; Bibal et al., 2022). Research has been conducted on how attention operates, relates to preconceived ideas, aggregates information, and explains model behavior for tasks such as natural language inference(Stacey et al., 2021), Translation (Yin et al., 2021; Zhang and Feng, 2021; Chen et al., 2021), Summarization (Xu et al., 2020; Manakul and Gales, 2021) or Sentiment Analysis (Wu et al., 2020). Furthermore, there have been attempts to guide attention to improve interpretability or performance in downstream tasks (Deshpande and Narasimhan, 2020; Sood et al., 2020). However, to the best of our knowledge, there has been no prior study on attention for AMR parsing. This paper fills this gap by investigating the role of attention in AMR parsing.
## 3 Method
Originally described by Vaswani et al. (2017) as
"multi-head attention over the output of the Encoder", and referred to as cross-attention in Lewis et al. (2020), it enables the Decoder to attend to the output of the Encoder stack, conditioning the hidden states of the autoregressive component on the input text. Self-attention and cross-attention modules are defined as:
$$\mathrm{Attention}(Q,K,V)=\mathrm{att}(Q,K)V$$
$$\operatorname{att}(Q,K)=s o f t m a x({\frac{Q K^{T}}{\sqrt{d_{k}}}})$$ $$\operatorname{CrossAtt}(Q,K,V)=$$
$$\mathrm{Concat}(h e a d_{1},...,h e a d_{H})W^{O}$$
$$h e a d_{h}=\mathrm{Attention}(Q W_{h}^{Q},K W_{h}^{K},V W_{h}^{V})$$
where *K, V* = Eℓ ∈ R
ne×dkH and Q =
Dℓ ∈ R
nd×dkH are the Encoder and Decoder hidden states at layer ℓ, ne and nd are the input and output sequence lengths, H is the number of heads, W
Q
h
, WK
hand WV
h ∈ R
dkH×dk are learned weights that project the hidden states to the appropriate dimensions, dk, for each head and WO ∈ R
dkH×dkH is a final learned linear projection. Therefore in each head h and layer ℓ we define the attention weights as attℓh =
att(DℓW
Q
h
, EℓWK
h
) ∈ R
nd×ne.
## 3.1 Unguided Cross-Attention
We argue that there is an intuitive connection between cross-attention and alignments. Under the assumption the Decoder will attend to the parts in the input that are more relevant to predicting the next token, we infer that, when decoding the tokens for a certain node in the graph, attention should focus on related tokens in the input, and therefore the words that align to that node. We will use the crossattention matrices (attℓh
) to compute an alignment between the input and the output.
## 3.2 Guided Cross-Attention
We also aim to explore whether cross-attention can be guided by the alignment between the words of the sentence and the nodes of the graph. To this end, we construct a sparse matrix *align* ∈ R
nd×ne from the automatically-generated alignments:
$$a l i g n(i,j)=\left\{\begin{array}{l l l}{{1}}&{{i f}}&{{x_{i}\sim y_{j}}}\\ {{}}&{{}}&{{}}\\ {{0}}&{{i f}}&{{x_{i}\not\sim y_{j}}}\end{array}\right.$$
where ∼ indicates alignment between subword token xi and graph token yj .
However, even though there are sparse versions of attention (Martins and Astudillo, 2016), these did not produce successful alignments in our experiments. Hence we choose to alleviate the constraint of imposing sparsity by employing the scalar mixing approach introduced in ELMo (Peters et al.,
2018). We learn a weighted mix of each head and obtain a single attention matrix:
$$a t t^{\ell}=\gamma\sum_{h=0}^{H-1}s_{h}^{\ell}a t t_{h}^{\ell}\in\mathbb{R}^{n_{d}\times n_{e}}\qquad{\mathrm{(1)}}$$
where s = *sof tmax*(a) with scalar learnable parameters γ, a0*, . . . , a*H.
The model has the flexibility to learn how to distribute weights such that certain heads give sparser attention similar to alignment, while others can encode additional information that is not dependent on alignment. In our experiments, we use the implementation of Bevilacqua et al. (2021, SPRING)
to train our parser but add an extra cross-entropy loss signal:
$$\begin{array}{l}{{{\mathcal L}=-\sum_{j=1}^{n_{d}}\log p_{B A R T}\left(y_{j}\mid y_{<j},x\right)}}\\ {{{}}}\\ {{-\sum_{j=1}^{n_{d}}\sum_{i=1}^{n_{e}}\log\left(\frac{e^{a t t^{\ell}(i,j)}}{\sum_{k=1}^{n_{d}}e^{a t t^{\ell}(i,k)}}\frac{a l i g n(i,j)}{\sum_{k=1}^{n_{e}}a l i g n(k,j)}\right)}}\end{array}$$ #### 3.3 Saliency Methods
A theoretical alternative to our reasoning about cross-attention is the use of input saliency methods.
These methods assign higher importance to the input tokens that correspond to a particular node in the graph or were more important in their prediction during decoding. To obtain these importance weights, we employ Captum (Kokhlikyan et al., 2020), an open-source library for model interpretability and understanding, which provides a variety of saliency methods, including gradientbased methods such as Integrated Gradients (IG),
Saliency (Simonyan et al., 2014), and Input X Gradient (IxG), backpropagation-based methods such as Deeplift (Shrikumar et al., 2017) and Guided Backpropagation (GB) (Springenberg et al., 2015),
and finally occlusion-based methods (Zeiler and Fergus, 2014).
We obtain a weight matrix sal ∈ R
nd×ne with the same size as the cross-attention matrix and use it to extract alignments in the same fashion as the unguided cross-attention method. This approach allows us to explore the input tokens that have a greater impact on the decoding process and can aid in understanding the reasoning behind the alignments made by the model.
## 3.4 Alignment Extraction
Our algorithm1to extract and align the input-output spans is divided into six steps:
1. **Alignment score matrix:** we create a matrix M ∈ R
nd×ne, where ne is the number of tokens in the sentence and nd is the number of tokens in the linearized graph, using the crossattention weights (attℓh or attℓ) as described in Section 3.
2. **Span segmentation:** For each sentence word, we sum the scores of tokens that belong to the same word column-wise in M. Then, for LEAMR alignments (see Section 4.2), the sentence tokens are grouped into spans using their span segmentation (see Appendix A).
3. **Graph segmentation:** We sum the score of tokens that belong to the same graph's semantic unit row-wise in M.
4. **Sentence graph tokens map:** We iterate over all the graph's semantic units and map them to the sentence span with highest score in M.
5. **Special graph structures:** We revise the mapping by identifying subgraphs that represent literal or matching spans - e.g., named entities, dates, specific predicates, etc. - and align them accordingly.
6. **Alignment formatting:** We extract the final alignments to the appropriate format using the resulting mapping relating graph's semantic units to sentence spans.
## 4 Experimental Setup 4.1 Graph Inventory
AMR 3.0 (LDC2020T02) consists of 59,255 sentence-graph pairs that are manually annotated. However, it lacks alignment information between the nodes in the graphs and the concepts in the sentences. We use the train split for the guided approach and use the respective validation and test splits from the alignment systems. Additionally, to evaluate cross-lingual performance, we use the gold German, Italian, and Spanish sentences of "AMR 2.0 - Four Translation" (LDC2020T07)
which are human parallel translations of the test set 1The pseudo-algorithm is described in the Appendix C.
in AMR 2.02, paired with their English graphs from the AMR 3.0 test set. Despite this, as the graph inventory does not contain alignment information, it becomes necessary to access other repositories in order to obtain the alignment.
## 4.2 Alignment Standards
We propose an approach that is agnostic to different alignment standards and we evaluate it on two standards that are commonly used: ISI and LEAMR.
ISI The ISI standard, as described in (Pourdamghani et al., 2014), aligns single spans in the sentence to graphs' semantic units (nodes or relations), and aligns relations and reentrant nodes when they appear explicitly in the sentence. The alignments are split into two sets of 200 annotations each, which we use as validation and test sets, updated to the AMR 3.0 formalism. For the cross-lingual alignment setup, we project English ISI graph-sentence alignments to the sentences in other languages, using the machine translation aligner (Dou and Neubig, 2021). This involves connecting the nodes in the graph to the spans in non-English sentences using the projected machine translation alignments between the English spans and the corresponding non-English sentence spans.
By leveraging this, we are able to generate a silver alignment for cross-lingual AMR, which enables us to validate the model's performance in a cross-lingual setup and determine its scalability across-languages.
LEAMR The LEAMR standard differentiates among four different types of alignment: i) Subgraph Alignments, where all the subgraphs that explicitly appear in the sentence are aligned to a list of consecutive spans, ii) Duplicate Subgraph, where all the subgraphs that represent omit repeated concepts in the sentence are aligned, iii)
Relation Alignments, where all the relations that were not part of a previous subgraph structure are aligned, and iv) Reentrancy Alignments, where all the reentrant nodes are aligned. In contrast to ISI, all the semantic units in the graph are aligned to some list of consecutive spans in the text. We use 150 alignments as the validation set and 200 as the test set, which includes sentence-graph pairs from The Little Prince Corpus (TLP) complemented with randomly sampled pairs from AMR 3.0.
2The sentences of AMR 2.0 are a subset of AMR 3.0.
![4_image_0.png](4_image_0.png)
## 4.3 Model
We use SPRING (Bevilacqua et al., 2021) as our parsing model based on the BART-large architecture (Lewis et al., 2020) for English and SPRING based on mBART for non-English languages mBART (Liu et al., 2020) for the multilingual setting. We extract all attℓh matrices from a model trained on AMR 3.0 as in Blloshmi et al.
(2021) in order to perform our unguided crossattention analysis. For the guided approach we re-train using the same hyperparameters as the original implementation, but with an extra loss signal as described in Section 3.2 based on either LEAMR or ISI. When using LEAMR alignments, we restructure the training split in order to exclude any pair from their test and validation sets.
## 5 Experiments 5.1 Correlation
In this study, we investigate the correlation between cross-attention and alignment by computing the Pearson's r correlation coefficient between the attℓh matrix and the LEAMR alignment matrix *align*.
To do so, we first flatten the matrices and remove any special tokens that are not relevant for alignment. As shown in Figure 2, there is a clear positive correlation between the two.
While we do not have a clear explanation for why certain heads have a higher correlation than others, it is evident that there is a connection between cross-attention and alignment. For example, head 6 in layer 3 (i.e., att36
) has a correlation coefficient of 0.635, approximately the same as the sum of the entire layer.
With regard to the saliency methods described in Section 3.3, the two most highly correlated methods were Saliency and GB, with a correlation coefficient of 0.575. Despite this result, we observe that saliency methods tend to focus more on essential parts of the sentence, such as the subject or predicate. These parts are usually aligned to more nodes and relations, which explains the high correlation, but they lack nuance compared to cross-attention.
Our best results were obtained by supervising layer 3 during training with the approach outlined in Section 3.2, using Cross-Entropy Loss on half of the heads (i.e., 3, 4, 5, 6, 7, 11, 12, and 15) that were selected based on their correlation on the validation set. This did not affect the performance of parsing.
When we looked at att3 using the learned weighted mix from Equation 1 with LEAMR alignments, the correlation reached 0.866, which is significantly higher than any other method. Figure 2 shows the impact of supervising half the heads on layer 3 and how it influences heads in other layers.
To gain a better understanding of these results, we present an example from the TLP corpus in Fig-
![5_image_0.png](5_image_0.png)
ure 3 to illustrate the different methods, including cross-attention and saliency methods. The left image shows the cross-attention values for att36
. Despite not having seen any alignment information, the model is able to correctly match non-trivial concepts such as "merchant" and "person". The center-left image illustrates how saliency methods focus on essential parts of the sentence, but lack nuance compared to cross-attention. The center-right image shows that supervising learning on layer 3 results in more condensed attention, which is associated with the improvement in correlation. However, it is important to note that the model can reliably attend to incorrect positions, such as aligning
"pointer" to "merchant" instead of "sold".
## 5.2 Results
LEAMR Table 1 shows the performances of our two approaches on the LEAMR gold alignments compared to previous systems. We use the same evaluation setup as Blodgett and Schneider (2021),
where the partial match assigns a partial credit from Jaccard indices between nodes and tokens. In both guided and unguided methods, we extract the score matrix for Algorithm 1 from the sum of the crossattention in the first four layers. We use a Wilcoxon signed-rank test (Wilcoxon, 1945) on the alignment matches per graph to check for significant differences. Both our approaches are significantly different compared to LEAMR (p=0.031 and p=0.007 respectively). However, we find no statistical difference between our unguided and guided approaches
(p=0.481).
Our guided attention approach performs best, improving upon LEAMR on Subgraph (+0.5) and Relation (+2.6). For Reentrancy, performance is relatively low, and we will explore the reasons for this in Section 7. Perhaps most interesting is the performance of the unguided system using raw cross-attention weights from SPRING. The system remains competitive against the guided model without having access to any alignment information. It outperforms LEAMR which, despite being unsupervised with respect to alignments, relies on a set of inductive biases and rules based on alignments.
While we also draw on specific rules related to the graph structure in post-processing, we will need to investigate their impact in an ablation study.
Relations that are argument structures (i.e.,
:ARG and :ARG-of) usually depend on the predictions for their parent or child nodes; hence their improvement would be expected to be tied to the Subgraph Alignment. The results in Table 2 reassure us that this intuition is correct. Notice how for Single Relations (such as :*domain* or :*purpose* in Figure 3) the performance by LEAMR was much lower, even worse than that of ISI: Blodgett and Schneider (2021) argued that this was due to the model being overeager to align to frequent prepositions such as to and of. On the other hand, our unguided method achieves 15 points over ISI and 20 over LEAMR, which hints at the implicit knowledge on alignment that cross-attention encodes. Our guided approach experiences a considerable drop for Single Relations since it was trained on data generated by LEAMR, replicating its faulty behavior albeit
| Exact Alignment | Partial Alignment | Spans | Coverage | | | | | | |
|-----------------------|---------------------|---------|------------|-------|-------|-------|-------|--------|--------|
| P | R | F1 | P | R | F1 | F1 | | | |
| Subgraph | ISI | 71.56 | 68.24 | 69.86 | 78.03 | 74.54 | 76.24 | 86.59 | 78.70 |
| Alignment | JAMR | 87.21 | 83.06 | 85.09 | 90.29 | 85.99 | 88.09 | 92.38 | 91.10 |
| (1707) | TAMR | 85.68 | 83.38 | 84.51 | 88.62 | 86.24 | 87.41 | 94.64 | 94.90 |
| LEAMR | 93.91 | 94.02 | 93.97 | 95.69 | 95.81 | 95.75 | 96.05 | 100.00 | |
| LEAMR † | 93.74 | 93.91 | 93.82 | 95.51 | 95.68 | 95.60 | 95.54 | 100.00 | |
| Ours - Unguided | 94.11 | 94.49 | 94.30 | 96.03 | 96.42 | 96.26 | 95.94 | 100.00 | |
| Ours - Guided - ISI | 89.87 | 91.97 | 90.91 | 92.11 | 94.27 | 93.18 | 93.69 | 100.00 | |
| Ours - Guided - LEAMR | 94.39 | 94.67 | 94.53 | 96.62 | 96.90 | 96.76 | 96.40 | 100.00 | |
| Relation | ISI | 59.28 | 8.51 | 14.89 | 66.32 | 9.52 | 16.65 | 83.09 | 9.80 |
| Alignment | LEAMR | 85.67 | 87.37 | 85.52 | 88.74 | 88.44 | 88.59 | 95.41 | 100.00 |
| (1263) | LEAMR † | 84.63 | 84.85 | 84.74 | 87.77 | 87.99 | 87.88 | 91.98 | 100.00 |
| Ours - Unguided | 87.14 | 87.59 | 87.36 | 89.87 | 90.33 | 90.10 | 91.03 | 100.00 | |
| Ours - Guided - ISI | 83.82 | 83.39 | 83.61 | 86.45 | 86.00 | 86.22 | 87.30 | 100.00 | |
| Ours - Guided - LEAMR | 88.03 | 88.18 | 88.11 | 91.08 | 91.24 | 91.16 | 91.87 | 100.00 | |
| Reentrancy | LEAMR | 55.75 | 54.61 | 55.17 | - | - | - | - | 100.00 |
| Alignment | LEAMR † | 54.61 | 54.05 | 54.33 | - | - | - | - | 100.00 |
| (293) | Ours - Unguided | 44.75 | 44.59 | 44.67 | - | - | - | - | 100.00 |
| Ours - Guided - ISI | 42.09 | 39.35 | 40.77 | - | - | - | - | 100.00 | |
| Ours - Guided - LEAMR | 56.90 | 57.09 | 57.00 | - | - | - | - | 100.00 | |
| Duplicate | LEAMR | 66.67 | 58.82 | 62.50 | 70.00 | 61.76 | 65.62 | - | 100.00 |
| Subgraph | LEAMR † | 68.75 | 64.71 | 66.67 | 68.75 | 64.71 | 66.67 | - | 100.00 |
| Alignment | Ours - Unguided | 77.78 | 82.35 | 80.00 | 77.78 | 82.35 | 80.00 | - | 100.00 |
| (17) | Ours - Guided - ISI | 63.16 | 70.59 | 66.67 | 65.79 | 73.53 | 69.44 | - | 100.00 |
| Ours - Guided - LEAMR | 70.00 | 82.35 | 75.68 | 72.50 | 85.29 | 78.38 | - | 100.00 | |
| AMR parser | P | R | F1 | |
|----------------------|-----------------|------|------|------|
| ALL | ISI | 59.3 | 08.5 | 14.9 |
| LEAMR † | 84.6 | 84.9 | 84.7 | |
| Ours - Unguided | 87.1 | 87.6 | 87.4 | |
| Ous - Guided - LEAMR | 88.0 | 88.2 | 88.1 | |
| Single | ISI | 82.9 | 52.1 | 64.0 |
| Relations | LEAMR † | 64.8 | 55.7 | 59.9 |
| (121) | Ours - Unguided | 79.5 | 79.5 | 79.5 |
| Ous - Guided - LEAMR | 77.5 | 64.8 | 70.5 | |
| Argument | ISI | 39.6 | 03.5 | 06.4 |
| Structure | LEAMR † | 86.6 | 88.2 | 87.4 |
| (1042) | Ours - Unguided | 87.9 | 88.4 | 88.2 |
| Ous - Guided - LEAMR | 89.0 | 90.8 | 89.9 | |
## Being Slightly More Robust.
ISI When we test our systems against the ISI
alignments, both our models achieve state-of-theart results, surpassing those of previous systems, including LEAMR. This highlights the flexibility of cross-attention as a standard-agnostic aligner
(we provide additional information in Appendix B). Table 3 shows the performance of our systems and compares ones with the ISI alignment as a reference. We omit relations and Named Entities to focus solely on non-rule-based alignments and have a fair comparison between systems. Here, our aligner does not rely on any span-segmentation, hence nodes and spans are aligned solely based on which words and nodes share the highest crossattention values. Still, both our alignments outperform those of the comparison systems in English.
Moreover, only our approach achieves competitive results in Spanish, German and Italian - obtaining 40 points more on average above the second best model - while the other approaches are hampered by the use of English-specific rules. However, we found two reasons why non-English systems
| EN | DE | ES | IT | AVG | | | | | | | | | | | |
|----------|------|------|------|-------|------|-------|------|------|------|------|------|------|------|------|------|
| P | R | F1 | P | R | F1 | P | R | F1 | P | R | F1 | P | R | F1 | |
| JAMR | 92.7 | 80.1 | 85.9 | 75.4 | 6.6 | 12.1 | 84.4 | 16.1 | 27.1 | 64.8 | 13.2 | 21.9 | 79.3 | 29.0 | 36.8 |
| TAMR | 92.1 | 84.5 | 88.1 | 73.7 | 6.4 | 11.8 | 84.0 | 16.4 | 27.5 | 64.3 | 13.2 | 21.9 | 78.5 | 30.1 | 37.3 |
| LEAMR | 85.9 | 92.3 | 89.0 | 8.4 | 9.3 | 8.8 | 8.1 | 9.0 | 8.5 | 9.0 | 9.5 | 9.3 | 27.9 | 30.0 | 28.9 |
| Unguided | 95.4 | 93.2 | 94.3 | 64.0 | 74.4 | 68.85 | 67.9 | 77.1 | 72.2 | 67.4 | 75.5 | 71.2 | 73.7 | 80.1 | 76.6 |
| Guided | 96.3 | 94.2 | 95.2 | - | - | - | - | - | - | - | - | - | - | - | - |
Table 3: ISI results. Column blocks: models, language.
| GOLD | Without Rules | Layers | | | | | | | | | | | | | | | |
|----------|-----------------|----------|---------|------|--------|------|-------|-------|--------|--------|-------|-------|--------|--------|------|------|------|
| Unguided | Guided | | | | | | | | | | | | | | | | |
| LEAMR † | Ung. | Guided | LEAMR † | Ung. | Guided | Sal. | [0:4] | [4:8] | [8:12] | [0:12] | [0:4] | [4:8] | [8:12] | [0:12] | [3] | [3]* | |
| Sub. | 96.5 | 96.7 | 97.0 | 87.6 | 88.6 | 93.4 | 62.2 | 94.3 | 69.8 | 63.3 | 87.7 | 94.5 | 74.4 | 66.3 | 93.2 | 93.7 | 93.7 |
| Rel. | 87.1 | 89.2 | 90.3 | 26.6 | 60.1 | 83.4 | 50.0 | 87.7 | 72.7 | 61.6 | 84.5 | 88.1 | 73.8 | 62.5 | 87.9 | 86.2 | 85.9 |
| Reen. | 56.8 | 46.7 | 59.0 | 15.2 | 38.6 | 57.0 | 34.5 | 44.7 | 41.1 | 36.1 | 41.9 | 57.0 | 39.2 | 33.0 | 51.0 | 52.7 | 53.4 |
| Dupl. | 62.9 | 80.0 | 75.7 | 40.0 | 71.8 | 73.7 | 9.5 | 80.0 | 11.1 | 27.3 | 64.3 | 75.9 | 30.0 | 27.3 | 66.7 | 70.3 | 70.3 |
perform worse than in English: 1) linguistic divergences (as explained in Wein and Schneider
(2021)), and ii) the machine translation alignment error.
## 6 Ablation Study
Gold spans LEAMR relies on a span segmentation phase, with a set of multiword expressions and Stanza-based named entity recognition. We use the same system in order to have matching sentence spans. However, these sometimes differ from the gold spans, leading to errors. Table 4 (left) shows performance using an oracle that provides gold spans, demonstrating how our approach still outperforms LEAMR across all categories.
Rules All modern alignment systems depend on rules to some degree. For instance, we use the subgraph structure for Named Entities, certain relations are matched to their parent or child nodes, etc. (see Appendix A for more details). But what is the impact of such rules? As expected, both LEAMR and our unguided method see a considerable performance drop when we remove them.
For Relation, LEAMR drops by almost 60 points, since it relies heavily on the predictions of parent and child nodes to provide candidates to the EM model. Our unguided approach also suffers from such dependency, losing 25 points. However, our guided model is resilient to rule removal, dropping by barely one point on Subgraph and 5 points on Relation.
Layers Figure 2 shows how alignment acts differently across heads and layers. We explore this information flow in the Decoder by extracting the alignments from the sum of layers at different depths. The right of Table 4 shows this for both our unguided and guided models, as well as the Saliency method. [3] indicates the sum of heads in the supervised layer, while [3]* is the learned weighted mix. From our results early layers seem to align more explicitly, with performance dropping with depth. This corroborates the idea that Transformer models encode basic semantic information early (Tenney et al., 2019). While layers 7 and 8 did show high correlation values, the cross-attention becomes more disperse with depth, probably due to each token encoding more contextual information.
## 7 Error Analysis
We identify two main classes of error that undermine the extraction of alignments.
Consecutive spans Because each subgraph in LEAMR is aligned to a list of successive spans, the standard cannot deal correctly with transitive phrasal verbs. For example, for the verb "take off" the direct object might appear in-between ("He took his jacket off in Málaga"). Because these are not consecutive spans, we align just to "take" or "off".
Rules We have a few rules for recognizing subgraph structures, such as Named Entities, and align them to the same spans. However, Named Entity structures contain a placeholder node indicating the entity type; when the placeholder node appears explicitly in the sentence, the node should not be
![8_image_0.png](8_image_0.png)
part of the Named Entity subgraph. For example, when aligning *'Málaga', the city*, the placeholder node should be aligned to *city* while our model aligned it to *Málaga*.
## 8 Cross-Lingual Analysis: A Case Study
To investigate the potential causes of misalignment between English and non-English languages, we conduct a case study that qualitatively examines the differences in alignment generated by different systems and languages. Figure 4 illustrates the sentence, *"why is it so hard to understand?"* with its human translations in German, Spanish, and Italian, and its AMR. In the Italian translation, the subject of the verb is omitted, while in the Spanish translation the focus of the question is modified from asking the reason why something is difficult to understand to asking directly *what is difficult to* understand, making "qué" the subject. As a consequence, in both cases making it impossible to align
"it" with any word in either the Italian or Spanish sentence by Machine Translation Alignment. Table 5 presents the alignments generated for the sentence in each language and with each model in ISI
format. Although our model was able to align the node *"it"* by aligning it with the conjugated verb in the Italian sentence and with the word *"qué"*
in the Spanish sentence, which serves as the subject, this resulted in an error in our evaluation since the alignment of "it" was not projected in either Italian or Spanish. In addition, we also observed the performance of jamr and tamr, which are rulebased systems, and found that they were only able to align the word "so" in the German translation, as it shares the same lemma in English. In contrast, LEAMR was able to detect more alignments
En Why is it so hard to understand ?
![8_image_1.png](8_image_1.png)
![8_image_2.png](8_image_2.png)
![8_image_3.png](8_image_3.png)
![8_image_4.png](8_image_4.png)
![8_image_5.png](8_image_5.png)
ref 1.2 1.2.1 - 1.1.1 1.3 1 - 1.1 - ours 1.2 1.2.1 - 1.1.1 1.3 1 - 1.1 —
jamr - — 1.1.1 1.3 1 - 1.1 —
tamr - — 1.1.1 1.3 1 1.2 - 1.1 —
leamr 1.2 1.1.1 1.1.1 1.3 1 - 1.1 1.2.1 De Warum ist das so schwer zu verstehen ?
ref 1.2 1.2.1 - 1.1.1 1.3 1 - 1.1 —
ours 1.2 1.2.1 1.1.1 1.3 1 - 1.1 —
jamr - — - 1.3 - — - — tamr - — - 1.3 - — - —
leamr 1 - 1.2 1.2 1.3 1.1.1 1.2.1 —
Es Qué es - tan díficil de entender ?
ref 1.2 1.2.1 - — 1.3 1 - 1.1 —
ours 1.1.1 1.2.1 - — 1.2 1.3 1 - 1.1 —
jamr - — - — - — - —
tamr - — - — - — - —
leamr 1.1.1 1 1.2 - — 1.1 - 1.3 1.2.1 It Perché é - cosí difficile da capire ?
ref 1.2 1.2.1 - — 1.3 1 - 1.1 - ours 1.2 1.2.1 1.1.1 - 1.3 1 - 1.1 —
jamr - — - — - — - —
tamr - — - — - — - —
leamr 1 1.1 - 1.3 1.1.1 1.2.1 1.2
due to its requirement to align all nodes to a corresponding word in the target language. However, the alignments generated by LEAMR appeared to be almost entirely random.
## 9 Conclusion
In this paper, we have presented the first AMR
aligner that can scale cross-lingually and demonstrated how cross-attention is closely tied to alignment in AMR Parsing. Our approach outperforms previous aligners in English, being the first to align cross-lingual AMR graphs. We leverage the cross-attention from current AMR parsers, without overhead computation or affecting parsing quality. Moreover, our approach is more resilient to the lack of handcrafted rules, highlighting its capability as a standard- and language-agnostic aligner, paving the way for further NLP tasks. As a future direction, we aim to conduct an analysis of the attention heads that are not correlated with the alignment information in order to identify the type of information they capture, such as predicate identification, semantic relations, and other factors. Additionally, we plan to investigate how the alignment information is captured across different NLP tasks and languages in the cross-attention mechanism of sequence-tosequence models. Such analysis can provide insights into the inner workings of the models and improve our understanding of how to enhance their performance in cross-lingual settings.
## 10 Limitations
Despite the promising results achieved by our proposed method, there are certain limitations that need to be noted. Firstly, our approach relies heavily on the use of Transformer models, which can be computationally expensive to train and run. Additionally, the lower performance of our aligner for languages other than English is still a substantial shortcoming, which is discussed in Section 5.2.
Furthermore, our method is not adaptable to nonTransformer architectures, as it relies on the specific properties of Transformer-based models to extract alignment information.
Lastly, our method is based on the assumption that the decoder will attend to those input tokens that are more relevant to predicting the next one.
However, this assumption may not always hold true in practice, which could lead to suboptimal alignments.
In conclusion, while our proposed method presents a promising approach for cross-lingual AMR alignment, it is important to consider the aforementioned limitations when applying our method to real-world scenarios. Future research could focus on addressing these limitations and exploring ways to improve the performance of our aligner for languages other than English.
## 11 Ethics Statement
While our approach has shown itself to be effective in aligning units and spans in sentences of different languages, it is important to consider the ethical and social implications of our work.
One potential concern is the use of Transformerbased models, which have been shown to perpetuate societal biases present in the data used for training. Our approach relies on the use of these models, and it is therefore crucial to ensure that the data used for training is diverse and unbiased.
Furthermore, the use of cross-attention in our approach could introduce new ways to supervise a model in order to produce harmful or unwanted model predictions. Therefore, it is crucial to consider the ethical implications of any guidance or supervision applied to models and to ensure that any training data used to guide the model is unbiased and does not perpetuate harmful stereotypes or discrimination.
Additionally, it is important to consider the potential impact of our work on under-resourced languages. While our approach has shown to be effective in aligning units and spans in sentences of different languages, it is important to note that the performance gap for languages other than English still exists. Further research is needed to ensure that our approach is accessible and beneficial for under-resourced languages.
## Acknowledgments
The authors gratefully acknowledge the support of the European Union's Horizon 2020 research project Knowledge Graphs at Scale (KnowGraphs) under the Marie Marie Skłodowska-Curie grant agreement No 860801.
The last author gratefully acknowledges the support of the PNRR MUR project PE0000013-FAIR.
## References
Rafael Anchiêta and Thiago Pardo. 2020. Semantically inspired AMR alignment for the Portuguese language.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing (EMNLP),
pages 1595–1600, Online. Association for Computational Linguistics.
Xuefeng Bai, Yulong Chen, and Yue Zhang. 2022.
Graph pre-training for AMR parsing and generation.
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6001–6015, Dublin, Ireland.
Association for Computational Linguistics.
Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178–186, Sofia, Bulgaria. Association for Computational Linguistics.
Jasmijn Bastings and Katja Filippova. 2020. The elephant in the interpretability room: Why use attention as explanation when we have saliency methods? In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 149–155, Online. Association for Computational Linguistics.
Michele Bevilacqua, Rexhina Blloshmi, and Roberto Navigli. 2021. One SPRING to rule them both: Symmetric AMR semantic parsing and generation without a complex pipeline. In *Proceedings of AAAI*.
Adrien Bibal, Rémi Cardon, David Alfter, Rodrigo Souza Wilkens, Xiaoou Wang, Thomas François, and Patrick Watrin. 2022. Is attention explanation? an
introduction to the debate. In Association for Computational Linguistics. Annual Meeting. Conference Proceedings.
Rexhina Blloshmi, Michele Bevilacqua, Edoardo Fabiano, Valentina Caruso, and Roberto Navigli. 2021.
SPRING Goes Online: End-to-End AMR Parsing and Generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 134–142, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Rexhina Blloshmi, Rocco Tripodi, and Roberto Navigli. 2020. XL-AMR: Enabling cross-lingual AMR
parsing with transfer learning techniques. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2487–2500, Online. Association for Computational Linguistics.
Austin Blodgett and Nathan Schneider. 2021. Probabilistic, structure-aware algorithms for improved variety, accuracy, and coverage of AMR alignments.
In *Proceedings of the 59th Annual Meeting of the* Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3310–3321, Online. Association for Computational Linguistics.
Claire Bonial, Lucia Donatelli, Mitchell Abrams, Stephanie M. Lukin, Stephen Tratz, Matthew Marge, Ron Artstein, David Traum, and Clare Voss. 2020.
Dialogue-AMR: Abstract Meaning Representation for dialogue. In *Proceedings of the 12th Language* Resources and Evaluation Conference, pages 684–
695, Marseille, France. European Language Resources Association.
Chi Chen, Maosong Sun, and Yang Liu. 2021. Maskalign: Self-supervised neural word alignment. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4781–
4791, Online. Association for Computational Linguistics.
Marco Damonte and Shay B. Cohen. 2018. Crosslingual Abstract Meaning Representation parsing. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1146–1155, New Orleans, Louisiana. Association for Computational Linguistics.
Ameet Deshpande and Karthik Narasimhan. 2020.
Guiding attention for self-supervised learning with transformers. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 4676–
4686, Online. Association for Computational Linguistics.
Zi-Yi Dou and Graham Neubig. 2021. Word alignment by fine-tuning embeddings on parallel corpora. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2112–2128, Online.
Association for Computational Linguistics.
Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A. Smith. 2014. A discriminative graph-based parser for the Abstract Meaning Representation. In *Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics*
(Volume 1: Long Papers), pages 1426–1436, Baltimore, Maryland. Association for Computational Linguistics.
Hardy Hardy and Andreas Vlachos. 2018. Guided neural language generation for abstractive summarization using Abstract Meaning Representation. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 768–773, Brussels, Belgium. Association for Computational Linguistics.
Pavan Kapanipathi, Ibrahim Abdelaziz, Srinivas Ravishankar, Salim Roukos, Alexander Gray, Ramón Fernandez Astudillo, Maria Chang, Cristina Cornelio, Saswati Dana, Achille Fokoue, Dinesh Garg, Alfio Gliozzo, Sairam Gurajada, Hima Karanam, Naweed Khan, Dinesh Khandelwal, Young-Suk Lee, Yunyao Li, Francois Luus, Ndivhuwo Makondo, Nandana Mihindukulasooriya, Tahira Naseem, Sumit Neelam, Lucian Popa, Revanth Gangi Reddy, Ryan Riegel, Gaetano Rossiello, Udit Sharma, G P Shrivatsa Bhargav, and Mo Yu. 2021. Leveraging Abstract Meaning Representation for knowledge base question answering. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 3884–3894, Online. Association for Computational Linguistics.
Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, Bilal Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, and Orion Reblitz-Richardson. 2020.
Captum: A unified and generic model interpretability library for pytorch.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Kexin Liao, Logan Lebanoff, and Fei Liu. 2018. Abstract Meaning Representation for multi-document summarization. In *Proceedings of the 27th International Conference on Computational Linguistics*,
pages 1178–1190, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Jungwoo Lim, Dongsuk Oh, Yoonna Jang, Kisu Yang, and Heuiseok Lim. 2020. I know what you asked:
Graph path learning using AMR for commonsense reasoning. In *Proceedings of the 28th International* Conference on Computational Linguistics, pages 2459–2471, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Yijia Liu, Wanxiang Che, Bo Zheng, Bing Qin, and Ting Liu. 2018. An AMR aligner tuned by transitionbased parser. In *Proceedings of the 2018 Conference* on Empirical Methods in Natural Language Processing, pages 2422–2430, Brussels, Belgium. Association for Computational Linguistics.
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. *Transactions of the Association for Computational Linguistics*, 8:726–742.
Potsawee Manakul and Mark Gales. 2021. Long-span summarization via local attention and content selection. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 6026–6041, Online. Association for Computational Linguistics.
Abelardo Carlos Martínez Lorenzo, Marco Maru, and Roberto Navigli. 2022. Fully-Semantic Parsing and Generation: the BabelNet Meaning Representation. In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 1727–1741, Dublin, Ireland.
Association for Computational Linguistics.
André F. T. Martins and Ramón F. Astudillo. 2016.
From softmax to sparsemax: A sparse model of attention and multi-label classification. In *Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48*,
ICML'16, page 1614–1623. JMLR.org.
Roberto Navigli, Rexhina Blloshmi, and Abelardo Carlos Martinez Lorenzo. 2022. BabelNet Meaning Representation: A Fully Semantic Formalism to Overcome Language Barriers. *Proceedings of the AAAI*
Conference on Artificial Intelligence, 36.
K. Elif Oral and Gül¸sen Eryigit. 2022. ˘ AMR alignment for morphologically-rich and pro-drop languages. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 143–152, Dublin, Ireland.
Association for Computational Linguistics.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237,
New Orleans, Louisiana. Association for Computational Linguistics.
Nima Pourdamghani, Yang Gao, Ulf Hermjakob, and Kevin Knight. 2014. Aligning English strings with Abstract Meaning Representation graphs. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 425–429, Doha, Qatar. Association for Computational Linguistics.
Sudha Rao, Daniel Marcu, Kevin Knight, and Hal Daumé III. 2017. Biomedical event extraction using Abstract Meaning Representation. In *BioNLP 2017*,
pages 126–135, Vancouver, Canada,. Association for Computational Linguistics.
Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. 2017. Learning important features through propagating activation differences. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, page 3145–3153.
JMLR.org.
Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2014. Deep inside convolutional networks:
Visualising image classification models and saliency maps. *CoRR*, abs/1312.6034.
Linfeng Song, Daniel Gildea, Yue Zhang, Zhiguo Wang, and Jinsong Su. 2019. Semantic neural machine translation using AMR. *Transactions of the Association for Computational Linguistics*, 7:19–31.
Ekta Sood, Simon Tannert, Philipp Mueller, and Andreas Bulling. 2020. Improving natural language processing tasks with human gaze-guided neural attention. In *Advances in Neural Information Processing Systems*, volume 33, pages 6327–6341. Curran Associates, Inc.
Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin A. Riedmiller. 2015. Striving for simplicity: The all convolutional net. In *3rd International Conference on Learning Representations,*
ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Workshop Track Proceedings.
Joe Stacey, Yonatan Belinkov, and Marek Rei. 2021. Supervising model attention with human explanations for robust natural language inference.
Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019.
BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593–
4601, Florence, Italy. Association for Computational Linguistics.
Sarah Uhrig, Yoalli Garcia, Juri Opitz, and Anette Frank.
2021. Translate, then parse! a strong baseline for cross-lingual AMR parsing. In Proceedings of the 17th International Conference on Parsing Technologies and the IWPT 2021 Shared Task on Parsing into Enhanced Universal Dependencies (IWPT 2021), pages 58–64, Online. Association for Computational Linguistics.
Shikhar Vashishth, Shyam Upadhyay, Gaurav Singh Tomar, and Manaal Faruqui. 2019. Attention interpretability across nlp tasks. *CoRR*, abs/1909.11218.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc.
Shira Wein and Nathan Schneider. 2021. Classifying divergences in cross-lingual AMR pairs. In *Proceedings of The Joint 15th Linguistic Annotation Workshop (LAW) and 3rd Designing Meaning Representations (DMR) Workshop*, pages 56–65, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Frank Wilcoxon. 1945. Individual comparisons by ranking methods. *Biometrics Bulletin*, 1(6):80–83.
Zhengxuan Wu, Thanh-Son Nguyen, and Desmond Ong.
2020. Structured self-AttentionWeights encode semantics in sentiment analysis. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 255–264, Online. Association for Computational Linguistics.
- Similarly for Named Entities, we align the whole subgraph structure based on its child nodes which indicate its surfaceform. However this leads to some errors as described in Section 7.
- We align node *amr-unknown* to the question mark if it appears in the sentence.
## A.2 Relations
- For the relation *:condition* we align it to the word if when it appears in the sentence.
- *:purpose* is aligned with to when in the sentence.
- *:ARGX* relations are aligned to the same span as the parent node, while *:ARGX-of* to that of the child, since they share the alignment of the predicate they are connected to.
Matthew D. Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In ECCV.
Song Xu, Haoran Li, Peng Yuan, Youzheng Wu, Xiaodong He, and Bowen Zhou. 2020. Self-attention guided copy mechanism for abstractive summarization. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 1355–1362, Online. Association for Computational Linguistics.
Kayo Yin, Patrick Fernandes, Danish Pruthi, Aditi Chaudhary, André F. T. Martins, and Graham Neubig. 2021. Do context-aware translation models pay the right attention? In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1:
Long Papers), pages 788–801, Online. Association for Computational Linguistics.
- Nodes *have-org-role-91* and *have-rel-role-91* follow a fixed structure related to a person ie.
the sentence word *enemy* is represented as person → have-rel-role-91 → *enemy*, therefore for such subgraphs we use the alignment from the child node.
- For *:mod* and *:duration* we use the alignment from the child node.
Shaolei Zhang and Yang Feng. 2021. Modeling concentrated cross-attention for neural machine translation with Gaussian mixture model. In Findings of the Association for Computational Linguistics: EMNLP
2021, pages 1401–1411, Punta Cana, Dominican Republic. Association for Computational Linguistics.
- For *:domain* and *:opX* we use the alignment from the parent node.
Jiawei Zhou, Tahira Naseem, Ramón Fernandez Astudillo, Young-Suk Lee, Radu Florian, and Salim Roukos. 2021. Structure-aware fine-tuning of sequence-to-sequence transformers for transitionbased AMR parsing. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6279–6290, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
## A Leamr Alignment Rules A.1 Subgraph
The LEAMR standard has some predefined strategies for alignments that were followed during their annotation, as well as fixed in their alignment pipeline along EM. We kept a few of these strategies when extracting the alignment, just those related to the structure of the graph, but not those concerning token matching between the sentence and the graph.
We explore the variance with different seeds when guiding cross-attention. Table 1 reports on a single seed selected at random. Table 6 shows the results for five different seeds as well as the average and standard deviation. We observe some variance, especially for those alignment types with fewer
## B Extra Results B.1 Leamr Results
| Exact Alignment | Partial Alignment | Spans | | | | | | |
|------------------------------------------------------------------------------------------------------------|---------------------|---------|-------|-------|-------|-------|-------|-------|
| P | R | F1 | P | R | F1 | F1 | | |
| Subgraph | Run 1 | 94.39 | 94.67 | 94.53 | 96.62 | 96.90 | 96.76 | 96.40 |
| Alignment | Run 2 | 93.79 | 93.85 | 93.82 | 96.22 | 96.27 | 96.25 | 96.05 |
| (1707) | Run 3 | 94.26 | 94.32 | 94.29 | 96.60 | 96.66 | 96.63 | 96.34 |
| Run 4 | 94.20 | 94.26 | 94.23 | 96.47 | 96.53 | 96.50 | 96.22 | |
| Run 5 | 93.81 | 94.14 | 93.98 | 95.81 | 96.14 | 95.97 | 95.73 | |
| Average | 94.09 | 94.25 | 94.17 | 96.34 | 96.50 | 96.42 | 96.15 | |
| Std | 0.27 | 0.30 | 0.28 | 0.34 | 0.30 | 0.32 | 0.27 | |
| Relation | Run 1 | 88.03 | 88.18 | 88.11 | 91.08 | 91.24 | 91.16 | 91.87 |
| Alignment | Run 2 | 87.90 | 88.36 | 88.13 | 90.71 | 91.18 | 90.95 | 91.87 |
| (1263) | Run 3 | 88.61 | 88.61 | 88.61 | 91.44 | 91.44 | 91.44 | 91.95 |
| Run 4 | 88.39 | 88.61 | 88.50 | 91.02 | 91.25 | 91.14 | 91.66 | |
| Run 5 | 88.59 | 88.44 | 88.52 | 91.24 | 91.08 | 91.16 | 91.86 | |
| Average | 88.30 | 88.44 | 88.37 | 91.10 | 91.24 | 91.17 | 91.84 | |
| Std | 0.32 | 0.18 | 0.28 | 0.27 | 0.13 | 0.17 | 0.05 | |
| Reentrancy | Run 1 | 56.90 | 57.09 | 57.00 | - | - | - | - |
| Alignment | Run 2 | 56.23 | 56.42 | 56.32 | - | - | - | - |
| (293) | Run 3 | 57.24 | 57.43 | 57.34 | - | - | - | - |
| Run 4 | 55.56 | 55.74 | 55.65 | - | - | - | - | |
| Run 5 | 55.22 | 55.41 | 55.31 | - | - | - | - | |
| Average | 56.23 | 56.42 | 56.32 | - | - | - | - | |
| Std | 0.86 | 0.86 | 0.86 | - | - | - | - | |
| Duplicate | Run 1 | 70.00 | 82.35 | 75.88 | 72.50 | 85.29 | 78.38 | - |
| Subgraph | Run 2 | 65.00 | 76.47 | 70.27 | 67.50 | 79.41 | 72.97 | - |
| Alignment | Run 3 | 70.00 | 82.35 | 75.68 | 70.00 | 82.35 | 75.68 | - |
| (17) | Run 4 | 73.68 | 82.35 | 77.78 | 76.32 | 85.29 | 80.56 | - |
| Run 5 | 70.00 | 82.35 | 75.68 | 70.00 | 82.35 | 75.68 | - | |
| Average | 69.74 | 81.17 | 75.06 | 71.26 | 82.94 | 76.65 | - | |
| Std | 3.09 | 2.63 | 2.82 | 3.33 | 2.46 | 2.90 | - | |
| Table 6: Results on the LEAMR alignment for 5 seeds on the guided approach. Column blocks: runs; measures. | | | | | | | | |
Table 6: Results on the LEAMR alignment for 5 seeds on the guided approach. Column blocks: runs; measures.
Row blocks: alignment types; average and standard deviation (std). Bold is best.
elements; however, average performance is always higher than previous approaches.
## C Alignment Extraction Algorithm
Algorithm 1 shows the procedure for extracting the alignment between spans in the sentence and the semantic units in the graphs, using a matrix that weights Encoder tokens with the Decoder tokens
## D Amr Parsing
Since our guided approach was trained with a different loss than the SPRING model, it could influence the performance in the Semantic Parsing task.
Therefore, we also tested our model in the AMR
parsing task using the test set of AMR 2.0 and AMR 3.0. Table 7 shows the result, where we can observe how our model preserves the performance on parsing.
| AMR 2.0 | AMR 3.0 | |
|-----------------------|-----------|------|
| SPRING | 84.3 | 83.0 |
| Ours - Guided - ISI | 84.3 | 83.0 |
| Ours - Guided - Leamr | 84.3 | 83.0 |
Table 7: AMR parsing Results.
## E Hardware
Experiments were performed using a single NVIDIA 3090 GPU with 64GB of RAM and Intel® Core™ i9-10900KF CPU.
Training the model took 13 hours, 30 min per training epoch while evaluating on the validation set took 20 min at the end of each epoch. We selected the best performing epoch based on the SMATCH metric on the validation set.
## F Data
The AMR data used in this paper is licensed under the *LDC User Agreement for Non-Members* for LDC subscribers, which can be found here. The The Little Prince Corpus can be found here from the Information Science Institute of the University of Southern California.
## G Limitations
Even though our method is an excellent alternative to the current AMR aligner system, which is standard and task-agnostic, we notice some drawbacks when moving to other autoregressive models or languages:
Model In this work, we studied how Cross Attention layers retain alignment information between input and output tokens in auto-regressive models. In Section 5.1, we examined which layers in state-of-the-art AMR parser models based on BART-large best preserve this information. Unfortunately, we cannot guarantee that these layers are optimal for other auto-regressive models, and so on. As a result, an examination of cross-attention across multiple models should be done before developing the cross-lingual application of this approach.
Sentence Segmentation It is necessary to apply LEAMR's Spam Segmentation technique to produce the alignment in LEAMR format (Section 3.4). However, this segmentation method has several flaws: i) As stated in Section 7, this approach does not deal appropriately with phrasal verbs and consecutive segments; ii) the algorithm is Englishspecific; it is dependent on English grammar rules that we are unable to project to other languages.
Therefore we cannot extract the LEAMR alignments in a cross-lingual AMR parsing because we lack a segmentation procedure. However, although LEAMR alignment has this constraint, ISI alignment does not require any initial sentence segmentation and may thus be utilized cross-lingually.
Algorithm 1 Procedure for extracting the alignment between spans in the sentence and the semantic units in the graphs, using a matrix that weights Encoder tokens with the Decoder tokens.
1: **function** EXTRACTALIGNMENTS(*encoderT okens, DecoderT okens, scoreM atrix*)
2: alignmentM ap ← *dict*()
3: *spansList* ← SPANS(*encoderT okens*) ▷ Extract sentence spans as in LEAMR 4: *spanP osM ap* ← TOK2SPAN(*encoderT okens*) ▷ Map input tokens to spans 5: *graphP osM ap* ← TOK2NODE(*DecoderT okens*) ▷ Map output tokens to graph unit 6: COMBINESUBWORDTOKENS(*scoreM atrix*)
7: for DecoderT okenP os, GraphUnit in *graphP osM ap* do 8: encoderT okensScores ← scoreM atrix[*DecoderT okenP os*] 9: *maxScoreP os* ← ARGMAX(*encoderT okensScores*)
10: alignmentM ap[*GraphUnit*] ← SELECTSPAN(*spansList, maxScoreP os*)
11: **end for**
12: *f ixedM atches* ← GETFIXEDMATCHES(*graphP osM ap*) ▷ Look for rule based matches 13: *alignmentM ap* ← APPLYFIXEDMATCHES(*alignmentM ap, f ixedM atches*) 14: *alignments* ← FORMATALIGNMENT(*alignmentM ap*)
15: **return** *alignments* 16: **end function**
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 9
✓ A2. Did you discuss any potential risks of your work?
Section 10
✓ A3. Do the abstract and introduction summarize the paper's main claims?
In the Abstract and Introduction
✓ A4. Have you used AI writing assistants when working on this paper?
Grammarly, we check the use of English of our paper
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4, 5, 6 And 7
✓ B1. Did you cite the creators of artifacts you used?
Section 4
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
In the Appendix
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
We believe these are self-explained by the licences discussed for each artifact.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We base our work on widely used datasets which already performed these steps.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 4 and Appendix
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 4 and Appendix.
## C ✓ **Did You Run Computational Experiments?** Section 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 5 and Appendix
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Table 5 in Appendix
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Discussed through the paper when needed.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
liu-etal-2023-zero | Zero-Shot Text Classification via Self-Supervised Tuning | https://aclanthology.org/2023.findings-acl.110 | Existing solutions to zero-shot text classification either conduct prompting with pre-trained language models, which is sensitive to the choices of templates, or rely on large-scale annotated data of relevant tasks for meta-tuning. In this work, we propose a new paradigm based on self-supervised learning to solve zero-shot text classification tasks by tuning the language models with unlabeled data, called self-supervised tuning. By exploring the inherent structure of free texts, we propose a new learning objective called first sentence prediction to bridge the gap between unlabeled data and text classification tasks. After tuning the model to learn to predict the first sentence in a paragraph based on the rest, the model is able to conduct zero-shot inference on unseen tasks such as topic classification and sentiment analysis. Experimental results show that our model outperforms the state-of-the-art baselines on 7 out of 10 tasks. Moreover, the analysis reveals that our model is less sensitive to the prompt design. Our code and pre-trained models are publicly available at \url{https://github.com/DAMO-NLP-SG/SSTuning}. | # Zero-Shot Text Classification Via Self-Supervised Tuning
Chaoqun Liu∗ 12 Wenxuan Zhang† 2 Guizhen Chen∗12 **Xiaobao Wu**1 Anh Tuan Luu1 Chip Hong Chang1 **Lidong Bing**2 1Nanyang Technological University, Singapore, 2DAMO Academy, Alibaba Group
{chaoqun.liu,guizhen.chen,saike.zwx,l.bing}@alibaba-inc.com
{xiaobao002,echchang,anhtuan.luu}@ntu.edu.sg
## Abstract
Existing solutions to zero-shot text classification either conduct prompting with pre-trained language models, which is sensitive to the choices of templates, or rely on large-scale annotated data of relevant tasks for meta-tuning.
In this work, we propose a new paradigm based on self-supervised learning to solve zeroshot text classification tasks by tuning the language models with unlabeled data, called selfsupervised tuning. By exploring the inherent structure of free texts, we propose a new learning objective called first sentence prediction to bridge the gap between unlabeled data and text classification tasks. After tuning the model to learn to predict the first sentence in a paragraph based on the rest, the model is able to conduct zero-shot inference on unseen tasks such as topic classification and sentiment analysis. Experimental results show that our model outperforms the state-of-the-art baselines on 7 out of 10 tasks. Moreover, the analysis reveals that our model is less sensitive to the prompt design. Our code and pretrained models are publicly available at https:
//github.com/DAMO-NLP-SG/SSTuning.
## 1 Introduction
Recent advances in pre-trained language models
(PLMs) have brought enormous performance improvements in a large variety of NLP tasks (Radford and Narasimhan, 2018; Devlin et al., 2019).
These paradigm shifts towards leveraging generic features learnt by PLMs are driven by the high data cost required for learning each new NLP task afresh. One promising learning method that echoes this paradigm shift is zero-shot text classification, which predicts text labels on unseen tasks. Zeroshot text classification has attracted considerable research attention in recent years (Wei et al., 2022;
∗Chaoqun Liu and Guizhen Chen are under the Joint PhD Program between Alibaba and Nanyang Technological University.
†Wenxuan Zhang is the corresponding author.
![0_image_0.png](0_image_0.png)
Figure 1: Zero-shot learning approaches: (a) prompting,
(b) meta-tuning, and (c) our proposed self-supervised tuning method.
Sanh et al., 2022; Yang et al., 2022), as labeled data is no longer a necessity for relearning new feature representations for untrained specific tasks.
Existing studies on zero-shot text classification can be briefly classified into two types, as shown in Figure 1. The first type is prompting, which uses PLMs to predict labels with designed templates and verbalizers (Figure 1 (a)). This can be achieved by leveraging the generation capability of large language models (Brown et al., 2020; Chowdhery et al., 2022), or reformulating text classification task as a mask-filling task (Schick and Schütze, 2021; Schick and Schütze, 2021). Likewise, generation-based methods (Meng et al., 2022; Ye et al., 2022) and mining-based methods (van de Kar et al., 2022) also rely on prompting to generate or filter noisy labeled samples, which are used for further fine-tuning. The second type is meta-tuning which fine-tunes a PLM on a collection of labeled data of related tasks before conducting inference on unseen tasks (Figure 1 (b)). By reformulating the annotated data into instruction templates (Wei et al., 2022; Sanh et al., 2022), question-answer pairs (Khashabi et al., 2020; Zhong et al., 2021),
multiple-choice questions (Yang et al., 2022) or entailment pairs (Yin et al., 2019; Ding et al., 2022; Du et al., 2023), and fine-tuning on them, PLMs perform well on unseen tasks.
Despite the achieved performance, existing methods have several limitations. Prompting has shown to be sensitive to the choice of patterns and verbalizers (van de Kar et al., 2022). This makes it difficult to design different templates specifically for each task. In addition, generation-based and mining-based methods require fine-tuning PLMs for each downstream task, which is inefficient for deployment. On the other hand, meta-tuning relies on labeled data of relevant tasks or in specific formats to facilitate the learning of desired patterns.
The requirement for such large-scale annotated data narrows its application scope.
To address the above issues, we propose to leverage self-supervised learning (SSL) for zero-shot text classification tasks. SSL has been widely used during the pre-training stage of PLMs to alleviate the need for large-scale human annotations (Devlin et al., 2019; Lan et al., 2020) by exploiting the intrinsic structure of free texts. Therefore, with a suitable SSL objective, the model is able to capture certain patterns with the auto-constructed training data and can be applied to a wide range of downstream tasks in a zero-shot manner without specific designs. To our best knowledge, this is the first work to exploit SSL at the tuning stage for zero-shot classification, which we refer to as selfsupervised tuning (SSTuning).
The biggest challenge of applying SSTuning to zero-shot text classification tasks is to design a proper learning objective that can effectively construct large-scale training samples without manual annotations. Intuitively, the core of the text classification task can be treated as associating the most suitable label to the text, given all possible options.
Motivated by this observation, we propose a new learning objective named first sentence prediction
(FSP) for the SSTuning framework to capture such patterns. In general, the first sentence tends to summarize the main idea of a paragraph. Therefore, predicting the first sentence with the rest of the paragraph encourages the model to learn the matching relation between a text and its main idea
("label"). To generate training samples, we use the first sentence in the paragraph as the positive option and the rest as text. The first sentences in other paragraphs are used as negative options. Specifically, if negative options are from the same article as the positive option, they are regarded as hard negatives since the sentences in the same article normally have some similarities, such as describing the same topic. Hard negatives may force the model to learn the semantics of the text instead of simply matching the keywords to complete the task.
In the inference phase, we convert all possible labels of a sample into options, which can be done in two simple ways: 1) use original label names; 2) convert labels using the templates (like "This text is about [label name]"). Then the text and options are combined to create the final input. The tuned model can thus retrieve the most relevant option as the predicted label of the text. Since the tuned model has seen a large number of samples and various first sentences as options, which has a higher chance to consist of similar options to the ones at the inference phase, its performance is less sensitive to verbalizer design. In this way, our SSTuning enables efficient deployment of PLM
for classifying texts of unseen classes on-the-fly without requiring further tuning with labeled data or unlabeled in-domain data.
Our main contributions are:
- We propose a new learning paradigm called self-supervised tuning (SSTuning) to solve zero-shot text classification tasks. A simple yet effective learning objective named first sentence prediction is designed to bridge the gap between unlabeled data and text classification tasks.
- We conduct extensive experiments on 10 zeroshot text classification datasets. The results show that SSTuning outperforms all previous methods on overall accuracy in both topic classification tasks and sentiment analysis tasks.
Our analysis further demonstrates that our model is less sensitive to prompt design.
## 2 Proposed Method
In this section, we discuss our proposed framework, SSTuning, and provide details for our dataset preparation process using the idea of first sentence prediction (FSP), the tuning phase, and the zero-shot inference phase.
## 2.1 First Sentence Prediction
Text classification can be regarded as selecting the most relevant label for the text, given all possible labels. Based on such observation, we propose the
![2_image_0.png](2_image_0.png)
FSP task to create datasets for our SSTuning by mimicking the same structure.
We design the FSP task by considering both the nature of the unlabeled corpus and the input/output format of classification tasks. In this subsection, we describe in detail how to construct the tuning and validation sets from the unlabeled corpus. Figure 2 shows the core procedures for our dataset generation.
Data filtering. We first filter data to select appropriate paragraphs for tuning (more details are shown in A.1). Removing meaningless sentences ensures data quality, which helps improve the performance of the model.
First sentence as the positive option. We consider an article An that contains M paragraphs, i.e.,
An = [P
n 1
, P n 2
, ...P nM], and suppose paragraph P
nm has K sentences [S
n,m 1, Sn,m 2, ..., Sn,m K ], the positive option O
n,m c and the text x n,m are:
$$\begin{array}{l}{{O_{c}^{n,m}=S_{1}^{n,m}}}\\ {{x^{n,m}=[S_{2}^{n,m},...,S_{K}^{n,m}]}}\end{array}\tag{1}$$
$\pi$
As shown in Figure 2, we can retrieve the first sentence *"Jim Berryman (born February 17, 1947)*
is a ... " as the positive option and the rest of the paragraph *"He is the former mayor of Adrian ..."*
as the text for the first paragraph in the article.
Negative sampling. After getting the positive option, we randomly sample J "first sentences" from other paragraphs [S
n1,m1 1, Sn2,m2 1*, ...S*nJ ,mJ
1]
as negative options, where J is a random number that satisfies 1 ≤ J ≤ NmaxLabel − 1. We let NmaxLabel denote the maximum number of labels that are first sentences, which is pre-defined to ensure the total number of tokens for options is not too long. It is less or equal to Nmodel, where Nmodel is the number of labels for the model output layer. Having a random number of negative options bridges the gap between tuning and zero-shot inference since the number of classes for evaluation datasets may vary from 2 to Nmodel.
Hard negatives. During negative sampling, if the negative options and the positive option are from the same article, we call the options hard negatives.
Inspired by the successful application of hard negatives in Gao et al. (2021b), we purposely add more hard negatives to enhance the model performance.
Sometimes, when we read articles, we notice that the same words appear in the first sentence and the rest of the paragraph. As shown in Figure 2, we can use the word *"Berryman"* to quickly find the corresponding first sentence for the text. However, if we add the hard negative *"On January 6, 2012,*
Berryman ...", the model has to understand the true semantics to choose the positive option.
Option padding. We pad the options with the special "[PAD]" token to make the input format consistent between the tuning phase and the inference phase. Specifically, if the total number of options after negative sampling is (J + 1) < Nmodel, we will add (Nmodel − J − 1) [PAD] options. Thus 1745 the final list of options is:
$$\begin{array}{c}{{O^{n,m}=\left[S_{1}^{n,m},S_{1}^{n_{1},m_{1}},S_{1}^{n_{2},m_{2}},...S_{1}^{n_{J},m_{J}},}}\\ {{O_{\mathsf{PAD}}^{1},O_{\mathsf{PAD}}^{2},...O_{\mathsf{PAD}}^{N_{\mathrm{model}}-J-1}\right]}}\end{array}\tag{3}$$
Generating final text and label. We shuffle the option list because the position of a positive option is random in the evaluation datasets. After shuffling, we assume the option list is:
$$O_{\rm shuffle}^{n,m}=[O_{0},O_{1},...O_{N_{\rm model}}-1],\tag{4}$$
where the positive option O
n,m c = Oj . Then the label for this sample is:
$\mathbf{a}\cdot\mathbf{a}\cdot\mathbf{m}=\mathbf{a}\cdot\mathbf{a}\cdot\mathbf{a}$.
$$L^{n,m}=j.$$
n,m = j. (5)
The final input text is the concatenation of the above components:
$$x_{inp}^{n,m}=[\text{CLS}]\{(T_{i})\;O_{i}\}_{i=0}^{N_{\text{model}}-1}[\text{SEP}]x^{n,m}[\text{SEP}]\tag{6}$$
where Tiis the i-th item from the index indicator list T (e.g. [*A, B, C...*]), [CLS] is the classification token, and [SEP] is the seperator token used by Devlin et al. (2019).
Thus the final text-label pair (x n,m inp , L
n,m) is the generated sample. We can repeat this process to generate a large number of samples as the tuning set. The validation set can also be generated in the same way. Note that if we select a corpus that only contains paragraphs instead of articles, we can treat each paragraph as an article, and no hard negatives are generated.
## 2.2 Tuning Phase 2.2.1 Network Architecture
We employ BERT-like pre-trained masked language models (PMLM) as the backbone, such as RoBERTa (Liu et al., 2019) and ALBERT (Lan et al., 2020). Following Devlin et al. (2019), we add an output layer for classification. Such models have both bidirectional encoding capabilities and simplicity. Generative models are not necessary since we only need to predict the index of the correct option. We do not make any changes to the backbone so that the method can be easily adapted to different backbones. In order to cover all test datasets, we config the number of labels for the output layer as the maximum number of classes for all test datasets, denoted by Nmodel.
2.2.2 Learning Objective Traditional text classification with PMLMs like BERT maps each classification layer output to a class. Such a design requires a dedicated output layer for each dataset as they have different classes.
Instead, our learning object for FSP with the same network is to predict the index of the positive option. In this way, we can use the output layer for both tuning and inference and for various kinds of datasets.
As shown in Figure 2, we concatenate the labels and the text as input. The outputs are the indices
(0, 1, 2..., which correspond to A, B, C), which are the same as traditional classification datasets. We use a cross-entropy loss for tuning the model.
$\eqref{eq:walpha}$
## 2.3 Zero-Shot Inference Phase
During the zero-shot inference phase, we can infer directly by converting the input of the sample to the same format as that in the tuning phase.
## 2.3.1 Input Formulation
As shown in Figure 2, the zero-shot inputs are formulated similarly as the tuning phase, except 1)
instead of using first sentences as options, we convert the class names to options. Actually, we can simply use the original labels or some simple templates like "This text is about [label name]." for the conversion, thus little to no effort is needed. 2)
No shuffling is needed. Since the converted input and output during SSTuning and zero-shot phases are the same, no further adjustment of the model is required.
## 2.3.2 Constrained Prediction
Since the dimension of the output logits (N*model*)
may be different from the number of classes in a dataset (NL), the predictions may be out of range
(e.g. the model may output 3 for a dataset with 2 classes). To solve this issue, we simply make predictions based on the first NL logits:
$$\mathbf{s}[0:N_{L}])$$
$$(7)$$
$\hdots$
P = argmax(logits[0 : NL]) (7)
where P is the index for the positive option.
## 3 Experiment Setup 3.1 Sstuning Datasets
We choose English Wikipedia and Amazon review dataset (2018) (Ni et al., 2019) for SSTuning. The Wikipedia corpus has more than 6.2M articles1 by 1https://en.wikipedia.org/wiki/Wikipedia:
Size_of_Wikipedia the end of 2021, while Amazon Review Data has around 233.1M reviews2. Wikipedia articles typically use formal expressions and Amazon reviews contain informal user-written texts, together covering different genres of text.
For English Wikipedia, we collect articles up to March 1st, 2022. To balance the dataset, we select up to 5 paragraphs in each article. The generated dataset has 13.5M samples. For the Amazon review dataset, we only use the review text to create our SSTuning dataset, ignoring other information such as summary and vote. The Amazon review dataset has 29 categories. To keep the model from being dominated by a certain category, we select up to 500k samples from each category. In the end, we collected 11.9M samples.
To have a balanced dataset, we sample 2.56M
from the Wikipedia dataset and 2.56M from the Amazon review dataset, forming a total of 5.12M
samples as the tuning dataset. In addition, we sampled 32k from each of the two datasets, forming a validation set consisting of 64k samples.
## 3.2 Evaluation Datasets
We evaluate the models on 4 topic classification
(TC) tasks, including Yahoo Topics (yah) (Zhang et al., 2015), AG News (agn) (Zhang et al., 2015),
DBPedia (dbp) (Zhang et al., 2015) and 20newsgroup (20n) (Lang, 1995), and 6 sentiment analysis
(SA) tasks, including SST-2 (sst2) (Socher et al.,
2013), IMDb (imd) (Maas et al., 2011), Yelp (ylp)
(Zhang et al., 2015), MR (mr) (Pang and Lee, 2005)
and Amazon (amz) (Zhang et al., 2015), which are binary classification tasks, and SST-5 (sst5)
(Socher et al., 2013), a fine-grained 5-class SA task.
Detailed data statistics for each testing dataset are presented in Table 6 in Appendix A.
Following the baselines (Yang et al., 2022; van de Kar et al., 2022; Gera et al., 2022), we report the accuracy on the test set when available, falling back to the original validation set for SST-2.
## 3.3 Baselines
We choose the following baselines for comparison after considering their relevancy, impact, checkpoint availability, and model sizes:
- Textual entailment (TE) (Yin et al., 2019): Following Gera et al. (2022), we download the off-the-shelf models trained on MNLI and use 2https://nijianmo.github.io/amazon/
the default hypothesis template *"This example* is []." for evaluation.
- TE-Wiki (Ding et al., 2022): This model is also trained with entailment methods but with a dataset constructed from Wikipedia.
- Prompting-based method (Schick and Schütze, 2021): We compare with the results using multiple verbalizers reported in (van de Kar et al., 2022).
- Mining-based (van de Kar et al., 2022): The method has three steps, which are mine, *filter* and *fine-tune*. We compare with the results reported.
- UniMC (Yang et al., 2022): We download the released checkpoint and test the model without question prompts since the reported results on text classification tasks are better on average.
We followed the setups and verbalizers of the original works as much as possible. If the original work does not have verbalizers for a dataset, we will use the same or comparable verbalizers as ours, as shown in Table 7.
## 3.4 Implementation Details
To test the performance of the proposed method on different model sizes and architectures, we tune three versions of models, which are based on RoBERTabase, RoBERTalarge (Liu et al., 2019), and ALBERTxxlarge (V2) (Lan et al., 2020), denoted as SSTuning-base, SSTuning-large, SSTuningALBERT, respectively. We set the maximum token length as 512 and only run one epoch. We repeat all the experiments 5 times with different seeds by default. The experiments on SSTuning-base and SSTuning-large are run on 8 NVIDIA V100 GPUs and the experiments on SSTuning-ALBERT are run on 4 NVIDIA A100 GPUs.
The hyperparameters for fine-tuning and SSTuning are shown in Table 8. We set the batch size based on the constraint of the hardware and do a simple hyperparameter search for the learning rate.
We do not add hard negatives for the Amazon review dataset since the reviews are not in the format of articles. We also tried to use the negative options from the same product category as hard negatives but did not find any meaningful improvement. We set Nmodel as 20 and NmaxLabel as 10 after simple experiment.
## 4 Results And Analysis 4.1 Main Results
The main results are shown in Table 1. We have the following observations: 1) Our method SSTuningALBERT achieves new state-of-the-art results on 7 out of 10 datasets, and significantly reduces the gap between fine-tuning and zero-shot methods compared to UniMC (from 10.6 to 7.2) , showing the superiority of our proposed method. 2) With the same backbone, SSTuning-ALBERT outperforms UniMC by 3.4% on average. Note that different from UniMC, we do not utilize any labeled data to conduct meta-tuning, but purely rely on autoconstructed data for self-supervised tuning, which not only has a much large scale of data but also has more abundant options (first sentences). 3) Comparing methods based on RoBERTabase, RoBERTalarge and BARTlarge, our SSTuning-large and SSTuningbase are the two best-performing models on average. We also observe that SSTuning-large outperforms UniMC, despite the latter possessing a stronger backbone. 4) Our models do not perform very well on SST-5, which is a fine-grained sentiment analysis task. Maybe we can generate more fine-grained options from the unlabeled corpus to improve performance on such tasks. We leave it as a future work.
## 4.2 Ablation Study 4.2.1 Ablation On Tuning Datasets
We utilize both the Amazon review dataset and English Wikipedia during the tuning stage. To evaluate their effectiveness, we conduct ablation studies to create two model variants that are only trained on one dataset. We set the number of samples for each case to 5.12M for a fair comparison. As shown in Table 2, both datasets contribute to the final performance, thus discarding any one leads to a performance drop. It is interesting that tuning with Amazon review data performs the same as tuning with Wikipedia on topic classification tasks. This is unexpected since Wikipedia is more related to topic classification tasks intuitively. We anticipate the reason is that the backbone models have already been pre-trained with Wikipedia, thus further tuning with it does not bring significant advantages.
## 4.2.2 Alternative Tuning Objectives
We have proposed first sentence prediction (FSP)
as the tuning objective to equip the model learning to associate the label and text in the inference stage. We consider some alternative objectives here for comparison: 1) last sentence prediction (LSP),
which treats the last sentence as the positive option for the rest of the paragraph; 2) next sentence selection (NSS)3, which treats the first sentence in a consecutive sentence pair as text and the next as the positive option; 3) random sentence prediction (RSP), which randomly pick a sentence in a paragraph as the positive option and treat the rest as text. The comparison between the four settings is shown in Table 3. We find that FSP performs the best, especially for topic classification tasks.
Among the alternatives, utilizing LSP as the tuning objective leads to the best performance, which is expected since the last sentence in a paragraph usually also contains the central idea, sharing a similar function as the first sentence. Unlike topic classification tasks, the four settings perform similarly on sentiment analysis tasks. The possible reason is that each sentence in a paragraph shares the same sentiment.
## 4.3 Analysis 4.3.1 Impact Of Verbalizer Designs
During self-supervised tuning, the model saw a large number of first sentences as options, which may contain similar options to the unseen tasks, thus it may have better generalization capabilities.
To test how robust the model is to the verbalizer changes compared with UniMC, we design 10 sets of verbalizers for SST-2 and IMDb, covering various scenarios: 1) verbalizers with a single word; 2) verbalizers with different punctuation marks; 3)
combinations of single verbalizers; 4) different format for different classes. For a fair comparison, we only use one of our checkpoints and compare it with the UniMC checkpoint released. The results are shown in Table 4. We find that SSTuningALBERT performs better on average and is more stable. For the most challenging case, which is
"Terrible!" and *"I like the movie! It is wonderful!"*, SSTuning-ALBERT outperforms UniMC by 20.4 points for SST-2 and 17 points for IMDb.
## 4.3.2 Classification Mechanism
To investigate how our models make correct decisions, we did a case study on a movie review example. As shown in Figure 3, we used SSTuningbase (number of labels configured as 2) to classify 3Note that we use NSS here to distinguish from NSP (next sentence prediction) used by Devlin et al. (2019).
| Backbone | Labeled | Topic Classification | Sentiment Analysis | Avg | | | | | | | | | |
|-----------------|---------------|------------------------|----------------------|-------|------|------|------|------|------|------|------|------|------|
| yah | agn | dbp | 20n | sst2 | imd | ylp | mr | amz | sst5 | | | | |
| Fine-tuning❖ | RoBERTalarge | - | 77.1 | 95.5 | 99.2 | 75.3 | 95.9 | 96.4 | 98.3 | 91.3 | 97.2 | 59.9 | 88.6 |
| TE-Wiki | BERTbase | ✓ | 56.5 | 79.4 | 90.4 | 53.9 | 57.3 | 62.0 | 58.5 | 56.2 | 55.8 | 24.5 | 59.5 |
| TE-MNLI | RoBERTalarge | ✓ | 28.6 | 77.6 | 60.4 | 40.2 | 89.6 | 90.2 | 92.8 | 82.8 | 92.0 | 48.8 | 70.3 |
| TE-MNLI | BARTlarge | ✓ | 48.2 | 74.8 | 57.1 | 35.4 | 89.0 | 91.1 | 93.1 | 81.4 | 91.9 | 47.7 | 71.0 |
| Prompting* | RoBERTabase | - | 34.1 | 54.6 | 51.1 | - | 81.9 | 81.8 | 83.1 | 78.3 | 83.5 | - | - |
| Mining-based* | RoBERTabase | ✗ | 56.1 | 79.2 | 80.4 | - | 85.6 | 86.7 | 92.0 | 80.5 | 92.0 | - | - |
| UniMC* | ALBERTxxlarge | ✓ | - | 81.3 | 88.9 | - | 91.6 | 94.8 | - | - | - | - | - |
| UniMC (Rerun) | ALBERTxxlarge | ✓ | 59.0 | 84.3 | 89.2 | 43.7 | 90.1 | 93.6 | 94.3 | 87.3 | 93 | 45.6 | 78.0 |
| SSTuning-base | RoBERTabase | ✗ | 59.1 | 79.9 | 82.7 | 47.2 | 86.4 | 88.2 | 92.9 | 83.8 | 94.0 | 45.0 | 75.9 |
| SSTuning-large | RoBERTalarge | ✗ | 62.4 | 83.7 | 85.6 | 56.7 | 90.1 | 93.0 | 95.2 | 87.4 | 95.2 | 46.9 | 79.6 |
| SSTuning-ALBERT | ALBERTxxlarge | ✗ | 63.5 | 85.5 | 92.4 | 62.0 | 90.8 | 93.4 | 95.8 | 89.5 | 95.6 | 45.2 | 81.4 |
| TC | SA | All | |
|--------------------|------|-------|------|
| Amazon | 63.4 | 81.4 | 74.2 |
| Wikipedia | 63.4 | 77.9 | 72.1 |
| Amazon + Wikipedia | 67.2 | 81.7 | 75.9 |
Table 2: Zero-shot results with different tuning datasets.
The best result is in **Bold**.
| TC | SA | All | |
|----------------------------|------|-------|------|
| First sentence prediction | 67.2 | 81.7 | 75.9 |
| Last sentence prediction | 59.8 | 82.2 | 73.3 |
| Next sentence selection | 54.8 | 81.9 | 71.1 |
| Random sentence prediction | 56.8 | 80.8 | 71.2 |
whether the movie review "A wonderful movie!"
is negative or positive. We set the verbalizers as
"Bad." and "It's good." to see how the length of options impacts the decision. The prediction of the model is 1, which is correct. We find that [CLS] token attends more to the second opinion, especially to the tokens around the index indicator "B" in the last layer. This is consistent with our intuitions.
For humans, when we do classification tasks, we normally compare the options and select the option that best matches the text. We show additional attention maps and analysis in Appendix B.2.3.
## 4.3.3 Importance Of Index Indicators
To further understand how the index indicator guides the model to make the prediction, we employ different indicator designs during the tuning
![6_image_0.png](6_image_0.png)
and inference stage. Specifically, we consider different formats of the index indicator, which are: 1)
alphabet characters (A, B, C...), which is the default format; 2) numerical index (0, 1, 2...); 3) same index indicator for all options (0, 0, 0...). During the inference, we also consider two special indicators: 4) same alphabet characters (A, A, A...), and 5) rearranged alphabet characters (B, A, D, C...).
The results are shown in Table 5. There is not much difference between using alphabet characters and numerical indexes, as shown in cases 1 and 2. As shown in case 3, using the same characters will de-
| Verbalizer for "negative" | Verbalizer for "positive" | UniMC(w/o Qn) | SSTuning-ALBERT | | |
|-----------------------------|------------------------------------|-----------------|-------------------|------|------|
| SST-2 | IMDb | SST-2 | IMDb | | |
| Bad. | Good. | 87.0 | 91.9 | 90.7 | 93.9 |
| Terrible. | Great. | 88.5 | 91.7 | 91.4 | 94.3 |
| Negative. | Positive. | 86.0 | 90.3 | 92.2 | 92.6 |
| Negative! | Positive! | 88.9 | 90.2 | 92.1 | 92.4 |
| Terrible! | Awesome! | 88.4 | 91.1 | 90.9 | 94.0 |
| Bad, terrible and negative. | Good, great, and positive. | 80.7 | 87.5 | 87.3 | 90.8 |
| I don't like the movie! | I like the movie! | 91.5 | 92.9 | 89.8 | 90.3 |
| Terrible! | I like the movie! It is wonderful! | 66.4 | 75.1 | 86.8 | 92.1 |
| It's terrible. | It's great. | 91.6 | 93.0 | 90.6 | 94.1 |
| It's negative. | It's positive. | 85.6 | 89.9 | 89.2 | 91.3 |
| Average | 85.5 | 89.4 | 90.1 | 92.6 | |
| Standard Deviation | 7.4 | 5.3 | 1.9 | 1.5 | |
Table 4: Comparison of zero-shot results for 2 sentiment analysis tasks with different verbalizers. The best average results are in **bold**.
| Tuning | Inference | Avg | Std | |
|----------|--------------|-----------------|-------|------|
| 1 | (A, B, C...) | (A, B, C...) | 75.9 | 0.3 |
| 2 | (0, 1, 2...) | (0, 1, 2...) | 75.6 | 0.4 |
| 3 | (0, 0, 0...) | (0, 0, 0...) | 74.1 | 0.6 |
| 4 | (A, B, C...) | (A, A, A...) | 32.0 | 1.1 |
| 5 | (A, B, C...) | (B, A, D, C...) | 23.4 | 12.1 |
grade the performance but not much, which means the model can rely on position embedding of the index indicator to make the correct predictions. As shown in cases 4 and 5, using inconsistent index indicators will greatly degrade the performance, which further verifies the importance of using consistent index indicators to make correct predictions.
## 4.3.4 Impact Of Hard Negative Samples
Intuitively, adding more hard negatives will make the task more difficult, thus forcing the mode to better understand the semantics of the sentences.
We tested the impact of hard negatives based on two settings: 1) train with both the Amazon reviews and Wikipedia, each with 2.56M samples; 2) train with only 2.56M Wikipedia samples. We don't train with only Amazon reviews since they don't have hard negatives. The results with 0, 1, 3, 5, 7, 9 hard negatives are shown in Figure 4.
In general, adding more hard negatives will improve the performance. For the case with both datasets, the impact of hard negatives is small. This is because the Amazon review dataset alone can achieve good performance, as shown in Table 2.
![7_image_0.png](7_image_0.png)
However, hard negatives have a significant impact on the setting with only Wikipedia for tuning. The possible reason is that without hard negatives the model may only learn keyword matching instead of semantics since the keywords may appear many times in the same Wikipedia article.
## 4.3.5 Additional Analysis
We report additional analysis in Appendix B.2. As shown in Figure 5, we can further improve the performance by increasing the tuning sample size.
We also compared SSTuning-base with different numbers of output labels Nmodel. As shown in Appendix B.2.2, we can increase Nmodel to inference on datasets with more classes.
## 5 Related Work
Zero-shot text classification. Zero-shot learning has the advantage that no annotated data is required for downstream tasks. Prompting-based methods
(Brown et al., 2020; Chowdhery et al., 2022; Schick and Schütze, 2021; Gao et al., 2021a) that reformulate the inputs as prompts can perform much worse in the zero-shot setting than few-shot settings as it may be hard for the PLMs to interpret the templates. A better option may be miningbased method (van de Kar et al., 2022), which mines the labeled data from the unlabeled corpus for fine-tuning each downstream task. Similarly, generation-based approaches (Meng et al., 2022; Ye et al., 2022) generate labeled data with a generative PLM.
More works on zero-shot text classifications are based on transfer learning. Instruction-tuningbased models like FLAN (Wei et al., 2022) and T0
(Sanh et al., 2022), fine-tine PLMs on a collection of datasets described by instructions or prompts to improve performances on unseen tasks. PLMs can also be meta-tuned (Zhong et al., 2021) on text classification datasets and do zero-shot on other classification datasets. UniMC (Yang et al., 2022)
converts several tasks to multiple-choice tasks and does zero-shot inference on tasks that can be formulated in the same format. Another line of work is to convert text classification problems to textual entailment problems. By fine-tuning on natural language inference datasets (Yin et al., 2019) or a dataset from Wikipedia (Ding et al., 2022), the models can do inference directly on text classification datasets. Instead of using annotated datasets, we only need unlabeled data to generate a large number of labeled samples as tuning and validation sets by exploring the inherent text structure.
Self-supervised learning. Self-supervised learning has been widely applied during language model pre-training by leveraging the input data itself as supervision signals (Liu et al., 2021). Left-toright language modeling (Radford and Narasimhan, 2018) and masked language modeling (Devlin et al., 2019; Liu et al., 2019; Lan et al., 2020) help learn good sentence representations. In order to capture the sentence-level relations of downstream tasks, Devlin et al. (2019) pre-train a next sentence prediction task and Lan et al. (2020) use sentence order prediction task to model the inter-sentence coherence. Wang et al. (2020) combine the two objectives to form a three-way classification task. Instead of modeling the inter-sentence relations, Meng et al. (2021) employs sequence contrastive learning to align the corrupted text sequences that originate from the same input source and guarantee the uniformity of the representation space. Our work uses a harder learning objective called first sentence prediction: given several options and text, find the corresponding first sentence preceding the text.
## 6 Conclusions
In this work, we propose a new learning paradigm called SSTuning for zero-shot text classification tasks. By forcing the model to predict the first sentence of a paragraph given the rest, the model learns to associate the text with its label for text classification tasks. Experimental results show that our proposed method outperforms state-of-the-art baselines on 7 out of 10 tasks and the performance is more stable with different verbalizer designs. Our work proves that applying self-supervised learning is a promising direction for zero-shot learning. In the future, we plan to apply SSTuing to other tasks by designing proper learning objectives.
## Limitations
In this work, we proposed SSTuning for zero-shot text classification tasks. During inference, we may need to design verbalizers even though we can use templates like "This text is about [label name]". For simplicity and fair comparison, we only refer to previous works for such designs, which may be sub-optimal. As shown in Table 4, using the verbalizers "Terrible." and "Great." work better than
"It's terrible." and "It's great." for the SST-2 and IMDA tasks that we reported in the main results.
If the labeled validation set is provided, the model may perform better by choosing verbalizers based on the validation set.
Due to limited computation resources, we only tuned the model with 5.12 million samples, which is only a small portion of the available samples. We believe that tuning the model on a larger dataset help improve the performance. Even though the computational cost will also increase, it is worth it since no more training is needed at the inference phase. In addition, we did not do extensive hyperparameter searches except for the learning rate, which may further improve the performance.
In our experiment, we only tested the method with discriminative models like RoBERTa and ALBERT. Its performance with generative models is not known. It is non-trivial to test on such models since generative models can do both natural language understanding tasks and natural language generation tasks. We leave this as future work.
## Acknowledgements
This research is supported, in part, by Alibaba Group through Alibaba Innovative Research (AIR)
Program and Alibaba-NTU Singapore Joint Research Institute (JRI), Nanyang Technological University, Singapore. Chaoqun Liu and Guizhen Chen extend their gratitude to Interdisciplinary Graduate Programme and School of Computer Science and Engineering, Nanyang Technological University, Singapore, for their support. This research is also supported by the Ministry of Education Tier 1 grant
(MOE Tier 1 RS21/20).
## References
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.
2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems 33:*
Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways. *CoRR*, abs/2204.02311.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for
Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA,
June 2-7, 2019, Volume 1 (Long and Short Papers),
pages 4171–4186. Association for Computational Linguistics.
Hantian Ding, Jinrui Yang, Yuqian Deng, Hongming Zhang, and Dan Roth. 2022. Towards open-domain topic classification. *Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: System Demonstrations*.
Jiangshu Du, Wenpeng Yin, Congying Xia, and Philip S.
Yu. 2023. Learning to select from multiple options.
In *Proceedings of the 2023 AAAI*.
Tianyu Gao, Adam Fisch, and Danqi Chen. 2021a.
Making pre-trained language models better few-shot learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1:
Long Papers), Virtual Event, August 1-6, 2021, pages 3816–3830. Association for Computational Linguistics.
Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021b.
Simcse: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 6894–
6910. Association for Computational Linguistics.
Ariel Gera, Alon Halfon, Eyal Shnarch, Yotam Perlitz, Liat Ein-Dor, and Noam Slonim. 2022. Zero-shot text classification with self-training. In *Proceedings* of the 2022 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. Unifiedqa: Crossing format boundaries with a single QA system. In *Findings of the* Association for Computational Linguistics: EMNLP
2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of *Findings of ACL*, pages 1896–1907.
Association for Computational Linguistics.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut.
2020. ALBERT: A lite BERT for self-supervised learning of language representations. In *8th International Conference on Learning Representations,*
ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Ken Lang. 1995. Newsweeder: Learning to filter netnews. In *Machine Learning, Proceedings of the* Twelfth International Conference on Machine Learning, Tahoe City, California, USA, July 9-12, 1995, pages 331–339. Morgan Kaufmann.
Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil, Julien Chaumond, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario Sasko, Gunjan Chhablani, Bhavitvya Malik, Simon Brandeis, Teven Le Scao, Victor Sanh, Canwen Xu, Nicolas Patry, Angelina McMillan-Major, Philipp Schmid, Sylvain Gugger, Clément Delangue, Théo Matussière, Lysandre Debut, Stas Bekman, Pierric Cistac, Thibault Goehringer, Victor Mustar, François Lagunas, Alexander M. Rush, and Thomas Wolf. 2021.
Datasets: A community library for natural language processing. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2021, Online and Punta Cana, Dominican Republic, 7-11 November, 2021, pages 175–184. Association for Computational Linguistics.
Xiao Liu, Fanjin Zhang, Zhenyu Hou, Li Mian, Zhaoyu Wang, Jing Zhang, and Jie Tang. 2021. Selfsupervised learning: Generative or contrastive. *IEEE*
Transactions on Knowledge and Data Engineering.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692.
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011.
Learning word vectors for sentiment analysis. In The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference, 19-24 June, 2011, Portland, Oregon, USA, pages 142–150. Association for Computational Linguistics.
Yu Meng, Jiaxin Huang, Yu Zhang, and Jiawei Han.
2022. Generating training data with language models: Towards zero-shot language understanding. In Advances in Neural Information Processing Systems.
Yu Meng, Chenyan Xiong, Payal Bajaj, Saurabh Tiwary, Paul Bennett, Jiawei Han, and Xia Song.
2021. COCO-LM: correcting and contrasting text sequences for language model pretraining. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 23102–23114.
Jianmo Ni, Jiacheng Li, and Julian McAuley. 2019. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In *Proceedings of* the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing (EMNLPIJCNLP), pages 188–197.
Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In *ACL 2005, 43rd Annual*
Meeting of the Association for Computational Linguistics, Proceedings of the Conference, 25-30 June 2005, University of Michigan, USA, pages 115–124.
The Association for Computer Linguistics.
Alec Radford and Karthik Narasimhan. 2018. Improving language understanding by generative pretraining.
Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal V. Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Févry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M. Rush. 2022. Multitask prompted training enables zero-shot task generalization. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net.
Timo Schick and Hinrich Schütze. 2021. Exploiting cloze-questions for few-shot text classification and natural language inference. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021, pages 255–269. Association for Computational Linguistics.
Timo Schick and Hinrich Schütze. 2021. It's not just size that matters: Small language models are also fewshot learners. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Online. Association for Computational Linguistics.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In *Proceedings of the 2013 Conference on* Empirical Methods in Natural Language Processing, EMNLP 2013, 18-21 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1631–1642. ACL.
Mozes van de Kar, Mengzhou Xia, Danqi Chen, and Mikel Artetxe. 2022. Don't prompt, search! miningbased zero-shot learning with language models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
Jesse Vig. 2019. A multiscale visualization of attention in the transformer model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 37–42,
Florence, Italy. Association for Computational Linguistics.
Wei Wang, Bin Bi, Ming Yan, Chen Wu, Jiangnan Xia, Zuyi Bao, Liwei Peng, and Luo Si. 2020. Structbert: Incorporating language structures into pretraining for deep language understanding. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 2630, 2020. OpenReview.net.
Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2022. Finetuned language models are zero-shot learners. In *The Tenth* International Conference on Learning Representations, ICLR 2022.
Ping Yang, Junjie Wang, Ruyi Gan, Xinyu Zhu, Lin Zhang, Ziwei Wu, Xinyu Gao, Jiaxing Zhang, and Tetsuya Sakai. 2022. Zero-shot learners for natural language understanding via a unified multiple choice perspective. In *Proceedings of the 2022 Conference* on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
Jiacheng Ye, Jiahui Gao, Qintong Li, Hang Xu, Jiangtao Feng, Zhiyong Wu, Tao Yu, and Lingpeng Kong.
2022. Zerogen: Efficient zero-shot learning via dataset generation. *CoRR*, abs/2202.07922.
Wenpeng Yin, Jamaal Hay, and Dan Roth. 2019. Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach. In *Proceedings of* the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 3912–3921.
Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015.
Character-level convolutional networks for text classification. In *Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12,*
2015, Montreal, Quebec, Canada, pages 649–657.
Ruiqi Zhong, Kristy Lee, Zheng Zhang, and Dan Klein.
2021. Adapting language models for zero-shot learning by meta-tuning on dataset and prompt collections.
In *Findings of the Association for Computational Linguistics: EMNLP 2021, Virtual Event / Punta Cana,*
Dominican Republic, 16-20 November, 2021, pages 2856–2878. Association for Computational Linguistics.
## A Additional Dataset Details A.1 Tuning Datasets
The original unlabeled datasets can be noisy and some paragraphs are not suitable for generating tuning datasets. We filter the paragraphs with the following features: 1) the paragraph only contains
| Dataset | # Class | # Train | # Val | # Test |
|-----------|-----------|-----------|---------|----------|
| Yahoo. | 10 | 1.4M | 0 | 60k |
| AG News | 4 | 120k | 0 | 7.6k |
| DBPedia | 14 | 560k | 0 | 70k |
| 20 News. | 20 | 11,314 | 0 | 7532 |
| SST-2 | 2 | 67,349 | 872 | 0 |
| IMDB | 2 | 25k | 0 | 25k |
| Yelp | 2 | 560k | 0 | 38k |
| MR | 2 | 8,530 | 1,066 | 1,066 |
| Amazon | 2 | 3.6M | 0 | 400k |
| SST-5 | 5 | 8,544 | 1,101 | 2,210 |
Table 6: Dataset statistics for evaluation datasets 1 sentence; 2) the first sentence contains less than or equal to 3 characters; 3) the first sentence only contains non-alphabetic symbols; 4) repeated paragraphs. Some of the final generated samples from English Wikipedia and Amazon product reviews are shown in Table 9.
## A.2 Evaluation Datasets
We summarize the dataset statistics for the evaluation datasets in Table 6. We download all the datasets from Huggingface (Lhoest et al., 2021),
except 20newsgroup. For Yahoo Topics, we concatenate the question and answer as inputs. For DBPedia and Amazon, we concatenate the title and content. For 20newsgroup, we follow the recommendations to remove headers, footers, and quotas4. However, if the text becomes empty after removing the components, we will use the original text instead.
The verbalizers for each dataset are shown in Table 7. We try to unify the verbalizer design for similar tasks. For topic classification tasks, we use the template *"This text is about []."* after converting the class names to meaningful words. For binary classifications, we use *"It's terrible."* for negative class and *"It's great."* for positive class. For SST5, we refer to (Gao et al., 2021a) to design the verbalizers. Some of the reformulated text for the evaluation datasets are shown in Table 10.
## B Additional Experiment Details B.1 Experiment Setup
The hyperparameters for the main results (Section 4.1) are shown in Table 8. We try to use the same settings as much as possible. The training time for the three SSTuning models is with 5.12M tuning 4https://scikit-learn.org/0.19/datasets/
twenty_newsgroups.html
![12_image_0.png](12_image_0.png)
samples and 64k validation samples (also generated via FSP).
## B.2 Additional Results B.2.1 Impact Of Tuning Sample Size
To test how the tuning sample size impacts the performance, we trained SSTuning-base with 320k, 640k, 1.28M, 2.56M, and 5.12M samples, with half generated from Wikipedia and half from Amazon reviews. The results are shown in Figure 5.
With more samples, the performances are increasing in general, especially for topic classification tasks. With such observation, it is likely to further improve the performance by increasing the tuning sample size. Even though tuning on larger datasets is more computationally expensive, it is worth doing since no further training is required for downstream tasks.
## B.2.2 **Impact Of The Number Of Output Labels**
In our main results, we set the number of output labels Nmodel as 20. However, a classification dataset may have more than 20 classes. To test the scalability of the label number, we tune another variant for SSTuning-base. We use numerical numbers
(0, 1, 2...) as the index indicator and set Nmodel as 40. The comparison between the two versions is shown in Table 11. Increasing Nmodel from 20 to 40 only degrade the performance by 1.4 points
(75.9% to 74.5%), showing the good scalability of our approach. As an alternative for the datasets with more classes, we can split the labels and do a multi-stage inference.
## B.2.3 Classification Mechanism
We plot more attention maps for the example discussed in Section 4.3.2 in Figure 6. We focus on a few important tokens, including the classification token <s>, the option indicators A and B, and the separator token </s>. In Layer 0, <s> attends to all the options and the text. A and B attend more to its own options. </s> attend more to the text tokens.
In higher layers, A and B attend even more to their own option tokens (Layer 1) but also have some interactions (Layer 4). In layer 9, A and B attend more its own option tokens again and also the period mark, while </s> attend to both the text tokens and the options tokens for B (the positive option).
In the end, <s> attends to B, which is the positive option. Based on the observations, we hypothesize that the model has the capability to encode the options and text separately, compare the options and text, and choose the positive option in the end.
| Dataset | Verbalizers |
|------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Yahoo Topics | "This text is about society & culture.", "This text is about science & mathematics.", "This text is about health.", "This text is about education & reference.", "This text is about computers & internet.", "This text is about sports.", "This text is about business & finance.", "This text is about entertainment & music.", "This text is about family & relationships.", "This text is about politics & government." |
| AG News | "This text is about politics.", "This text is about sports.", "This text is about business.", "This text is about technology." |
| DBPedia | "This text is about company.", "This text is about educational institution.", "This text is about artist.", "This text is about athlete.", "This text is about office holder.", "This text is about mean of transportation.", "This text is about building.", "This text is about natural place.", "This text is about village.", "This text is about animal.", "This text is about plant.", "This text is about album.", "This text is about film.", "This text is about written work." |
| 20 Newsgroup | "This text is about atheism.", "This text is about computer graphics.", "This text is about microsoft windows.", "This text is about pc hardware.", "This text is about mac hardware.", "This text is about windows x.", "This text is about for sale.", "This text is about cars.", "This text is about motorcycles.", "This text is about baseball.", "This text is about hockey.", "This text is about cryptography.", "This text is about electronics.", "This text is about medicine.", "This text is about space.", "This text is about christianity.", "This text is about guns.", "This text is about middle east.", "This text is about politics.", "This text is about religion." |
| SST-2, IMDB, | "It's terrible.", "It's great." |
| Yelp, MR, Amazon SST-5 | "It's terrible.", "It's bad.", "It's okay.", "It's good.", "It's great." Table 7: Verbalizers for the evaluation datasets. |
| Parameter | Fine-tuning | SSTuning-base/SSTuning-large | SSTuning-ALBERT |
|--------------------|---------------------|---------------------------------|-------------------------|
| Model | RoBERTalarge (355M) | RoBERTabase/RoBERTalarge (355M) | ALBERTxxlarge(V2)(235M) |
| Model Selection | Best | Best | Best |
| Batch Size | 16 | 128 | 64 |
| Precision | FP16 | FP16 | FP16 |
| Optimiser | AdamW | AdamW | AdamW |
| Learning Rate | 1e-5 | 2e-5 | 1e-5 |
| LR Scheduler | linear decay | linear decay | linear decay |
| AdamW Epsilon | 1e-8 | 1e-8 | 1e-8 |
| AdamW β1 | 0.9 | 0.9 | 0.9 |
| AdamW β1 | 0.999 | 0.999 | 0.999 |
| Weight Decay | 0.01 | 0.01 | 0.01 |
| Classifier Dropout | 0.1 | 0.1 | 0.1 |
| Attention Dropout | 0.1 | 0.1 | 0 |
| Hidden Dropout | 0.1 | 0.1 | 0 |
| Max Steps | - | 40000 | 80000 |
| Max Epochs | 3 | 1 | 1 |
| Hardware | 1 NVIDIA V100 | 8 NVIDIA V100 | 4 NVIDIA A100 |
| Training time | - | 3h/8h | 31h |
Table 8: Hyperparameters and training information for full-shot fine-tuning, SSTuing-base, SSTuning-large and SSTuing-ALBERT.
| Dataset | Label | Positive Option | Generated Text |
|----------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------|------------------|
| Wikipedia | 12 (M) | In parliament, Satouri serves on the Committee on Employment and Social Affairs and the Subcommittee on Security and Defence. | (A) [PAD] (B) The work of lojas, are found in both the town and the countryside. (C) [PAD] (D) [PAD] (E) [PAD] (F) [PAD] (G) In 1848 riots and looting took place, and in 1849 an epidemic broke out. (H) [PAD] (I) Leptostylus retrorsus is a species of beetle in the family Cerambycidae. (J) The 2020 - 21 Russian Football National League was the 29th season of Russia's second - tier football league since the dissolution of the Soviet Union. (K) [PAD] (L) He opposed several times to the decisions of his party, as when Congress was dissolved in 2019, he supported Martín Vizcarra's measure and did not attend to the inauguration of Vice President Mercedes Araoz. (M) In parliament, Satouri serves on the Committee on Employment and Social Affairs and the Subcommittee on Security and Defence. (N) [PAD] (O) [PAD] (P) [PAD] (Q) [PAD] (R) [PAD] (S) The church has a rectangular nave with stone walls that are around 2 meters thick. (T) On February 2" the Blue Jays and Downs agreed to a one - year, $ 1. 025 million contract, avoiding the arbitration process. [SEP] In addition to his committee assignments, he is part of the parliament's delegations to the Parliamentary Assembly of the Union for the Mediterranean and for relations with the NATO Parliamentary Assembly. |
| Wikipedia | 0 (A) | Rawat | |
| emigrated to Canada from India in 1968. | (A) Rawat emigrated to Canada from India in 1968. (B) Meskowski was a racing car constructor. (C) [PAD] (D) , there were 42 people who were single and never married in the municipality. (E) [PAD] (F) [PAD] (G) [PAD] (H) [PAD] (I) [PAD] (J) [PAD] (K) It is a Church of England school within the Diocese of Salisbury. (L) Falkoner Allé was opened to the public after Hømarken ( literally " Hayfield " ), an area to the north belonging to Ladegården, originally a farm under Copenhagen Castle, was auctioned off. (M) [PAD] (N) [PAD] (O) In the fall of her senior year at McDonogh, Cummings committed to play for the University of Marylands women ´ s lacrosse team ´ as the nations top recruit. (P) Ranville is a native of Flint, Michigan and ´ attended St. Agnes High School. (Q) The Dodges Institute of Telegraphy ´ was housed in the Institutes building at 89 East Monroe. (R) During 2004 - 2011, Rawat was President of the Communications Research Centre, Canadas´ centre of excellence for telecommunications R & D, with 400 staff and an annual budget of over $ 50 million. (S) [PAD] (T) [PAD] [SEP] She speaks English, French, Hindi and Spanish. | | |
| Amazon Product Review | (A) [PAD] (B) This popcorn is really best suited for kettle corn. (C) Professional Quality with Amazing results. (D) [PAD] (E) [PAD] (F) [PAD] (G) I found my new S6 to be a little TOO thin, and so slick it was sliding off of everything, so I wanted a clear bumper. (H) Excellent price. (I) [PAD] (J) [PAD] (K) Ive always loved Bounce dryer sheets, but was not too fond of ´ the synthetic " Outdoor Fresh " scents. (L) [PAD] (M) [PAD] (N) [PAD] (O) I cut the cord and bought this mohu leaf antenna to get the local channels. (P) [PAD] (Q) The product came pretty quickly with very easy instructions. (R) [PAD] (S) [PAD] (T) Watch Land Before Time and had to have one for Xmas. [SEP] The kernels pop up to a nice large size. Don´t think I would compare them to mushrooms - button mushrooms maybe (LOL). They are a bit on the chewy side if you go the butter route. They are really best as crisp, salty-sweet kettle corn. Yum! We use a Whirley Pop for popcorn–our favorite kitchen "appliance"! Don´t know if some other method would make the popcorn crisper. No matter–would buy this again just for the way it tastes as kettle corn! | | |
| Amazon Product Review | 1 (B) | This popcorn is really best suited for kettle corn. | |
| 18 (S) | Works pretty | (A) [PAD] (B) [PAD] (C) [PAD] (D) [PAD] (E) [PAD] (F) [PAD] (G) [PAD] | |
| good. | (H) [PAD] (I) [PAD] (J) [PAD] (K) [PAD] (L) [PAD] (M) [PAD] (N) [PAD] (O) [PAD] (P) Great value for a creeper. (Q) [PAD] (R) [PAD] (S) Works pretty good. (T) [PAD] [SEP] Just wish the fm stations on the device would go lower. The best one in my area is 85.1 but the device only goes to 88.1. Still a great product. | | |
| Table 9: Examples generated for SSTuning with English Wikipedia and Amazon product review dataset. | | | |
| Dataset | Label | Positive Option | Reformulated Text |
|---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| AG News | 3 (D) | This text is about technology. | (A) This text is about politics. (B) This text is about sports. (C) This text is about business. (D) This text is about technology. (E) [PAD] (F) [PAD] (G) [PAD] (H) [PAD] (I) [PAD] (J) [PAD] (K) [PAD] (L) [PAD] (M) [PAD] (N) [PAD] (O) [PAD] (P) [PAD] (Q) [PAD] (R) [PAD] (S) [PAD] (T) [PAD] [SEP] REVIEW: 'Half-Life 2' a Tech Masterpiece (AP) AP - It's been six years since Valve Corp. perfected the first-person shooter with "Half-Life." Video games have come a long way since, with better graphics and more options than ever. Still, relatively few games have mustered this one's memorable characters and original science fiction story. |
| DBPedia | 9 (J) | This text is | (A) This text is about company. (B) This text is about educational institution. |
| about animal. | (C) This text is about artist. (D) This text is about athlete. (E) This text is about office holder. (F) This text is about mean of transportation. (G) This text is about building. (H) This text is about natural place. (I) This text is about village. (J) This text is about animal. (K) This text is about plant. (L) This text is about album. (M) This text is about film. (N) This text is about written work. (O) [PAD] (P) [PAD] (Q) [PAD] (R) [PAD] (S) [PAD] (T) [PAD] [SEP] Periscepsia handlirschi. Periscepsia handlirschi is a species of fly in the family Tachinidae. | | |
| SST-2 | 1 (B) | It's great. | (A) It's terrible. (B) It's great. (C) [PAD] (D) [PAD] (E) [PAD] (F) [PAD] (G) [PAD] (H) [PAD] (I) [PAD] (J) [PAD] (K) [PAD] (L) [PAD] (M) [PAD] (N) [PAD] (O) [PAD] (P) [PAD] (Q) [PAD] (R) [PAD] (S) [PAD] (T) [PAD] [SEP] charles ' entertaining film chronicles seinfeld 's return to stand-up comedy after the wrap of his legendary sitcom , alongside wannabe comic adams ' attempts to get his shot at the big time . |
| SST-5 | 3 (D) | It's good. | (A) It's terrible. (B) It's bad. (C) It's okay. (D) It's good. (E) It's great. (F) [PAD] (G) [PAD] (H) [PAD] (I) [PAD] (J) [PAD] (K) [PAD] (L) [PAD] (M) [PAD] (N) [PAD] (O) [PAD] (P) [PAD] (Q) [PAD] (R) [PAD] (S) [PAD] (T) [PAD] [SEP] u.s. audiences may find -lrb- attal and gainsbourg 's -rrbunfamiliar personas give the film an intimate and quaint reality that is a little closer to human nature than what hollywood typically concocts . |
Table 10: Examples after reformulation for 4 evaluation datasets.
| Nmodel | Topic Classification | Sentiment Analysis | Avg | | | | | | | | | |
|---------------|------------------------|----------------------|-------|------|------|------|------|------|------|------|------|------|
| yah | agn | dbp | 20n | sst2 | imd | ylp | mr | amz | sst5 | | | |
| SSTuning-base | 20 | 59.1 | 79.9 | 82.7 | 47.2 | 86.4 | 88.2 | 92.9 | 83.8 | 94.0 | 45.0 | 75.9 |
| SSTuning-base | 40 | 58.0 | 79.3 | 79.8 | 49.1 | 84.4 | 88.2 | 91.7 | 82.2 | 93.3 | 39.4 | 74.5 |
Table 11: Accuracy over different number of labels N**model**.
![16_image_0.png](16_image_0.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations
✗ A2. Did you discuss any potential risks of your work?
We are working on text classification, which classifies text into a certain category. This should not have any potential risk.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract Sec 1 Introduction
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Sec 3 Experiment Setup
✓ B1. Did you cite the creators of artifacts you used?
Sec 3 Experiment Setup
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
We only use public datasets, which do not need a license.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
We only use publicly available datasets, which should not have an issue.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We only use publicly available datasets, which are commonly used.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Sec 3 Experiment Setup
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix A.2 Evaluation Datasets
## C ✓ **Did You Run Computational Experiments?** Section 3 And 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix B.1 Experiment setup The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Sec 4 Results and Analysis
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Sec 4 Results and Analysis
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Sec 3.4 Implementation Details D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
wang-etal-2023-logical | Logical Transformers: Infusing Logical Structures into Pre-Trained Language Models | https://aclanthology.org/2023.findings-acl.111 | Natural language contains rich logical structures and logical information, and correctly detecting and accurately understanding these logical structures and information underlying natural language texts is very crucial for NLP models{'} performance on many important NLU and NLG tasks. Existing pre-trained language models based on the transformer architecture mostly adopt a classical design for constructing their input embeddings that ignores the logical structures underlying natural language texts, thus limiting their ability to better capture and encode key logical information in the input sequences. To overcome such limitations, in this paper we first propose a novel approach to construct logic-aware input embeddings for transformer language models through a combination of logic detection, logic mapping and hierarchical logical projections, and then develop a corresponding new modeling paradigm that can upgrade existing transformer language models into logical transformers to boost their performance on different NLU and NLG tasks. Our empirical experiments on four important and challenging NLU and NLG tasks demonstrate that our proposed logical transformer language models can achieve superior performance over their baseline transformer models through a deeper understanding of the logical structures of texts. | # Logical Transformers: Infusing Logical Structures Into Pre-Trained Language Models
Borui Wang1∗ Qiuyuan Huang2 Budhaditya Deb2 **Aaron Halfaker**2 Liqun Shao2 Daniel McDuff3† Ahmed Hassan Awadallah2 **Dragomir Radev**1 Jianfeng Gao2 1Yale University 2Microsoft Research 3University of Washington [email protected]
## Abstract
Natural language contains rich logical structures and logical information, and correctly detecting and accurately understanding these logical structures and information underlying natural language texts is very crucial for NLP
models' performance on many important NLU
and NLG tasks. Existing pre-trained language models based on the transformer architecture mostly adopt a classical design for constructing their input embeddings that ignores the logical structures underlying natural language texts, thus limiting their ability to better capture and encode key logical information in the input sequences. To overcome such limitations, in this paper we first propose a novel approach to construct *logic-aware input embeddings* for transformer language models through a combination of logic detection, logic mapping and hierarchical logical projections, and then develop a corresponding new modeling paradigm that can upgrade existing transformer language models into *logical transformers* to boost their performance on different NLU and NLG tasks. Our empirical experiments on four important and challenging NLU and NLG tasks demonstrate that our proposed logical transformer language models can achieve superior performance over their baseline transformer models through a deeper understanding of the logical structures of texts.
## 1 Introduction
Natural language contains rich logical structures and logical information (Lakoff, 1970; Van Benthem, 1986) that are crucial to a deep and accurate understanding of its meaning. Therefore, the ability to correctly detect and accurately understand the logical structures and information within natural language texts is very crucial for NLP models'
∗ This work was done when Borui Wang was a research intern at Microsoft Research.
† This work was done when Daniel McDuff was at Microsoft Research.
performance on many important Natural Language Understanding (NLU) and Natural Language Generation (NLG) tasks.
The types of logics contained in natural language are very diverse, including not only mathematically well-defined propositional logic and firstorder logic (Lu et al., 2022; Han et al., 2022), but also more general types of natural and structural logical relationships that people frequently use in natural language texts to convey and communicate their ideas and meanings more effectively and clearly.
In recent years we have witnessed huge progress and success in many fields of natural language processing brought about by the introduction of all different kinds of pre-trained language models (Devlin et al., 2019; Liu et al., 2019; Lan et al., 2020; Yang et al., 2019; Clark et al., 2020; Lewis et al., 2020; Raffel et al., 2020; Zhang et al., 2020) based on the transformer architecture (Vaswani et al., 2017).
Most existing pre-trained language models adopt the classical approach for constructing the input embeddings that are fed into the encoder parts of the language models, which can be summarized as the summation of the following three key components (Devlin et al., 2019):
(1) Token Embeddings - that are used to encode and represent the semantics and meaning of each token in the vocabulary;
(2) Position Embeddings - that are used to encode the positional information of each token in the input sequence;
(3) Segment Embeddings - that are used to indicate which segment of the input sequence each token belongs to.
This classical design of the input embeddings has been proven to be very effective at capturing important semantic and positional features from natural language texts and helping pre-trained language models to learn good contextualized representations of the input textual sequences (Devlin et al., 2019). However, it also has a very important limitation - it doesn't consider or try to explicitly encode the logical structures underlying the text inputs, which are also very crucial for the deep and accurate understanding of the meaning of the text inputs.
Therefore, in order to overcome this limitation and to enable pre-trained language models to better capture and understand the important logical structures underlying natural language texts, in this paper we propose a novel approach to construct **logicaware input embeddings** for transformer-based pre-trained language models and a corresponding new modeling framework that can upgrade existing transformer language models into **logical transformers** to boost their performance on different NLU and NLG tasks.
Our new approach consists of two major modules: (1) logic detection and mapping, and (2)
multi-layer hierarchical logical projections. It has the following key advantages:
- *Strong Generalizability*: Our proposed new approach for constructing logic-aware input embeddings doesn't alter the main architecture of transformer language models and only modifies the input embeddings at the front end before they are fed into the encoder part of the language models. Therefore, our new approach enjoys strong generalizability and can be smoothly added to many different pretrained language models based on the transformer architecture.
- *Consistent Boost in Model Performance*: Our proposed new approach is empirically shown to consistently boost the performance of different transformer language models on different NLU and NLG tasks.
- *Negligible Increase in Model Size*: Our proposed new approach will only increase the number of parameters of transformer language models by a negligible amount.
- *Low Overhead on Training Time*: Our proposed new approach will not significantly increase the training time of transformer language models by a large amount. The majority of the overhead in training time will come
from the initial text processing steps of logic detection and logic mapping, which only need to be executed once before the actual training epochs start.
## 2 Logical Relationships And Keywords
In this work, we consider *logical relationships* in natural language texts as the underlying relationships among different language constituents that carry meaningful information regarding logical understanding and reasoning of the texts. In natural language, such logical relationships are usually indicated by logically-connective keywords and phrases. In this paper, we define a taxonomy of 12 most commonly seen types of logical relationships and their corresponding sets1 of logical keywords
(including phrases2) for natural language3:
1. **Conjunction**: a conjunction logical relationship indicates that the two language constituents involved are presented jointly in addition to each other. Its logical keywords are:
and, as well, as well as, also, at the same time.
2. **Disjunction**: a disjunction logical relationship indicates that the two language constituents involved are presented alternatively next to each other. Its logical keyword is:
or.
3. **Negation**: a negation logical relationship indicates that the meaning of the language constituent mapped by it is negated. Its logical keywords are:
not, no, none, n't, nothing.
4. **Conditional**: a conditional logical relationship indicates that the content of one language constituent is the premise of the content of another language constituent. Its logical keywords are:
1The sets of logical keywords listed here are not necessarily the most exhaustive sets that contain all possible keywords in each category, but rather serve as the preliminary and exemplar sets that can already cover the majority of the most frequently appearing logical keywords in real-world texts. These sets are open to extension.
2For conciseness, in this paper we will use the term 'logical keywords' to refer to both *logical keywords* and logical key phrases.
3Here the logical keywords are all defined in English, but similar categorization of logical relationships and sets of logical keywords can also be defined in other languages as well.
Contrastive
![2_image_0.png](2_image_0.png)
![2_image_1.png](2_image_1.png)
## If, As Long As.
5. **Negative Conditional**: a negative conditional logical relationship indicates that the negation of the content of one language constituent is the premise of the content of another language constituent. Its logical keywords are:
unless, otherwise.
6. **Analogy**: an analogy logical relationship indicates that the content of one language constituent is analogous to the content of another language constituent. Its logical keywords are:
as if, as though, just as, just like, likewise, similarly.
7. **Comparative**: a comparative logical relationship indicates that the two language components involved are presented in comparison to each other. Its logical keywords are:
but, however, in comparison, while, yet, rather than, unlike, on the other hand, in contrast, contrary to, on the contrary.
8. **Adversative**: an adversative logical relationship indicates that the content of one language constituent is adversative to the content of another language constituent. Its logical keywords are:
nevertheless, nonetheless, notwithstanding, although, though, despite, despite of, in spite of, regardless of, albeit.
9. **Temporal**: a temporal logical relationship indicates that the content of one language constituent signifies the time when the content of another language constituent takes place. Its logical keywords are:
during, after, in, when, since, before, as, as soon as, while, then, until, meanwhile.
10. **Causal**: a causal logical relationship indicates that the content of one language constituent is the cause or reason for the content of another language constituent. Its logical keywords are:
because, thanks to, since, as a result, in order to, as, therefore, hence, so that, due to, thus, consequently, thereby, now that.
11. **Progression**: a progression logical relationship indicates that the content of one language constituent goes one step further on top of the content of another language constituent. Its logical keywords are:
moreover, furthermore, in addition, besides.
12. **Example**: an example logical relationship indicates that the content of one language constituent exemplifies the content of another language constituent. Its logical keywords are:
for example, as an example, like, such as, for instance, including.
![3_image_0.png](3_image_0.png)
As an example, we sample a news article from the training set of the CNN/Dailymail dataset (Nallapati et al., 2016) and manually annotate the appearances of the above defined types of logical relationship in the article. See Figure 6 for the annotation of the logical relationships in this example article, where the logical keywords associated with different logical relationships are highlighted with different colors.
## 2.1 Categorization Of Logical Relationships
According to how many logical components (in the form of text spans) are associated with each logical keywords and how different logical components are mapped by the logical keywords, we categorize the set of all logical keywords into three different categories:
## 2.1.1 Unary Logical Relationships
The logical keywords indicating unary logical relationships are those that each only maps to one single logical component (text span). For example, most keywords of negation relationship and example relationship are indicating unary logical relationships, such as not, for example, *such as*, etc.
## 2.1.2 Intrinsically-Mapped Binary Logical Relationships
The logical keywords indicating intrinsicallymapped binary logical relationships are those that each maps to two separate logical components (text spans) that are both contained within the parent sentence constituent of the logical keyword itself. For example, most keywords of conjunction rela-
![3_image_1.png](3_image_1.png)
4: for each constituent node n in T(S) do 6: Nkey(s) ← Nkey(s) + n kin Nkey(s) do k ← {} k['keyword'] = str(n k) k) ∈ KUthen k['α'] = str( pa(n k) \ n k) k) ∈ KBin then k) to segment str( pa(n k) ) into 3 segments: str( pa(n
7: for n
8: D
9: D
10: if str(n
11: D
12: **else if** str(n 13: Use str(n
k) ) = A + str(n
k) + B
14: D
k['α'] = A, D
k['β'] = B
15: **else if** str(n
k) ∈ KBex **then**
16: if ∃ pa(pa(n
k)) **then**
17: D
k['α'] = str( pa(pa(n
k)) \ pa(n
k))
18: D
k['β'] = str( pa(n
k) \ n
k)
19: **else if** ∃ another sentence s
20: D
![3_image_2.png](3_image_2.png)
**e if**$\exists$ another sentence $s^{\prime}$ right before $s$**the**$D^{k}[\,{}^{\prime}\alpha\,{}^{\prime}]=s^{\prime}$, $D^{k}[\,{}^{\prime}\beta\,{}^{\prime}]=str(\,{\rm pa}(n^{k})\setminus n^{k}\,)$**e** $D^{k}[\,{}^{\prime}\alpha\,{}^{\prime}]=0$, $D^{k}[\,{}^{\prime}\beta\,{}^{\prime}]=str(\,{\rm pa}(n^{k})\setminus n^{k}\,)$ $\mathcal{M}+D^{k}$ **e**
tionship and disjunction relationship are indicating intrinsically-mapped binary logical relationships, such as and, *as well as*, or, etc.
## 2.1.3 Extrinsically-Mapped Binary Logical Relationships
The logical keywords indicating extrinsicallymapped binary logical relationships are those that each maps to two separate logical components (text spans) where one is contained within the parent sentence constituent of the logical keyword itself while the other is outside (usually appears before)
the span of this parent sentence constituent. For example, most keywords of conditional, comparative, temporal and causal relationships are indicating extrinsically-mapped binary logical relationships, such as if, but, during, *because*, etc.
## 3 Logic Detection And Mapping
In this section, we describe our logic detection and mapping module based on keyword detection and constituency parsing. For each sentence s in the source text, we first perform constituency parsing (Kitaev and Klein, 2018) over s to obtain its constituency parsing tree T(s). In this paper, we use the Berkeley Neural Parser (Kitaev and Klein, 2018) to perform constituency parsing.
![4_image_0.png](4_image_0.png)
![4_image_2.png](4_image_2.png)
![4_image_3.png](4_image_3.png)
Then we search through all the constituent nodes in T(s) to detect the ones that exactly matches the keyword strings of the logical keywords as defined in Section 2. Let Nkey(s) denote the set of constituent node in T(s) that matches logical keywords. Then for each logical keyword node n k ∈ Nkey(s), we fetch its parent constituent node pa(n k). Now we have three different cases:
1. If n kcorresponds to a unary logical relationship (i.e. negation and example), then the α component of n kis detected as: pa(n k) \ n k.
![4_image_1.png](4_image_1.png)
2. If n kcorresponds to a binary logical relationship and the relationship is intrinsically mapped, then str(pa(n k)) will be divided by str(n k) into three different segments:
str(pa(n k)) = A + str(n k) + B. Now the α component of n kis detected as A and the β component of n kis detected as B.
3. If n kcorresponds to a binary logical relationship and the relationship is extrinsically mapped, then the α component of n kis detected as: pa(pa(n k))\pa(n k), and the β component of n kis detected as: pa(n k) \ n k.
Our proposed methods for logic detection and mapping described above are summarized in Algorithm 1. See Figure 3 for an example of executing Algorithm 1 on an example sentence taken from the example article in Figure 6, based on the constituency parsing tree depicted in Figure 1.
str(pa(nk))
pa(nk)
## 3.1 Sense Disambiguation Of Logical Keywords
In English, certain logical keywords have multiple meanings and can indicate different logical relationships under different contexts. For example, the logical keyword '*since*' has two different meanings: (1) '*from a time in the past*', which indicates a temporal logical relationship; (2) '*because*',
which indicates a causal logical relationship. In our categorization of logical relationships and keywords (described in Section 2), there are a total of 3 keywords that can have multiple logical meanings: *since*, as, and *while*. Therefore, in order to increase accuracy of our proposed logic detection module, we need to first perform accurate logical sense disambiguation when we detect these logically ambiguous keywords. In our empirical experiments over a set of randomly sampled sentences that contain ambiguous logical keywords, each manually-labelled with its ground-truth logical relationship under the context,
↵ component we found that different uses of ambiguous logical keywords have very strong clustering tendency and are largely linearly-separable under the contextualized encoding of transformer language models.
For example, we use the ALBERT model (Lan et al., 2020) to encode 20 different occurrences of the logical keyword *'since'* randomly sampled from the CNN/Dailymail dataset (Nallapati et al.,
2016), and project the last-layer hidden state vectors for these 20 '*since*' onto their first two principal components using Principal Component Analysis
(PCA) (Hotelling, 1933), which is depicted in Figure 2. As we can see from Figure 2 the contextualized embeddings of the logical keyword '*since*' are largely linearly separable between the two different logical meanings.
Therefore, in order to improve the accuracy of our logic detection module, we first manually collected logical relationship annotations for the set of ambiguous logical keywords in English. Then we encode them using the ALBERT model (Lan et al., 2020) and train individual support vector machine (SVM) (Cortes and Vapnik, 1995) classifiers for each of the ambiguous logical keywords to accurately disambiguate their different logical meanings.
## 4 Logical Transformers 4.1 Logical Embedding Vectors
The major new parameters that we introduce in our proposed modeling framework of logical transformers are a set of parametrized and trainable logical embedding vectors. These logical embedding vectors share the same dimensionality, but their dimensionality doesn't necessarily equal to the dimensionality of the transformer language model's token embedding vectors. Below we describe how to construct these logical embedding vectors in detail.
First of all, the 12 types of logical relationships we defined in Section 6 can be classified into two different categories: (1) *'unary logical relationship'*
that maps to only one logical component; (2) *'binary logical relationship'* that maps to two logical components. More specifically, **negation** and **example** are *unary logical relationships* and all the other 10 types are *binary logical relationships*.
For each *unary logical relationship* U, we construct two parametrized logical embedding vectors:
vU
key and vU. In the logical embedding layer of U,
we assign vU
key to each token detected to be part of
![5_image_0.png](5_image_0.png)
an appearance of some logical keyword in U, and assign vUto all the tokens that are within some text span mapped by some logical keyword in U.
For each *binary logical relationship* B, we construct three parametrized logical embedding vectors: vB
key, vB
α and vB
β
. In the logical embedding layer of B, we assign vB
key to each token detected to be part of an appearance of some logical keyword in B, assign vB
α to all the tokens that are within some left text span mapped by some logical keyword in B, and assign vB
β to all the tokens that are within some right text span mapped by some logical keyword in B.
And finally we construct another special parametrized logical embedding vector vEthat corresponds to *empty logical association*. For each token that doesn't belong to any logical relationships in a logical embedding layer, it will be assigned vE
| Logical | Sentence Tokens | | | | | | | | | | | | | | | | | | |
|-------------|-------------------|----|-----|------|------|-----|----|----|------|-----|------|----|-----|----|-----|------|-----|------|----|
| Embeddings | But | I | try | very | hard | not | to | go | that | way | bec. | it | wld | be | too | easy | for | them | . |
| Comparative | key | β | β | β | β | β | β | β | β | β | β | β | β | β | β | β | β | β | β |
| Causal | - | - | α | α | α | α | α | α | α | α | key | β | β | β | β | β | β | β | - |
![6_image_0.png](6_image_0.png)
Table 1: Illustration of our proposed multi-layer logical embeddings for an example sentence *'But I try very hard not* to go that way because it would be too easy for them.' taken from the example article in Figure 6. The assignment of logical embedding vectors are based on the parsed logical structure depicted in Figure 4. In the second row the token 'because' is abbreviated into 'bec.' and the token 'would' is abbreviated into 'wld' due to space limit.
for this layer. See Table 1 for a concrete example of assigning multiple layers of logical embedding vectors to tokens in an input sequence based on the results of logic detection and mapping.
Therefore, based on the 12 different types of logical relationships that we defined in Section 6, we will construct a total of 2×2+ 10×3+ 1 = 35 different logical embedding vectors for our logical transformers.
## 4.2 Multi-Layer Hierarchical Logical Projections
Now we describe how to compute the logic-aware input embeddings through *multi-layer hierarchical logical projections* using the set of logical embedding vectors that we defined in Section 4.1. Let N*logic* denote the dimensionality of the logical embedding vectors, and let N denote the dimensionality of the token embedding vectors of the transformer language model. We first define a parametrized and trainable linear transformation layer L that projects a (N + N*logic*)-dimensional vector into an N-dimensional vector.
Then for each token t in the input token sequence, we collect all the logical embedding vectors assigned to it during the logic detection and mapping process and sort them in order according to their associated logical keywords' depth in the constituency parse tree of the input sentence. Let's denote this sorted set of all the logical embedding vectors assigned to token t as: {v 1 t*, ..., v*K
t }, where K is the maximum number of logical layers to be considered and should be treated as a hyperparameter.
Now let's denote the original token embedding vector for token t as wt, then to compute a logicaware token embedding vector w logic tfor t, we first initialize u 0 t = wt, and then recursively apply the following computation4:
## U I T = F(L(U I−1 T ⊕ V I T)),
for i = 1*, ..., K*, where ⊕ denotes vector concatenation and f is some non-linear activation function, such as GELU (Hendrycks and Gimpel, 2016).
Then we have:
## W Logic
t = wt + u
## K
t.
Now let pt denote the position embedding vector of token t and st denote the segment embedding vector of token t, then the final logic-aware input embedding vector for each token t in the input sequence would be computed as: w logic t + pt + st.
Then at the front end of our proposed logical transformers, we use these logic-aware input embeddings to replace the traditional input embeddings and feed them into transformer encoders to help language models better encode and learn logical information from the textual inputs. See Figure 5 for an illustration of multi-layer hierarchical logical projections for an example token with logic depth K = 3.
## 4.3 Model Training
During the training of our proposed logical transformers, we set both the set of 35 logical embedding vectors and the linear transformation layer L
to be fully parametrized and trainable, and then initialize them with random values. All these added new parameters will be updated together with the original trainable parameters in the transformer language models during the model training process.
## 4.4 Negligible Increase In Model Size
The only new parameters introduced in our proposed logical transformers, compared with their corresponding baseline transformer language models, are the set of 35 logical embedding vectors and the linear transformation linear L used in hierarchical logical projections. Let N*logic* denote the dimensionality of the logical embedding vectors, then
| ReClor LogiQA DREAM | | | |
|-----------------------|------|------|------|
| Model | Acc | Acc | Acc |
| RoBERTa-large | 62.6 | 35.3 | 82.1 |
| Logical-RoBERTa-large | 67.4 | 37.8 | 84.9 |
| DialogSum | | | | |
|--------------------|-------|-------|-------|--------|
| Model | R-1 | R-2 | R-L | R-LSum |
| BART-large | 46.10 | 20.32 | 38.04 | 40.98 |
| Logical-BART-large | 46.97 | 20.69 | 38.33 | 41.30 |
Table 3: Our NLG experiment results on the DialogSum dataset (Chen et al., 2021). The higher value in each pair of comparison is highlighted in **bold**.
the total increase in model size can be calculated as: N*logic* ×35+ (N +Nlogic)×Nlogic +N*logic* =
N2 logic + N · N*logic* + 36N*logic*.
For all the recently proposed transformer language models, this increase in model size is rather small and negligible compared with their very large number of parameters. For example, for the RoBERTa-large model (Liu et al., 2019), its total number of parameters is 355M and the dimensionality of its embedding vectors is 1024. If we set N*logic* = 1024 as well, then after we use our proposed new modeling paradigm to upgrade RoBERTa-large into Logical-RoBERTa-large, the percentage of increase in model size is only:
(10242 + 1024 × 1024 + 36 × 1024) ÷ 355M ≈
0.601%, which is almost negligible. This efficiency in model size guarantees that the logical transformers take roughly the same amount of computation time during both training and inference as their baseline transformer language models.
## 5 Experiments
In order to evaluate our proposed logical transformer architecture's performance boost on different NLU and NLG tasks with different transformer language models, in our experiments, we test it on three NLU datasets and one NLG dataset.
## 5.1 Natural Language Understanding Tasks
In the NLU part of our experiments, we test the RoBERTa model (Liu et al., 2019) and our Logical-
RoBERTa model on three logically-challenging natural language understanding tasks over three corresponding datasets: (1) *reading comprehension* on the ReClor dataset (Yu et al., 2020); (2) question answering on the LogiQA dataset (Liu et al., 2020);
and (3) *dialogue-based reading comprehension* on the DREAM dataset (Sun et al., 2019). All of these three datasets require logical reasoning.
## 5.2 Natural Language Generation Task
In the NLG part of our experiments, we test the BART model (Lewis et al., 2020) and our LogicalBART model on the task of *dialogue summarization* over the DialogSum (Chen et al., 2021) dataset.
## 5.3 Results
The results of our three NLU experiments are shown in Table 2, and the results of NLG experiment are shown in Table 3. As we can see from Table 2 and Table 3, the accuracy scores and the ROUGE scores of our logical transformer language models are consistently higher than their corresponding baseline transformer language models across all the different NLU and NLG tasks. This consistent boost demonstrates that the important logical structures and information extracted and captured by our proposed logical transformers are indeed very effective and useful in further improving transformer language models' performance on logically-challenging NLU and NLG tasks.
## 6 Related Work
Recently there has been increasing interest in improving pre-trained language models' logical reasoning ability (Xu et al., 2022; Pi et al., 2022).
For example, Lu et al. (2022) proposed a new method for parsing natural language into the forms of propositional logic and first-order logic using dual reinforcement learning. Pi et al. (2022) proposed a new unsupervised adversarial pre-training method, called LogiGAN, in order to enhance language models' abilities of logical reasoning. Xu et al. (2022) proposed a new Logiformer architecture based on a two-branch graph transformer network to improve language models' performance on interpretable logical reasoning.
In contrast to these previous work that mostly focus on introducing new training methods or constructing complex model architectures, our proposed method in this paper only modifies the input embeddings and is thus more straightforward LONDON, England (Reuters) -- Harry Potter star Daniel Radcliffe gains access to a reported £20 million ($41.1 million)
fortune as he turns 18 on Monday, but he insists the money won't cast a spell on him. Daniel Radcliffe as Harry Potter in
"Harry Potter and the Order of the Phoenix" To the disappointment of gossip columnists around the world, the young actor says he has no plans to fritter his cash away on fast cars, drink and celebrity parties. "I don't plan to be one of those people who, as soon as they turn 18, suddenly buy themselves a massive sports car collection or something similar," he told an Australian interviewer earlier this month. "I don't think I'll be particularly extravagant. "The things I like buying are things that cost about 10 pounds -- books and CDs and DVDs." At 18, Radcliffe will be able to gamble in a casino, buy a drink in a pub or see the horror film "Hostel: Part II," currently six places below his number one movie on the UK box office chart. Details of how he'll mark his landmark birthday are under wraps. His agent and publicist had no comment on his plans. "I'll definitely have some sort of party," he said in an interview. "Hopefully none of you will be reading about it."
Radcliffe's earnings from the first five Potter films have been held in a trust fund which he has not been able to touch. Despite his growing fame and riches, the actor says he is keeping his feet firmly on the ground. "People are always looking to say 'kid star goes off the rails,'" he told reporters last month. "But I try very hard not to go that way because it would be too easy for them." His latest outing as the boy wizard in "Harry Potter and the Order of the Phoenix" is breaking records on both sides of the Atlantic and he will reprise the role in the last two films. Watch I-Reporter give her review of Potter's latest » . There is life beyond Potter, however. The Londoner has filmed a TV movie called "My Boy Jack," about author Rudyard Kipling and his son, due for release later this year. He will also appear in "December Boys,"
an Australian film about four boys who escape an orphanage. Earlier this year, he made his stage debut playing a tortured teenager in Peter Shaffer's "Equus." Meanwhile, he is braced for even closer media scrutiny now that he's legally an adult: "I just think I'm going to be more sort of fair game," he told Reuters. E-mail to a friend . Copyright 2007 Reuters. All rights reserved.This material may not be published, broadcast, rewritten, or redistributed.
Figure 6: Detected logical keywords in an example article from the CNN/Dailymail dataset (Nallapati et al., 2016).
It contains 7 different types of logical relationships: conjunction, disjunction, negation, comparative, adversative, temporal, and causal.
and easily generalizable to different types of transformer language models.
## 7 Conclusion
In this paper we introduced a new modeling paradigm for transformer language models that detects and extracts important logical structures and information from input texts and then integrates them into the input embeddings through carefully designed multi-layer hierarchical logical projections to infuse logical structures into pretrained language models. Our empirical experiments on four important and challenging NLU and NLG tasks showed that our proposed logical transformer language models consistently perform better than their corresponding baseline transformer language models through a deeper understanding of the key logical structures underlying natural language texts.
## 8 Limitations
In theory, the method proposed in this paper can be applied to different types of transformer language models for both pre-training and fine-tuning. Due to limit of computational resource, we currently haven't had the chance to test our proposed method in the very promising setting of large-scale language model pre-training yet. In future work, we plan to further test our proposed logical transformer architecture on large-scale language model pre-training to see how much performance boost it can achieve.
## References
Yulong Chen, Yang Liu, Liang Chen, and Yue Zhang.
2021. DialogSum: A real-life scenario dialogue summarization dataset. In *Findings of the Association* for Computational Linguistics: ACL-IJCNLP 2021, pages 5062–5074, Online. Association for Computational Linguistics.
Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators.
In *International Conference on Learning Representations (ICLR)*.
Corinna Cortes and Vladimir Naumovich Vapnik. 1995.
Support-vector networks. *Machine Learning*, 20:273–
297.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Simeng Han, Hailey Schoelkopf, Yilun Zhao, Zhenting Qi, Martin Riddell, Luke Benson, Lucy Sun, Ekaterina Zubova, Yujie Qiao, Matthew Burtell, David Peng, Jonathan Fan, Yixin Liu, Brian Wong, Malcolm Sailor, Ansong Ni, Linyong Nan, Jungo Kasai, Tao Yu, Rui Zhang, Shafiq R. Joty, Alexander R. Fabbri, Wojciech Kryscinski, Xi Victoria Lin, Caiming Xiong, and Dragomir R. Radev. 2022. Folio: Natural language reasoning with first-order logic. *ArXiv*,
abs/2209.00840.
Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (gelus). *arXiv preprint* arXiv:1606.08415.
Harold Hotelling. 1933. Analysis of a complex of statistical variables into principal components. Journal of Educational Psychology, 24:498–520.
Nikita Kitaev and Dan Klein. 2018. Constituency parsing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 2676–2686, Melbourne, Australia. Association for Computational Linguistics.
George Lakoff. 1970. Linguistics and natural logic.
Synthese.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut.
2020. Albert: A lite bert for self-supervised learning of language representations. In *International Conference on Learning Representations (ICLR)*.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang. 2020. Logiqa: A challenge dataset for machine reading comprehension with logical reasoning. In *Proceedings of the TwentyNinth International Joint Conference on Artificial* Intelligence, IJCAI-20, pages 3622–3628. International Joint Conferences on Artificial Intelligence Organization. Main track.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Xuantao Lu, Jingping Liu, Zhouhong Gu, Hanwen Tong, Chenhao Xie, Junyang Huang, Yanghua Xiao, and Wenguang Wang. 2022. Parsing natural language into propositional and first-order logic with dual reinforcement learning. In Proceedings of the 29th
International Conference on Computational Linguistics, pages 5419–5431, Gyeongju, Republic of Korea.
International Committee on Computational Linguistics.
Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 280–290, Berlin, Germany.
Association for Computational Linguistics.
Xinyu Pi, Wanjun Zhong, Yan Gao, Nan Duan, and Jian-Guang Lou. 2022. Logigan: Learning logical reasoning via adversarial pre-training. *ArXiv*,
abs/2205.08794.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67.
Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, and Claire Cardie. 2019. DREAM: A challenge data set and models for dialogue-based reading comprehension. *Transactions of the Association for Computational Linguistics*, 7:217–231.
Johan Van Benthem. 1986. *Essays in logical semantics*.
Springer.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc.
Fangzhi Xu, Jun Liu, Qika Lin, Yudai Pan, and Lingling Zhang. 2022. Logiformer: A two-branch graph transformer network for interpretable logical reasoning. In *Proceedings of the 45th International ACM*
SIGIR Conference on Research and Development in Information Retrieval, SIGIR '22, page 1055–1065, New York, NY, USA. Association for Computing Machinery.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019.
Xlnet: Generalized autoregressive pretraining for language understanding. In *Neural Information Processing Systems*.
Weihao Yu, Zihang Jiang, Yanfei Dong, and Jiashi Feng.
2020. Reclor: A reading comprehension dataset requiring logical reasoning. In *International Conference on Learning Representations*.
Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. 2020. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. In Proceedings of the 37th International Conference on Machine Learning, ICML'20. JMLR.org.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 8
✗ A2. Did you discuss any potential risks of your work?
There are no potential risks of our work.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Section 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix A
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 5
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 3 and Section 5 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
li-etal-2023-large | Large Language Models with Controllable Working Memory | https://aclanthology.org/2023.findings-acl.112 | Large language models (LLMs) have led to a series of breakthroughs in natural language processing (NLP), partly owing to the massive amounts of world knowledge they memorize during pretraining. While many downstream applications provide the model with an informational context to aid its underlying task, how the model{'}s world knowledge interacts with the factual information presented in the context remains under explored. As a desirable behavior, an LLM should give precedence to the context whenever it contains task-relevant information that conflicts with the model{'}s memorized knowledge. This enables model predictions to be grounded in the context, which then facilitates updating specific model predictions without frequently retraining the model. By contrast, when the context is irrelevant to the task, the model should ignore it and fall back on its internal knowledge. In this paper, we undertake a first joint study of the aforementioned two properties, namely controllability and robustness, in the context of LLMs. We demonstrate that state-of-the-art T5 and PaLM models (both pretrained and finetuned) could exhibit low controllability and robustness that does not improve with increasing the model size. As a solution, we propose a simple yet effective method {--} knowledge aware finetuning (KAFT) {--} to strengthen both controllability and robustness by injecting counterfactual and irrelevant contexts to standard supervised datasets. Our comprehensive evaluation showcases the utility of KAFT across model architectures and sizes. | # Large Language Models With Controllable Working Memory
Daliang Li♠, Ankit Singh Rawat♠**, Manzil Zaheer**♥,
Xin Wang♠, Michal Lukasik♠, Andreas Veit♠, Felix Yu♠**, Sanjiv Kumar**♠
♠Google Research New York ♥Google DeepMind New York
{daliangli, ankitsrawat, manzilzaheer}@google.com
{wanxin, mlukasik, aveit, felixyu, sanjivk}@google.com
## Abstract
Large language models (LLMs) have led to a series of breakthroughs in natural language processing (NLP), partly owing to the massive amounts of world knowledge they memorize during pretraining. While many downstream applications provide the model with an informational context to aid its underlying task, how the model's world knowledge interacts with the factual information presented in the context remains under explored. As a desirable behavior, an LLM should give precedence to the context whenever it contains taskrelevant information that conflicts with the model's memorized knowledge. This enables model predictions to be grounded in the context, which then facilitates updating specific model predictions without frequently retraining the model. By contrast, when the context is irrelevant to the task, the model should ignore it and fall back on its internal knowledge.
In this paper, we undertake a first joint study of the aforementioned two properties, namely controllability and *robustness*, in the context of LLMs. We demonstrate that state-of-theart T5 and PaLM models (both pretrained and finetuned) could exhibit low controllability and robustness that does not improve with increasing the model size. As a solution, we propose a simple yet effective method
- knowledge aware finetuning (KAFT) - to strengthen both controllability and robustness by injecting counterfactual and irrelevant contexts to standard supervised datasets. Our comprehensive evaluation showcases the utility of KAFT across model architectures and sizes.
## 1 Introduction
Large language models (LLMs) pretrained on large scale datasets have shown promising results across natural language tasks (Vaswani et al., 2017; Devlin et al., 2019; Raffel et al., 2020a; Brown et al., 2020; Rae et al., 2021; Chowdhery et al., 2022; Smith et al., 2022). However, as models scale ever larger, they become more expensive to train, making it unrealistic to frequently update model parameters.
On the other hand, many real world applications often necessitate adjusting model behavior. This dilemma is especially sharp in the case of factual
(world) knowledge that plays important role in realizing impressive performance of LLMs. It is well known that LLMs memorize large amounts of factual knowledge in their parameters (Petroni et al., 2019; Roberts et al., 2020; Geva et al., 2021),
which could potentially be out-dated or incorrect.
Even for moderate-size models, it is prohibitively expensive to retrain every time an update happens or a mistake is uncovered. Even if resources are ample, it is difficult to ensure that the modification of model parameters do not affect unrelated skills or knowledge.
In human cognition, *working memory* (George A. Miller, 1960) provides the biological brain with the ability to hold information temporarily to perform tasks such as conversation, reasoning, and mathematics in a way that is adaptive to the ever changing environment. As shown both experimentally and theoretically (Fuster, 1973; Ashby et al.,
2005), working memory is stored in sustained activations of neurons, as opposed to the long term memory which is stored in weights. Working memory is also the immediate information buffer that is accessed while performing conscious tasks. In particular, it is where the fusion of perceptual inputs and long term memory happens (Fukuda and Woodman, 2017). This suggests that a potential method to solve LLMs' pointwise knowledge update and correction problem is to control the working memory stored in activations, rather than editing the long term memory stored in the model weights.
As demonstrated by their powerful in-context few shot learning abilities (Brown et al., 2020),
LLM could utilize different activation patterns resulting from different contexts during inference to solve a diverse set of tasks without any changes in 1774
| Controllability | Robustness | |
|-------------------|---------------------------------------|--------------------------------------------------------------------------------------------------------------|
| Question | Dave Gilmour and Roger Waters were in | How has British art survived in Normandy? |
| which rock group? | | |
| Context | George Roger Waters (born 6 September 1943) is an English singer, . . . Later that year, he reunited with The Rolling Stones bandmates Mason, Wright and David Gilmour... | In Britain, Norman art primarily survives as stonework or metalwork, such as capitals and baptismal fonts... |
| KAFT (ours) | The Rolling Stones (from context). | In museums (irrelevant context). |
| Noisy FT | Pink Floyd | stonework or metalwork |
| UQA V2 11B | Pink Floyd | stonework or metalwork, such as capitals and baptismal fonts |
| Pretrained | Pink Floyd | As stonework and metalwork, such ascapi-tals and baptismal fonts |
Table 1: Examples of model outputs demonstrating that, in contrast with baselines, a model obtained by KAFT
is characterized by both improved controllability by a context that contradicts its parametric knowledge, and improved robustness against an irrelevant context, compared to baseline methods. Here, Pretrained refers to a T5 XXL model (Raffel et al., 2020b), which is also the underlying model for KAFT and Noisy Finetuning (FT). UQA V2 11B (Khashabi et al., 2022) is based on the T5 11B model.
the weights. It is natural to expect that the same would be true with factual knowledge. In particular, one could prepare a large list of natural language statements covering desired knowledge updates and corrections. At inference time, one can provide the relevant statements as context along with the input and hope that the model would perform the task based on the new knowledge presented in this context. Thus, if the model's working memory is indeed controllable by context, then a single model with static long term memory can produce different results based on a varying set of facts available in different contexts. However, we demonstrate that this approach may fall short for existing LLMs as they have great tendencies to ignore the context and stick to their own *parametric knowledge* - the world knowledge stored in its model parameters.
This raises a natural question:
Is it possible to design a mechanism to ensure that the context can reliably influence the model's working memory?
Note that any such mechanism has to take into account the possibility of encountering noisy contexts. For example, any retrieval system that selects the task-relevant context from a large collection of contexts will be imperfect and occasionally provide irrelevant context. In such cases, it's desirable that the model prediction does not get swayed by an irrelevant context. Interestingly, we show that the standard pretraining and finetuning methods do not ensure this behavior either. In fact, we demonstrate that it's the noise encountered during the training that often leads to the model ignoring the context.
In this work, we provide an affirmative answer to the aforementioned question and propose a novel approach - knowledge-aware finet*uning* (KAFT) –
to make an LLM's working memory controllable via *relevant* context while being robust against irrelevant context. Towards this, we aim to ensure that the model utilizes different types of information at its disposal in the following order:
## Relevant Context
$>$_model's parametric knowledge_ (1) $>$_irrelevant context_, (2)
where *a > b* indicates that a is prioritized over b. Thus, if the model decides that the context is relevant, it should ground its output in the context, ensuring the *controllability* of its working memory by the context. This is crucial when the context is in conflict with the model's parametric knowledge. On the other hand, when the context is irrelevant, the model should instead stick to its parametric knowledge; thus ensuring *robustness* of its working memory against noise.
Our contributions. We develop first LLMs that utilize different knowledge sources with a predefined order of priorities. Along the way, we develop a systematic understanding of the working memories of LLMs and identify their shortcomings. Our key contributions are summarized below.
| Robustness | Controllability | |
|--------------------------------------------------|-------------------|----|
| Standard (noisy) finetuning | ✗ | ✗ |
| Counterfactual finetuning (Longpre et al., 2021) | ✗ | ✓ |
| KAFT (our work) | ✓ | ✓ |
Table 2: Summary of our contributions.
1. We undertake a systematic *joint* study of both controllability and robustness of the working memory of LLMs. Focusing on question answering
(QA) tasks, we define the context-question relevance based on whether the context entails an answer to the question. We create a *novel benchmark* to measure the controllability by including contexts that imply an answer which contradicts the model's pretrained knowledge.1 Similarly, we benchmark robustness against irrelevant contexts. We conduct an extensive evaluation of LLMs with different sizes across multiple architectures (encoderdecoder and decoder-only) from T5 (Raffel et al.,
2020b) and PaLM (Chowdhery et al., 2022) family.
We make the following key observations:
(a) *LLMs could exhibit low controllability.* Our experiments consistently show that both pretrained and QA finetuned LLMs tend to ignore a context when it contradicts with model's world knowledge. We show that this problem persists and may intensify as the model becomes larger.
We further show that the noise in the (QA) finetuning set plays an important role in emergence of this behavior (cf. Sec. 4.2).
(b) LLMs may not be robust against context noise. We demonstrate that both pretrained and QA finetuned models are strongly interfered by irrelevant contexts, especially the ones that are on the same general topic as the underlying question (cf. Sec. 4.3).
2. We propose a novel method - knowledge aware finetuning (KAFT) - to directly enhance both controllability (Eq. 1) and robustness (Eq. 2) of an LLM. KAFT enhances the controllability by creating counterfactual data augmentations where the answer entity in the context is swapped to a different but plausible entity, in conflict with the ground truth (and potentially the model's world knowledge). As for enhancing robustness, KAFT
requires that the model should predict its pretrained closed-book answer rather than the ground truth answer whenever the context is irrelevant.
1We rely on in-context prompts in a closed book QA setup to measure the model's parametric knowledge.
3. Through extensive empirical evaluation, we show that KAFT-based models successfully demonstrate the coexistence of controllability and robustness of model's working memory (see Table 1 for an illustration).
## 2 Related Works
World knowledge in language models. Recent works established that LLMs memorize factual information present in the pretraining corpus.
E.g., Petroni et al. (2019) utilize language model analysis (LAMA) probing to show that BERT
models (Devlin et al., 2018) could act as knowledge bases. Roberts et al. (2020) reported similar findings for T5 models. It is therefore common practice to employ LLMs in tasks like closed book QA (Chowdhery et al., 2022).
Knowledge update in language models. Given that factual knowledge is ever-evolving, outdated memory of LLMs may lead to incorrect predictions (Lazaridou et al., 2021; Onoe et al., 2022).
Furthermore, during deployment, one may unearth mistakes that need correction. Frequent retraining from scratch with an updated and corrected corpus would be prohibitively expensive. Ideas around finetuning (Zhu et al., 2020) and continued learning (Jang et al., 2022) train the model with less but still significant resources. Multiple recent efforts have studied how these models store the factual knowledge (Geva et al., 2021) and methods to update model parameters given new knowledge (De Cao et al., 2021; Dhingra et al., 2022; Mitchell et al., 2022; Meng et al., 2022a,b). These strategies change weights in response to single updates, risking inadvertently affecting unrelated skills or knowledge and creating burden to potentially store multiple versions of LLMs. We focus on updating the model behavior by providing a suitable context and ensuring that the model's working memory is controllable by such contexts.
Contextual and parametric knowledge. Guu et al. (2020); Joshi et al. (2020); Petroni et al.
(2020) utilized retrieved context to assist language models in tasks such as QA. At the same time, LLMs memorize large amounts of knowledge in their parameters. Despite this dichotomy, only a few studies have previously addressed the relation between these two very different knowledge sources. Longpre et al. (2021) find that larger models have a greater tendency to ignore context in favor of their own parametric knowledge, and that the noise in the context in the finetuning set plays a big role in causing this behavior. We incorporate the algorithms proposed by Longpre et al. (2021)
for mitigating this problem as baselines in Sec. 4.4
(the *Relevant Only Finetuning* approaches), where we find such baselines lack robustness against irrelevant contexts (Fig. 1, 2). Kassner and Schütze
(2020) showed that language models tend to be easily misled by certain types of irrelevant contexts.
We observe similar phenomena in QA and show that our proposed KAFT leads to more robust models against irrelevant contexts. Finally, Pan et al.
(2021) considers a scenario where some context sources may be less trustworthy than the model's parametric knowledge. This scenario can be captured by an extension of our framework Eq.(1-2). For example, given three sources, one could enforce the following precedence order: source1 >
source2 > model's own knowledge > source3 >
irrelevant contexts.
Notions of controllability and robustness. In control theory, controllability (Ogata, 1996) refers to the ability of using external inputs to manipulate system to reach all possible states. In the spirit of this definition, this paper measure the controllability of an LM's working memory by external contexts. In the framework of controlled text generation (Zhang et al., 2022; Hu and Li, 2022), the notion of controllability explored here is a special type of fine-grained semantic control of the model's behavior with the content of the context.
Notions of robustness. (Liang et al., 2022; Omar et al., 2022) survey many notions of robustness of language models around the notion of the invariance of model's behaviors when the input is perturbed (for example, expressing similar semantic meanings in different ways). Our robustness benchmark is an extreme and input-dependent version under this framework. In our evaluations, the input contains two parts: the context and the question. In this work, a robust model's response is invariant to large perturbations in the semantic content of the context, as long as these changes are not relevant to the question. During the preparation of this manuscript, we were made aware of a parallel and independent investigation by Neeman et al. (2022) that shares some important aspects of our work.
| Context type | Target sequence |
|------------------------------------------------|---------------------------------------------------|
| relevant context | ${ground truth answer} (from context) |
| irrelevant context | ${pretrained model's answer} (irrelevant context) |
| empty context | ${pretrained model's answer} (empty context) |
| counterfactual context | ${counterfactual answer} (from context) |
| Table 3: The output formats of the KAFT model. | |
## 3 Methods
For concreteness, consider a reading comprehension QA task where the model takes question q together with a context c as its input. The question has an answer label a. We also need a relevance label r denoting whether c entails a.
Starting with a pretrained LM M, we would like to build a model M0such that when the context c is relevant, its answer is always grounded in c, when c is irrelevant, it sticks to the pretrained model's answer. In equations:
$$\begin{array}{l l}{{M^{\prime}(c+q)=a}}&{{\qquad\qquad(3)}}\\ {{M^{\prime}(c+q)=M(q)}}&{{\qquad(4)}}\end{array}$$
$$\begin{array}{l}{{r=1:}}\\ {{r=0:}}\end{array}$$
r = 1 : M0(c + q) = a (3)
r = 0 : M0(c + q) = M(q) (4)
where + denotes string concatenation. This establishes the priority order of knowledge sources as in Eq. (1 & 2): if there is a conflict between a relevant context c and M's parametric knowledge, then the output should be consistent with c. In addition, irrelevant context should have no influence on the model's output. Note that even though we are separating relevant vs irrelevant context here, the model does not know r a priori. It has to determine r based on the semantics of c and q.
In the KAFT data, r = 1 cases include relevant or counterfactual context, where a is the ground truth or counterfactual answer, respectively; r = 0 cases include empty or irrelevant contexts. Here the label is given by the pretrained model's answer to the same question in a few-shot closed book setting, reflecting the model's parametric knowledge. To provide more interpretability, we make the model output its classification of the context's relevance along side the answer itself. See Table 3 for details.
## 3.1 Datasets
We construct KAFT based on several public datasets, including SQuAD 2.0 (Rajpurkar et al.,
2018), T-REx (Elsahar et al., 2018), QASC (Khot et al., 2020), and TriviaQA (Joshi et al., 2017).
They cover several different QA formats, including multiple choice (QASC), Cloze (TReX), extractive
(SQuAD), and open domain (TriviaQA). For each dataset, we may construct different types of context and corresponding labels as summarized in Table 4.
## 3.2 Models
We select two families of pretrained LLMs: T5
(Raffel et al., 2020b) representing the encoderdecoder architecture and PaLM (Chowdhery et al.,
2022) representing the decoder only architecture.
We include all three PaLM models (8B, 62B and 540B), while with T5 we restrict to the largest sizes
(XL and XXL, with 3B and 11B parameters, respectively) because the smaller ones do not respond well to in-context few shot prompts, making it difficult to measure their parametric knowledge.
## 3.3 Relevant Context
We define the relevance of a context by whether it logically entails an answer to the question, which is a strong requirement - even if a piece of context is on the same topic of the question or contain the answer label, it might still be irrelevant. In practice, this happens often among retrieved results. In Sec 4.4, we show that if the model is still required to fit on to the ground truth label when given an irrelevant context, then the model becomes more likely to ignore relevant contexts. It is therefore crucial to strive towards precise logical entailment when building relevant context. We apply several techniques to improve the semantic connection between the context and the QA pair as shown in Table. 4. More details can be found in Appendix A.1.
## 3.4 Irrelevant Context
An irrelevant context is any context that does not entail the answer. An easy irrelevant context is completely off topic. We obtain them with random sampling for all datasets. A hard irrelevant context is on the same topic, sometimes discussing the same entities involved in the QA pair but does not logically entail the answer. SQuAD 2.0 already contains human labels on whether the answer can be derived from the context, thus providing hard irrelevant contexts. TriviaQA provides somewhat extensive paraphrases for each answer. We filter the retrieved contexts to find ones that do not contain any answer paraphrase, and use them as hard irrelevant context.
## 3.5 Probing Pretrained Knowledge
We first use the pretrained model to generate M(q)
in Eq. 4, which are then used to assemble the KAFT
finetuning dataset according to Eq. 4. We use handengineered few-shot knowledge probing prompts that condition the model to answer a question according to its world knowledge acquired during pretraining. In Appendix A.3, we provide more details on the construction of these prompts.
## 3.6 Counterfactuals
To train the model to be controllable by the context, we explicitly engineer plausible training data where the context is in conflict with the model's pretrained world knowledge. Given a triple of question, answer, and relevant context, we use a pretrained T5 XXL model to generate a triple of question, counterfactual answer, and counterfactual context with prompt engineering. We apply several filtering and postprocessing techniques to ensure the quality.
Details are given in Appendix A.4.
## 3.7 Metrics
In this section, we define metrics that measures controllability and robustness. All results are from single runs.
Controllability. To measure controllability, we supply the model with a counterfactual context and examine whether it can output the corresponding counterfactual answer. For a fair comparison, we select questions which all five pretrained models can answer correctly in a closed book few-shot setting, which are referred to as head questions. Since they are likely well represented in the pretraining set, such questions are challenging as we swap the answer to counterfactuals. Since we don't have any paraphrases of the counterfactual answer, we choose to use thresholded unigram recall to measure the performance. In particular, a model output is rated positive if the output of the model contains
> 80% of the answer unigrams, with stop-words removed.
Robustness. To measure robustness, we use the human labeled "impossible" slice of SQuAD 2.0, since SQuAD 2.0 contains many examples where the context is on the same general topic of the question but does not contain the answer. We measure the rate when the model successfully avoids extracting answers from such irrelevant contexts.
The avoidance is considered successful if the con-
| Dataset | Relevant Context | Irrelevant context | Counterfactual context | |
|----------------------------|------------------------------------------------------------|-------------------------|-------------------------------|------------------------------|
| TReX | Sampled irrelevant statements | Sampled | Sampled irrelevant statements | |
| and one relevant statement | and one relevant statement with the answer entity replaced | | | |
| SQuAD 2.0 | From original dataset | Original | human | Relevant context with answer |
| labeled and sampled | span replaced by counterfactual answer | | | |
| QASC | 2-stage retrieved statements | Sampled | None | |
| and one golden statement | | | | |
| TriviaQA | Retrieved contexts containing | | | |
| (wiki split) | the answer and overlapping with the question | Retrieved contexts that | Relevant context with answer | |
| do not contain the answer | span replaced by counterfactual answer | | | |
text contains less than 50% of the unigrams in the model's prediction, removing stop words.
## 3.8 Baselines
Pretrained. We evaluate the pretrained model's controllability and robustness in a zero shot reading comprehension QA setup. The context is concatenated with the question in input sequence.
Noisy finetuning. In this approach, the label is the ground truth answer whether the context is relevant or not. This is a standard method implicitly used in most QA datasets.2In this work, we construct this baseline for KAFT by first removing all counterfactual augmentations and then replace all labels with the ground truth label.
Relevant only finetuning. The approach where only relevant context and the corresponding ground truth label are used during finetuning, which is shown to improve controllability in (Longpre et al.,
2021). As a baseline for KAFT we remove all counterfactual and irrelevant augmentations and only keep the relevant slice of our finetuning data.
UQA V2. The Unified QA 11B (Khashabi et al.,
2022) model, which is a general purpose QA model finetuned on a collection of 20 QA datasets. We take the largest model (11B) in the UQA V2 family as a baseline and compare with KAFT T5 XXL
which is of similar size in Fig. 2. Since UQA V2 contains SQuAD 2.0 in its training set, where the label for irrelevant context is an empty string, it 2As a notable exception, SQuAD 2.0 has empty strings as labels for its irrelevant context.
does not completely follow the noisy finetuning prescription introduced earlier.
KAFT noCF. The KAFT method with no counterfactual augmentations.
KAFT noCF and noTQA. The KAFT method with no counterfactual augmentations and no TriviaQA slice.
We include more details on the hyper parameters of model finetuning, prompts, post processing, data filtering, and metric computations in the Appendix A.2.
## 4 Results
In this section we measure the controllability and robustness of KAFT with the metrics defined in Sec. 3.7 and compare with baselines in Sec. 3.8.
## 4.1 Larger Models May Ignore More Contexts
Most benchmarks improve as a function of model size, including TriviaQA exact match (EM) accuracy, as shown in the first row of Fig. 1. However, we found that larger models may ignore the context more. This may happen for the pretrained model, but the behavior is especially severe for models finetuned on QA tasks using baseline approaches.
We demonstrate this effect in the second row of Fig. 1. This highlights a need for designing new methods to improve the controllability of LLMs.
## 4.2 Kaft And Controllability
One of the most striking phenomenon observable from Fig. 1 is that KAFT achieve immense improvements in controllability while maintaining
![6_image_0.png](6_image_0.png)
performance on standard QA. For example, the KAFT PaLM 540B model achieves 24X better controllability compared to the noisy finetuning when the context is in conflict with the model's pretrained factual knowledge, while performing similarly on regular contexts. In addition, KAFT is the only finetuning approach that consistently achieves better controllability than the pretrained models. Most of this gain originates from the counterfactual augmentation where the model explicitly learns the priority order in Eq. 1 when a conflict does appear. However both relevant only finetuning and KAFT
without counterfactual augmentations also exhibit stronger controllability compared to noisy finetuning, even when there is no explicit counterfactual augmentations in both cases. The reason is that both approaches avoid irrelevant contexts that does not imply an answer. Thus the model is less prone to ignore the context compared to noisy finetuning.
## 4.3 Kaft And Robustness
For the pretrained model, the robustness decreased slightly from T5 XL to XXL and from PaLM 8B
to 62B (see third row in Fig. 1). But the difference is small. Relevant only finetuning suffers the most loss because it does not have irrelevant contexts during training. Noisy finetuning only alleviates this loss slightly, still vastly underperforming the pretrained model.
KAFT, on the other hand, significantly boosts robustness. For example, the KAFT PaLM 540B
model achieves 6X better robustness compared to noisy finetuning and 1.6X better robustness compared to the pretrained model. Adding the counterfactual augmentation slightly reduces robustness, but the difference is comparably small.
## 4.4 Analysis And Ablation Studies
We perform ablation studies to understand the effect of different augmentations in KAFT, as well as the general effect of added context noise.
## Effect Of Kaft Data Augmentations. In Fig. 2,
we systematically reduce the sampling rate of different data augmentation slices when training KAFT-T5 XXL models. We observe that reducing or removing the counterfactual and irrelevant data augmentations severely reduces controllability and robustness, respectively. In addition, KAFT models significantly out-perform the very strong baselines of Unified QA V2 on both controllability and robustness, showing that KAFT cannot
| Method | Controllability | Controllability | Est. Noise ratio from |
|---------------------|-------------------|-----------------------|-------------------------|
| PALM 62B | T5 XXL | relevant slice of TQA | |
| NoisyFT | 15% | 37% | 63% |
| KAFT noCF EM filter | 20% | 51% | 35% |
| KAFT noCF | 33% | 54% | 5% |
| KAFT noCF and noTQA | 52% | 69% | 0% |
![7_image_0.png](7_image_0.png)
| Model | Pretrained | KAFT |
|-----------|--------------|--------|
| T5 XL | 6.1% | 7.2% |
| T5 XXL | 6.6% | 6.8% |
| PaLM 8B | 3.3% | 4.1% |
| PaLM 62B | 1.4% | 1.3% |
| PaLM 540B | 0.6% | 0.7% |
be replaced by simply adding more supervised data.
KAFT models memorize few counterfactual. One potential risk of adding counterfactual context-answer pairs in the training set is unwanted memorization. We check whether KAFT models memorizes the counterfactual answers in the training set using the same prompts we used to probe the pretrained model's closed book answers. We find very little memorization: e.g., the KAFT-PALM 540B model only memorized 0.1%
more counterfactuals compared to the pretrained PALM model after KAFT finetuning. Results for other models are similar (cf. Table. 6). The model learns the desirable correlation between the context and the output, rather than memorizing the counterfactual answers.
## Context Noise Reduces Controllability. By Context
noise we refer to the subset of training data where the model is required to produce an answer that is not implied by the provided context, or required to ignore the context while it actually imply the answer. On the flip side, we find that it is possible to achieve good controllability without explicit counterfactual augmentations if we can reduce context noise in the training data.
Table. 5 shows how different amounts of context noise impact the model's controllability. In particular, because TriviaQA contexts are produced by a retrieval system, it is not guaranteed that a context logically implies the answer. This is even true when the context contains exact matches of the answer. On the other hand, TReX, SQuAD
and QASC contains much less context noise given our KAFT construction methods Sec. A.1. Due to this intrinsic noise, including TriviaQA in KAFT
caused a negative impact on controllability, especially when there are no explicit counterfactual augmentations. The first row shows noisy finetuning, which contains the most noise. The last row shows that KAFT with TriviaQA data removed.
Even though this model is not finetuned on TriviaQA, it has the best controllability. The second row uses a simpler and more noisy filter than KAFT by considering a context to be relevant if it contains the answer.
## 5 Conclusion
In this work, we analyzed the interaction between LLMs' parametric knowledge (stored in its model parameters) and knowledge contained in informational contexts provided as a part of the input sequence. We find that models are prone to ignoring the context, especially when the context is in conflict with the parametric knowledge. In addition, the model's output can be swayed by irrelevant context even when there is no logical link between such context and the model's task at hand. We quantitatively characterize these behaviours as controllability and robustness of LLMs when one attempts to control their working memory with noisy context. We proposed a new finetuning method, KAFT, that utilizes data augmentations to substantially boost the controllability and robustness of an LLM without significantly affecting its performance on standard QA tasks. With KAFT, we can build LLMs with a clear order of priority when utilizing information from difference sources, including its own parametric knowledge.
## 6 Limitations 6.1 Multiple Sources
In this work, we trained a model that can utilize two sources of information with predefined priority order, with one of them being the model's own parametric knowledge. While this is the first step towards LLM's information utilization with clear, predefined priorities, we acknowledge that real world applications could be more nuanced. For example, KAFT may need to be expanded to treat multiple sources of information with different trustworthiness which may translate to the following desired priority order:
relevant context 1 $>$ relevant context 2 $>$ model's parametric knowledge $>$ relevant context 3 $>$ all irrelevant context
This orders of priority determines the handling of conflicts. In addition, any irrelevant context should have no influence on the model's output.
## 6.2 Multitask / In-Context Learning
KAFT currently only explores QA tasks. We acknowledge that the applications of LLMs go far beyond a single style of tasks. We have not yet achieved controlled utilization of information in a task agnostic way. Ideally, the model should learn to prioritize retrieved relevant information in any task that LLMs are capable of, including in-context few-shot or zero-shot scenarios.
## 6.3 Dynamically Enforce "Learning To Ignore"
In this work, it was necessary to build a different KAFT dataset for each model. Because in Eq. 4, whenever the context is irrelevant, the model fits on to the pretrained model's answers which depends on the model. This presents additional workload when applying KAFT to new models. In future, it's worthwhile to explore a dynamic methods that generates closed booked answers during training.
At each training step involving irrelevant context, we could run the forward pass twice, one with the provided context and another without. Then we can compute a new loss:
$r=1:$ Loss = CE($M^{\prime}(c+q),$label) (9) $r=0:$ Loss = CE($M^{\prime}(c+q),$ stop_gradient($M^{\prime}(q))$) (10)
where + denotes string concatenation. This is different from Eq. 4 as it fits on to the closed book answers of the current version of the finetuned model, rather than that of the pretrained model.
It's not yet clear whether this would achieve better robustness. It's also more expensive because two forward passes are necessary for each training example. However it might be justified by the improved simplicity in directly applying KAFT with minimal prepossessing.
This approach is somewhat similar to classifier free guidance (Ho and Salimans, 2022), which has been successfully applied to image generation models. One added benefit of classifier free guidance is the ability to tune the strength of context conditioning after the model is trained, which is another interesting direction to explore here.
(5) (6) (7) (8) (1) $\frac{1}{2}$ (2) $\frac{1}{2}$ (3) $\frac{1}{2}$ (4) $\frac{1}{2}$ (5) $\frac{1}{2}$ (6) $\frac{1}{2}$ (7) $\frac{1}{2}$ (8) $\frac{1}{2}$
## 7 Ethics Statement: Broader Impacts And Potential Risks
In this work, we study approaches to finetune LLMs to make them more grounded and faithful to provided contexts. If our method is applied broadly, it has the potential to correct the unwanted or biased behavior of LLMs with a carefully curated set of natural language instructions without expensive retraining. This provides one feasible avenue towards improving language models to correct a potential bias that is embedded in the pretraining corpus. At the same time, we acknowledge that our method does not completely address such issues on its own, because 1) instances where the model's working memory is not controllable by the context even after KAFT is applied may remain; 2)
the finetuning dataset used in KAFT may inadvertently introduce or strengthen certain biases. For example, we acknowledge that all KAFT datasets used in this study are English datasets, and so it is a valuable future work direction to extend KAFT
to be more representative of all languages.
In addition, we acknowledge that the use of LLMs can be expensive in terms of energy usage.
We utilize existing pretrained LLMs such as T5 and PaLM. KAFT's energy usage is small compared to the pretraining process, but it still leaves a significant energy footprint. In particular, the most expensive training, KAFT-PaLM 540B, takes 12190 TPU v4 hours. It is our hope that methods such as KAFT will provide a way for reducing the need for frequently retraining LLMs, and thus could lead to a more environmentally friendly experimentation.
## Acknowledgements
We would like to thank Slav Petrov for his insightful comments on an early draft of the paper that significantly helped improve the presentation of our work. We would also like to thank the reviewers and meta-reviewer for their thoughtful feedback on our submission.
## References
F. Gregory Ashby, Shawn W. Ell, Vivian V. Valentin, and Michael B. Casale. 2005. FROST: A Distributed Neurocomputational Model of Working Memory Maintenance. *Journal of Cognitive Neuroscience*, 17(11):1728–1743.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel HerbertVoss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.
2020. Language models are few-shot learners. In Advances in NeurIPS, volume 33, pages 1877–1901.
Curran Associates, Inc.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022.
Palm: Scaling language modeling with pathways.
Nicola De Cao, Wilker Aziz, and Ivan Titov. 2021.
Editing factual knowledge in language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6491–6506, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of NAACL*, pages 4171–
4186.
Bhuwan Dhingra, Jeremy R Cole, Julian Martin Eisenschlos, Daniel Gillick, Jacob Eisenstein, and William W Cohen. 2022. Time-aware language models as temporal knowledge bases. *Transactions* of the Association for Computational Linguistics, 10:257–273.
Hady Elsahar, Pavlos Vougiouklis, Arslen Remaci, Christophe Gravier, Jonathon Hare, Frederique Laforest, and Elena Simperl. 2018. T-REx: A large scale alignment of natural language with knowledge base triples. In *Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)*, Miyazaki, Japan. European Language Resources Association (ELRA).
Keisuke Fukuda and Geoffrey F. Woodman. 2017. Visual working memory buffers information retrieved from visual long-term memory. Proceedings of the National Academy of Sciences, 114/20.
J M Fuster. 1973. Unit activity in prefrontal cortex during delayed-response performance: neuronal corre-
lates of transient memory. *Journal of Neurophysiology*, 36(1):61–78. PMID: 4196203.
Karl H. Pribram George A. Miller, Eugene Galanter.
1960. *Plans and the structure of behavior*. Holt, New York.
Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. 2021. Transformer feed-forward layers are key-value memories. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5484–5495, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. Realm: Retrievalaugmented language model pre-training.
Jonathan Ho and Tim Salimans. 2022. Classifier-free diffusion guidance.
Zhiting Hu and Li Erran Li. 2022. A causal lens for controllable text generation. *CoRR*,
abs/2201.09119.
Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun KIM, Stanley Jungkyu Choi, and Minjoon Seo. 2022. Towards continual knowledge learning of language models. In *International Conference on Learning Representations*.
Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics
(Volume 1: Long Papers), pages 1601–1611, Vancouver, Canada. Association for Computational Linguistics.
Mandar Joshi, Kenton Lee, Yi Luan, and Kristina Toutanova. 2020. Contextualized representations using textual encyclopedic knowledge. *CoRR*, abs/2004.12006.
Nora Kassner and Hinrich Schütze. 2020. Negated and misprimed probes for pretrained language models:
Birds can talk, but cannot fly. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7811–7818, Online. Association for Computational Linguistics.
Daniel Khashabi, Yeganeh Kordi, and Hannaneh Hajishirzi. 2022. Unifiedqa-v2: Stronger generalization via broader cross-format training.
Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. UNIFIEDQA: Crossing format boundaries with a single QA system. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 1896–1907, Online. Association for Computational Linguistics.
Tushar Khot, Peter Clark, Michal Guerquin, Peter Alexander Jansen, and Ashish Sabharwal. 2020.
Qasc: A dataset for question answering via sentence composition. *ArXiv*, abs/1910.11473.
Angeliki Lazaridou, Adhi Kuncoro, Elena Gribovskaya, Devang Agrawal, Adam Liska, Tayfun Terzi, Mai Gimenez, Cyprien de Masson d'Autume, Tomas Kocisky, Sebastian Ruder, Dani Yogatama, Kris Cao, Susannah Young, and Phil Blunsom. 2021.
Mind the gap: Assessing temporal generalization in neural language models. In Advances in Neural Information Processing Systems, volume 34, pages 29348–29363. Curran Associates, Inc.
Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D. Manning, Christopher Ré, Diana Acosta-Navas, Drew A.
Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri Chatterji, Omar Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, and Yuta Koreeda. 2022. Holistic evaluation of language models.
Shayne Longpre, Kartik Perisetla, Anthony Chen, Nikhil Ramesh, Chris DuBois, and Sameer Singh. 2021. Entity-based knowledge conflicts in question answering. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7052–7063, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. 2022a. Locating and editing factual knowledge in gpt. arXiv preprint arXiv:2202.05262.
Kevin Meng, Arnab Sen Sharma, Alex Andonian, Yonatan Belinkov, and David Bau. 2022b. Massediting memory in a transformer.
Eric Mitchell, Charles Lin, Antoine Bosselut, Chelsea Finn, and Christopher D Manning. 2022. Fast model editing at scale. In International Conference on Learning Representations.
Ella Neeman, Roee Aharoni, Or Honovich, Leshem Choshen, Idan Szpektor, and Omri Abend. 2022. to appear.
Katsuhiko Ogata. 1996. *Modern Control Engineering*
(3rd Ed.). Prentice-Hall, Inc., USA.
Marwan Omar, Soohyeon Choi, DaeHun Nyang, and David Mohaisen. 2022. Robust natural language
processing: Recent advances, challenges, and future directions. *CoRR*, abs/2201.00768.
Yasumasa Onoe, Michael JQ Zhang, Eunsol Choi, and Greg Durrett. 2022. Entity cloze by date: What lms know about unseen entities. arXiv preprint arXiv:2205.02832.
Liangming Pan, Wenhu Chen, Min-Yen Kan, and William Yang Wang. 2021. Contraqa: Question answering under contradicting contexts. *CoRR*, abs/2110.07803.
Fabio Petroni, Patrick S. H. Lewis, Aleksandra Piktus, Tim Rocktäschel, Yuxiang Wu, Alexander H.
Miller, and Sebastian Riedel. 2020. How context affects language models' factual predictions. *CoRR*, abs/2005.04611.
Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language* Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 2463–2473, Hong Kong, China. Association for Computational Linguistics.
Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John Mellor, Irina Higgins, Antonia Creswell, Nat McAleese, Amy Wu, Erich Elsen, Siddhant Jayakumar, Elena Buchatskaya, David Budden, Esme Sutherland, Karen Simonyan, Michela Paganini, Laurent Sifre, Lena Martens, Xiang Lorraine Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena Gribovskaya, Domenic Donato, Angeliki Lazaridou, Arthur Mensch, Jean-Baptiste Lespiau, Maria Tsimpoukelli, Nikolai Grigorev, Doug Fritz, Thibault Sottiaux, Mantas Pajarskas, Toby Pohlen, Zhitao Gong, Daniel Toyama, Cyprien de Masson d'Autume, Yujia Li, Tayfun Terzi, Vladimir Mikulik, Igor Babuschkin, Aidan Clark, Diego de Las Casas, Aurelia Guy, Chris Jones, James Bradbury, Matthew Johnson, Blake Hechtman, Laura Weidinger, Iason Gabriel, William Isaac, Ed Lockhart, Simon Osindero, Laura Rimell, Chris Dyer, Oriol Vinyals, Kareem Ayoub, Jeff Stanway, Lorrayne Bennett, Demis Hassabis, Koray Kavukcuoglu, and Geoffrey Irving. 2021. Scaling language models: Methods, analysis & insights from training gopher.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020a. Exploring the limits of transfer learning with a unified text-totext transformer. *Journal of Machine Learning Research*, 21(140):1–67.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020b. Exploring the limits of transfer learning with a unified text-totext transformer. *Journal of Machine Learning Research*, 21(140):1–67.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018.
Know what you don't know: Unanswerable questions for squad.
Adam Roberts, Colin Raffel, and Noam Shazeer. 2020.
How much knowledge can you pack into the parameters of a language model? arXiv preprint arXiv:2002.08910.
Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, Elton Zhang, Rewon Child, Reza Yazdani Aminabadi, Julie Bernauer, Xia Song, Mohammad Shoeybi, Yuxiong He, Michael Houston, Saurabh Tiwary, and Bryan Catanzaro.
2022. Using deepspeed and megatron to train megatron-turing nlg 530b, a large-scale generative language model.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in NeurIPS*.
Hanqing Zhang, Haolin Song, Shaoyu Li, Ming Zhou, and Dawei Song. 2022. A survey of controllable text generation using transformer-based pre-trained language models. *CoRR*, abs/2201.05337.
Chen Zhu, Ankit Singh Rawat, Manzil Zaheer, Srinadh Bhojanapalli, Daliang Li, Felix Yu, and Sanjiv Kumar. 2020. Modifying memories in transformer models. *arXiv preprint arXiv:2012.00363*.
## A Appendix A.1 Details On Relevant Context Construction
SQuAD 2.0 has human labels for this particular aspect. But most datasets do not. For TReX, the question is cloze style where we mask a certain entity within the triples statement. We build a relevant context by concatenating the original statements with a number of sampled irrelevant statements, after randomly shuffling their order. This ensures the relevance of the context while keeping it challenging. The training set of QASC provides 2 gold statements that implies the answer via a two hop reasoning. We are using the 2-stage retrieved collection of statements similar to (Khashabi et al.,
2020). We find that the gold statements, or semantically equivalent ones, often exist in the retrieved results. To improve relevance we will randomly add one of the two golden statements and mix it in the retrieved context to build a relevant context for the KAFT training set. We manually checked on a random small subset that this ensures a relevance ratio around 90%.
TriviaQA is especially challenging because there is no human labeled gold context, while all existing contexts are obtained by a retrieval system. We filter the context by whether they contain the answer. This turned out to be insufficient and leaves a large fraction of irrelevant contexts that do not logically entail the answer. We apply additional filters based on the unigram overlaps of the context with the question, as well as based on the output of a logically entailment model.
## A.2 Training Details
We use a learning rate of 0.0002 on all models. The batch size is 32 for all PaLM models and 16 for T5 models. For T5 XL we pick the checkpoint at 100000 finetune steps and for T5 XXL models we pick the checkpoint at 90000 steps. For PaLM
8B and 62B, we pick the checkpoint at 40000 finetuning steps. For PaLM 540B we pick the checkpoint at 15000 steps. These steps are generally determined by avoiding overfitting. However for larger models we are also constrained by compute resources.
## A.3 Knowledge Probing Prompts
In this section we provide details on how the knowledge probing prompts in Table 7-9 are constructed.
In particular, our goal is to make the model only answer questions where it knows the answer. To do this, we construct prompts that contains two types of QA pairs:
1. Regular QA pairs if the model can answer the specific question correctly in multiple few-shot in-context settings.
2. QA pairs where the answer is "I don't know" for T5 models or "?" for PaLM models, if the model cannot answer the question correctly in most few-shot in-context settings.
With such specially designed prompts, we encourage the model to abstain if it does not know the answer. The counterfactual context used in the controllability benchmark is constructed using the same method. However we ensure no entities overlaps exist between the prompts that generates the training data vs the test data.
## A.4 Counterfactual Generation
To train the model to be controllable by the context, we explicitly engineer plausible training data where the context is in conflict with the model's pretrained world knowledge. This is done in 3 steps:
1. We apply a diverse set of few-shot prompts similar to Table 10 to condition a pretrained T5 XXL model to generate plausible counterfactual answers.
2. We remove examples if the generation is unsuccessful, when it's either too long or have a large overlap with the original answer.
3. We replace all occurrences of the original answer with the counterfactual answer in the original context to build the counterfactual context.
With this approach, we build a new QA data set where the answer implied by the context is likely to be in conflict with the model's existing knowledge.
## A.5 Evaluations For Counterfactual Memorization And Relevance Classification
One potential danger of adding counterfactual context-answer pairs in the training set is unwanted memorization. We check whether KAFT models memorizes the counterfactual answers in the training set using the same prompts we used to probe the pretrained model's closed book answers. The results in Table 6 show that KAFT has little unwanted memorization of counterfactual answers. Instead the model learns the desirable correlation between the context and the output, as demonstrated in Figure 1.
As illustrated in Table 1 and described in Table 3, we require the model to generate its judgements on whether the provided context is relevant. As a sanity check, we evaluated this part of the output on 1000 class-balanced SQuAD2 validation questions, the relevance prediction from KAFT-T5-XXL has 84% precision and 98% recall.
## A.6 Postprocessing
After we obtain the output from the pretrained model to the question, which is concatenated after the knowledge probing prompt, we need to postprocess it and removed unwanted components. We do two types of post-processing on the pretrained predictions:
| Model | Standard QA Knowledge Probe Prompts Q: Into what body of water does the Hudson River terminate? A: The Atlantic Ocean. Q: What method formally adds inverses to elements to any monoid? A: I don't know. Q: Supply and what else causes child labour to still exist today? A: demands. Q: Who is the prime minister of Japan in 2015? A: Shinzo Abe. Q: Who is responsible for judicial review? A: Courts. Q: what was the name of the other HD channel Virgin media could carry in the future? A: I don't know. Q: What is the term for a hyperactive immune system that attacks normal tissues? A: autoimmunity. Q: What complexity class is commonly characterized by unknown algorithms to enhance solvability? A: I don't know. Q: Which nation contains the majority of the amazon forest? A: Brazil. |
|-------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| T5 XL | Q: Into what body of water does the Hudson River terminate? A: The Atlantic Ocean. Q: What method formally adds inverses to elements to any monoid? A: I don't know. Q: Supply and what else causes child labour to still exist today? A: demands. Q: Who is the prime minister of Japan in 2015? A: Shinzo Abe. Q: Who is responsible for judicial review? A: Courts. Q: What religion did the French spread along with their imperialism? A: Catholicism. Q: The symbol for mercuric oxide is? A: HgO. Q: What religion did the Yuan discourage, to support Buddhism? A: Taoism. |
| T5 XXL | Only answer the questions you know the answer to: Q: Into what body of water does the Hudson River terminate? A: The Atlantic Ocean. Q: What year was the county of Hampshire officially named? A: ?. Q: Who said the following statement? "Enlightenment is mans emergence from his self-incurred immaturity". A: Immanuel Kant. ´ Q: What method formally adds inverses to elements to any monoid? A: ?. Q: What King and former Huguenot looked out for the welfare of the group? A: Henry IV. Q: The principle of faunal succession was developed 100 years before whose theory of evolution? A: Charles Darwin. Q: Who is the hero who killed a dragon on the Drachenfels? A: Siegfried. |
| PaLM 8B | Only answer the questions you know the answer to: Q: Into what body of water does the Hudson River terminate? A: The Atlantic Ocean. Q: What year was the county of Hampshire officially named? A: ?. Q: Who said the following statement? "Enlightenment is man's emergence from his self-incurred immaturity". A: Immanuel Kant. Q: What method formally adds inverses to elements to any monoid? A: ?. |
| PaLM 62B | Q: Who was the US Secretary of State in 2001? A: Colin Bowell. Q: The principle of faunal succession was developed 100 years before whose theory of evolution? A: Charles Darwin. Q: Who is the hero who killed a dragon on the Drachenfels? A: Siegfried. Q: When did the European Anti-Fraud Office investigate John Dalli? A: 2012. Q: What religion did the French spread along with their imperialism? A: Catholicism. Q: When did Costa v ENEL take place? A: 1964. Only answer the questions you know the answer to: Q: Into what body of water does the Hudson River terminate? A: New York Bay. Q: What year was the county of Hampshire officially named? A: ?. Q: Who said the following statement? "Enlightenment is mans emergence from his self-incurred immaturity". A: Immanuel Kant. ´ Q: What method formally adds inverses to elements to any monoid? A: ?. |
| PaLM 62B | Q: When was the Parental Leave directive created? A: 1996. Q: How many megaregions are there in the United States? A: 11. Q: Where is DÓlier Street? A: Dublin. Q: What is the speed limit set to reduce consumption? A: 55 mph. Q: What channel replaced Sky Travel? A: Sky Three. Q: Who founded McKinsey & Company? A: James O. McKinsey. |
| Table 7: Knowledge probing prompts for standard QA datasets. These prompts are used to probe the pretrained | |
Table 7: Knowledge probing prompts for standard QA datasets. These prompts are used to probe the pretrained
model's answer to questions in SQuAD 2.0 and TriviaQA.
| Model | Cloze Style QA Knowledge Probe Prompts The Hudson River terminate into ___ . A: The Atlantic Ocean. ___ formally adds inverses to elements to any monoid. A: ?. Supply and ___ causes child labour to still exist today? A: demands. ___ was the prime minister of Japan in 2015? A: Shinzo Abe. ___ is responsible for judicial review. A: Courts. ___ was the name of the other HD channel Virgin media could carry in the future. A: ?. ___ is defined as a hyperactive immune system attacking normal tissues? A: autoimmunity. ___ complexity class is commonly characterized by unknown algorithms to enhance solvability. A: ?. ___ contains the majority of the amazon forest? A: Brazil. |
|-------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| T5 XL | The Hudson River terminate into ___ . A: The Atlantic Ocean. ___ formally adds inverses to elements to any monoid. A: ?. Supply and ___ causes child labour to still exist today? A: demands. ___ was the prime minister of Japan in 2015? A: Shinzo Abe. ___ is responsible for judicial review. A: Courts. The French spread along with their imperialism the ___ religion. A: Catholicism. The symbol for mercuric oxide is ___. A: HgO. The Yuan discouraged ___ to support Buddhism. A: Taoism. |
| T5 XXL | Only answer the questions you know the answer to: The Hudson River terminate into ___ . A: The Atlantic Ocean. The county of Hampshire was officially named in ___ . A: ?. ___ said "Enlightenment is mans emergence from his self-incurred immaturity". A: Immanuel Kant. ´ ___ formally adds inverses to elements to any monoid. A: ?. King ___ and former Huguenot looked out for the welfare of the group. A: Henry IV. The principle of faunal succession was developed 100 years before ___'s theory of evolution. A: Charles Darwin. ___ is the hero who killed a dragon on the Drachenfels? A: Siegfried. |
| PaLM 8B | Only answer the questions you know the answer to: The Hudson River terminate into ___ . A: The Atlantic Ocean. The county of Hampshire was officially named in ___ . A: ?. ___ said "Enlightenment is mans emergence from his self-incurred immaturity". A: Immanuel Kant. ´ ___ formally adds inverses to elements to any monoid. A: ?. ___ was the US Secretary of State in 2001. A: Colin Bowell. The principle of faunal succession was developed 100 years before ___'s theory of evolution? A: Charles Darwin. ___ is the hero who killed a dragon on the Drachenfels. A: Siegfried. The European Anti-Fraud Office investigate John Dalli in year ___ . A: 2012. The French spread along with their imperialism the ___ religion. A: Catholicism. Costa v ENEL happend in year ___ . A: 1964. |
| PaLM 62B | Only answer the questions you know the answer to: The Hudson River terminate into ___ . A: New York Bay. The county of Hampshire was officially named in ___ . A: ?. ___ said "Enlightenment is mans emergence from his self-incurred immaturity". A: Immanuel Kant. ´ ___ formally adds inverses to elements to any monoid. A: ?. The Parental Leave directive created in year ___ . A: 1996. There are ___ megaregions in the United States. A: 11. D'Olier Street is located in ___ . A: Dublin. The speed limit was set to ___ to reduce consumption. A: 55 mph. ___ channel replaced Sky Travel. A: Sky Three. ___ founded McKinsey & Company. A: James O. McKinsey. |
| PaLM 62B Table 8: Knowledge probing prompts for Cloze style QA datasets. These prompts are used to probe the pretrained | |
Table 8: Knowledge probing prompts for Cloze style QA datasets. These prompts are used to probe the pretrained
model's answer to questions in TReX.
Model Multiple Choice QA Knowledge Probe Prompts Question: Into what body of water does the Hudson River terminate? (A) The great lakes
(B) Amazon river (C) The red sea (D) the Atlantic Ocean (E) San Francisco bay
(F) The north sea (G) Indian Ocean (H) Lake Mississippi -Answer: (D) the Atlantc Ocean.
Question: Who was the prime minister of Japan in 2015? (A) Donald Trump (B) Miho Nonaka
(C) Andrew Yang (D) a France citizen (E) a political outsider (F) Shinzo Abe (G) woman
(H) Zoe. -Answer: (F) Shinzo Abe.Question: what increases moisture? (A) density (B) the sun (C) wind (D) droughts (E) Honey (F) 17 (G) rain (H) meat -Answer: (G) rain.
Question: What can be found inside a cell? (A) soil (B) dogs (C) ovum (D) starfish
(E) Most plants (F) RNA (G) washer (H) abundant -Answer: (F) RNA.
Question:What kind of coloring do chomoplasts make? (A) fat (B) move
(C) RNA (D) grow (E) red (F) skin (G) eyes (H) DNA -Answer: (E) red.
PaLM 62B
Table 9: Knowledge probing prompts for Cloze style QA datasets. These prompts are used to probe the pretrained model's answer to questions in TReX.
| Question | In which country did Warsaw Pact officials meet to dissolve the alliance? |
|---------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Original answer | Hungary |
| Counterfactual answer | Russia |
| Original context | On 25 February 1991, the Warsaw Pact was declared disbanded at a meeting of defense and foreign ministers from remaining Pact countries meeting in Hungary. |
| Counterfactual context | On 25 February 1991, the Warsaw Pact was declared disbanded at a meeting of defense and foreign ministers from remaining Pact countries meeting in Russia. |
| T5 Prompt to generate the | Let's play a game of writing fake answers Who did US fight in world war |
| counterfactual answer | 1? Real answer: Germany. Fake answer: Somalia. Who is the CEO of Amazon? Real Answer: Jeff Bezos. Fake Answer: Richard D. Fairbank 7 more examples . . . In which country did Warsaw Pact officials meet to dissolve the alliance? Real answer: Hungary. Fake answer: hextra_id_0i. |
Table 10: An example from the counterfactual split of the KAFT training set. We take an original question, answer, and context triple. We then use a few examples to prompt a pretrained T5 XXL model to generate a plausible counterfactual answer. Finally, we replace all occurrences of the original answer with the counterfactual answer to build the counterfactual context.
1. **Truncation:** We truncate the model's output on special tokens such as *< extra*_id_1 >, punctuation, line change symbols and question/context initialization symbols such as "Q:", "Question:",
"CONTEXT:". These symbols are a frequent in the pretrained model's responds to our QA style knowledge probe prompts and indicate that the model is ready to move on to the next question that is unrelated to the answer of the current question.
2. **Abstain:** We normalize all abstain symbols.
Whenever the model indicate abstaining using either "I don't know", "unsure" or "?" in the output as responses to our prompt, we record "unsure" as its answer when constructing the label in the irrelevant slices of KAFT.
## A.7 Dataset And Task Details
KAFT mixes together a number of datasets, each with multiple augmentation slices. During training, data from these difference sources are sampled in a round-robin style according to predefined mixture weights. We list these weights as well as the corresponding dataset stats as in Table 11. The sampling ratio from each slice is computed using a product of the normalized dataset level rate and the normalized slice level rate as follows:
$$R(d,s)=\frac{r_{d}}{\sum_{d^{\prime}}r_{d^{\prime}}}\frac{r_{d s}}{\sum_{s^{\prime}}r_{d s^{\prime}}}\qquad(11)$$
$\text{VST}=\text{VST}+\text{VLT}+\text{VLT}+\text{VLT}$ 2.
where *d, d*0 denote different datasets and *s, s*0 denote difference slices within each dataset. For example, the sampling ratio from the QASC relevant slice is given by:
$$R(QASC,relevant)\tag{12}$$ $$=\frac{0.3}{1.3+0.3+0.1+0.2}\frac{0.5}{0.5+0.25+0.02}$$ $$=0.0831$$
The KAFT-TriviaQA training set contains 45593 relevant examples and 72697 irrelevant examples.
The KAFT-QASC training set contains 8134 relevant examples and the same number of irrelevant examples. The KAFT-SQuAD2 dataset contains 78125 relevant examples and 117287 irrelevant examples. The KAFT-TReX training set contains 75365 relevant examples and 47503 irrelevant examples.
## A.8 Licensing And Scientific Artifacts
In this work, we used the following scientific artifacts: TriviaQA is licensed under Apache License 2.0. The SQuAD 2.0 dataset is licensed under CC
BY-SA 4.0. T-REx is under a Creative Commons Attribution-ShareAlike 4.0 International License.
QASC is under CC BY license. T5 models are under Apache License 2.0. Unified QA models are under Apache License 2.0. The PaLM models are proprietary. All these artifacts are properly cited when we mention them the first time. Our use for these artifacts are consistent with their licenses.
We create the following scientific artifacts and we will partly release them after this paper is pub-
| dataset | dataset | slice | slice weight |
|-----------------------------|-----------|----------|----------------|
| weight | | | |
| SQuAD 2.0 | 1.3 | relevant | 0.8 |
| counterfactual | 0.1 | | |
| original irrelevant abstain | 0.1 | | |
| original irrelevant other | 0.1 | | |
| empty correct | 0.33 | | |
| empty abstain | 0.02 | | |
| empty other | 0.05 | | |
| sampled irrelevant correct | 0.33 | | |
| sampled irrelevant abstain | 0.02 | | |
| sampled irrelevant other | 0.03 | | |
| QASC | 0.3 | relevant | 0.5 |
| irrelevant correct | 0.25 | | |
| irrelevant other | 0.02 | | |
| TReX | 0.1 | relevant | 0.4 |
| counterfactual | 0.4 | | |
| 2-hop relevant | 6 | | |
| irrelevant correct | 0.15 | | |
| irrelevant abstain | 0.03 | | |
| irrelevant other | 0.03 | | |
| TriviaQA | 0.2 | relevant | 0.8 |
| counterfactual | 0.15 | | |
| irrelevant/empty correct | 0.5 | | |
| irrelevant/empty other | 0.2 | | |
lished: The KAFT finetuning method will be released under Apache License 2.0. The KAFT-T5 models will be released under Apache License 2.0.
The KAFT-PaLM models will be proprietary.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
section 6
✓ A2. Did you discuss any potential risks of your work?
section 7
✓ A3. Do the abstract and introduction summarize the paper's main claims?
abstract and section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 1-5
✓ B1. Did you cite the creators of artifacts you used?
section 1-5
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Appendix A.8
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Appendix A.8
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We use well established public datasets that are constructed based on publicly available information.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Sec 1-5, Appendix A.8
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix A
## C ✓ **Did You Run Computational Experiments?** Sec 3-4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Sec 1-4, Sec 7, Appendix A
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix A
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4, Appendix A
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Jax D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
varshney-etal-2023-unified | A Unified Evaluation Framework for Novelty Detection and Accommodation in {NLP} with an Instantiation in Authorship Attribution | https://aclanthology.org/2023.findings-acl.113 | State-of-the-art natural language processing models have been shown to achieve remarkable performance in {`}closed-world{'} settings where all the labels in the evaluation set are known at training time. However, in real-world settings, {`}novel{'} instances that do not belong to any known class are often observed. This renders the ability to deal with novelties crucial. To initiate a systematic research in this important area of {`}dealing with novelties{'}, we introduce NoveltyTask, a multi-stage task to evaluate a system{'}s performance on pipelined novelty {`}detection{'} and {`}accommodation{'} tasks. We provide mathematical formulation of NoveltyTask and instantiate it with the authorship attribution task that pertains to identifying the correct author of a given text. We use amazon reviews corpus and compile a large dataset (consisting of 250k instances across 200 authors/labels) for NoveltyTask. We conduct comprehensive experiments and explore several baseline methods for the task. Our results show that the methods achieve considerably low performance making the task challenging and leaving sufficient room for improvement. Finally, we believe our work will encourage research in this underexplored area of dealing with novelties, an important step en route to developing robust systems. | # A Unified Evaluation Framework For Novelty Detection And Accommodation In Nlp With An Instantiation In Authorship Attribution
Neeraj Varshney1∗ Himanshu Gupta1∗ Eric Robertson2 Bing Liu3 **Chitta Baral**1 1 Arizona State University 2 PAR Government Systems Corporation 3 University of Illinois at Chicago
## Abstract
State-of-the-art natural language processing models have been shown to achieve remarkable performance in 'closed-world' settings where all the labels in the evaluation set are known at training time. However, in real-world settings, 'novel' instances that do not belong to any known class are often observed. This renders the ability to deal with novelties crucial.
To initiate a systematic research in this important area of 'dealing with novelties', we introduce *NoveltyTask*, a multi-stage task to evaluate a system's performance on pipelined novelty
'detection' and 'accommodation' tasks. We provide mathematical formulation of NoveltyTask and instantiate it with the authorship attribution task that pertains to identifying the correct author of a given text. We use Amazon reviews corpus and compile a large dataset (consisting of 250k instances across 200 authors/labels) for NoveltyTask. We conduct comprehensive experiments and explore several baseline methods for the task. Our results show that the methods achieve considerably low performance making the task challenging and leaving sufficient room for improvement. Finally, we believe our work will encourage research in this underexplored area of dealing with novelties, an important step en route to developing robust systems.
## 1 Introduction
Recent advancements in Natural Language Processing (NLP) have led to the development of several pre-trained large-scale language models such as BERT (Devlin et al., 2019), RoBERTa (Liu et al.,
2020b), and ELECTRA (Clark et al., 2020). These models have been shown to achieve remarkable performance in *closed-world* settings where all the labels in the evaluation set are known at training time. However, in real-world settings, this assumption is often violated as instances that do not belong to any known label ('novel' instances) are also observed. This renders the ability to deal with novelties crucial in order to develop robust systems for real-world applications.
The topic of novelty is getting increased attention in the broad AI research (Boult et al., 2020; Li et al., 2021b; Rambhatla et al., 2021). Also, in NLP, the 'novelty detection' task in which novel instances need to be identified is being explored
(Ghosal et al., 2018; Ma et al., 2021); related problems such as anomaly detection (Chalapathy and Chawla, 2019), out-of-domain detection, and openset recognition (Hendrycks and Gimpel, 2017; Hendrycks et al., 2020; Ovadia et al., 2019) are also being studied. In addition to the task of 'detection',
dealing with novelties also requires '*accommodation*' that pertains to learning from the correctly detected novelties. Despite having practical significance, this aspect of dealing with novelties has remained underexplored. Furthermore, dealing with novelties is a crucial step in numerous other practical applications such as concept learning, continual learning, and domain adaptation.
To initiate systematic research in this area of
'dealing with novelties', we formulate a multi-stage task called **NoveltyTask**. Initially, a dataset consisting of examples of a set of labels (referred to as 'known labels') is provided for training and then sequential evaluation is conducted in two stages:
Novelty Detection and **Novelty Accommodation**.
Both these stages include distinct unseen evaluation instances belonging to both 'known labels'
(labels present in the training dataset) and 'novel labels' (labels not present in the training dataset).
In the first evaluation stage i.e. the **novelty detection** stage, the system needs to either identify an instance as novel or classify it to one of the 'K'
known labels. This is the same as the (K + 1)
class classification problem (where K corresponds to the number of known labels) used in standard anomaly/OOD detection tasks. This evaluation stage is followed by a **feedback** phase in which the
∗Equal Contribution, Contact email: [email protected]
![1_image_0.png](1_image_0.png)
ground truth label of the novel instances (from the detection stage) that get correctly reported as novel is revealed. Essentially, in the feedback phase, the system gets some examples of the novel labels (from the evaluation instances of the detection stage) that it correctly identified as novel.
In addition to the initially provided training examples of the K known labels, the system can leverage these new examples of the novel labels for the next evaluation stage, the **novelty accommodation** stage. This stage also has evaluation instances from both the known and the novel labels (distinct and mutually exclusive from the detection stage);
however, in this stage, the system needs to identify the true label of the evaluation instances, i.e. it's a (K + N) class classification problem where N
corresponds to the number of novel labels. We summarize this multi-stage task in Figure 1. We note that **NoveltyTask is a controlled task/framework**
for evaluating a system's ability to deal with novelties and not a method to improve its ability.
It is intuitive that the ability to deal with the novelties should be directly correlated with the ability to detect the novelties; our two-stage pipelined formulation of NoveltyTask allows achieving this desiderata as higher accuracy in correctly detecting the novelties will result in more feedback i.e. more examples of the novel labels that will eventually help in achieving higher performance in the accommodation stage. However, in the detection stage, the system needs to balance the trade-off between reporting instances as novel and classifying them to the known labels. Consider a trivial system that simply flags all the evaluation instances of the detection stage as novel in order to get the maximum feedback; such a system will get the true groundtruth label (novel label) of all the novel instances present in the detection stage and will eventually perform better in the accommodation stage but it would have to sacrifice its classification accuracy in the detection stage (especially on instances of the known labels). We address several such concerns in formulating the performance metrics for NoveltyTask (Section 3).
In this work, we instantiate NoveltyTask with authorship attribution task in which each author represents a label and the task is to identify the correct author of a given unseen text. However, we note that the formulation of NoveltyTask is general and applicable to all tasks. We leverage product reviews from Amazon corpus (McAuley et al., 2015; He and McAuley, 2016) for the attribution task. We explore several baseline methods for both detection and accommodation tasks (Section 4).
In summary, our contributions are as follows:
1. We **define a unified task for 'dealing with novelties'** consisting of both novelty detection and novelty accommodation.
2. We **provide a controlled evaluation framework** with its mathematical formulation.
3. We **instantiate NoveltyTask** with the Authorship Attribution task.
4. We **study the performance of several baseline**
methods for NoveltyTask.
## 2 Background And Related Work
In this section, we first discuss the related work on novelty/OOD/anomaly detection tasks and then detail the authorship attribution task.
## 2.1 Novelty/Ood/Anomaly Detection
Novelty Detection and its related tasks such as outof-distribution detection, selective prediction, and anomaly detection have attracted a lot of research attention from both computer vision (Fort et al.,
2021; Esmaeilpour et al., 2022; Sun et al., 2021a; Lu et al., 2022; Liu et al., 2020a; Perera et al.,
2020; Whitehead et al., 2022) and language (Qin et al., 2020; Venkataram, 2018; Yang et al., 2022; Varshney et al., 2022b; Kamath et al., 2020; Varshney et al., 2022c) research communities. OOD
detection for text classification is an active area of research in NLP. Qin et al. (2020) follow a pairwise matching paradigm and calculate the probability of a pair of samples belonging to the same class. Yang et al. (2022) investigate how to detect open classes efficiently under domain shift. Ai et al. (2022)
propose a contrastive learning paradigm, a technique that brings similar samples close and pushes dissimilar samples apart in the vector representation space. Yilmaz and Toraman (2022) propose a method for detecting out-of-scope utterances utilizing the confidence score for a given utterance.
## 2.2 Authorship Attribution
Authorship attribution task (AA) pertains to identifying the correct author of a given text. AA has been studied for short texts (Aborisade and Anwar, 2018a) such as tweets as well as long texts such as court judgments (Sari et al., 2018). Traditional approaches for AA explore techniques based on n-grams, word embeddings, and stylometric features such as the use of punctuation, average word length, sentence length, and number of upper cases
(Sari et al., 2018; Aborisade and Anwar, 2018b; Soler-Company and Wanner, 2017). Transformerbased models have been shown to outperform the traditional methods on this task (Fabien et al., 2020; Tyo et al., 2021; Custódio and Paraboni, 2019).
## 3 Noveltytask
NoveltyTask is a two-stage pipelined framework to evaluate a system's ability to deal with novelties. In this task, examples of a set of labels (referred to as known labels) are made available for initial training. The system is sequentially evaluated in two stages: novelty detection and novelty accommodation. Both these stages consist of distinct unseen evaluation instances belonging to both 'known' and
'novel' labels. We define a label as **novel** if it is not one of the known labels provided for initial training and all instances belonging to the novel labels are referred to as novel instances. We summarize this multi-stage task in Figure 1. In this section, we provide a mathematical formulation of NoveltyTask, detail its performance metrics, and describe the baseline methods.
## 3.1 Formulation 3.1.1 Initial Training (Dt)
Consider a dataset DT of (*x, y*) pairs where x denotes the input instance and y ∈ {1, 2*, ..., K*} denotes the class label. We refer to this label set of K
classes as 'known labels.' In NoveltyTask, the classification dataset DTis provided for initial training.
Then, the trained system is evaluated in the novelty detection stage as described in the next subsection.
3.1.2 Novelty Detection (EvalDet)
The evaluation dataset of this stage (EvalDet) consists of unseen instances of both known and novel labels, i.e., EvalDet includes instances from K∪N
labels where N corresponds to the number of novel labels not seen in the initial training dataset DT.
Here, the system needs to do a (K + 1) class classification, i.e., for each instance, it can either output one of the K known classes or report it as novel
(not belonging to any known class) by outputting the (0)th class. This is followed by the feedback phase described in 3.1.3.
## 3.1.3 Feedback Phase (Df )
For each instance of the EvalDet dataset, we use an indicator function 'f' whose value is 1 if the instance is novel ( i.e. not from the K known labels) and 0 otherwise:
## F(X) = 1[X ̸∈ {1, 2, ..., K}]
In the feedback phase, we reveal the ground truth label of those novel instances (from EvalDet) that the system correctly reports as novel, i.e., feedback results in a dataset (DF ) which is a subset of the novel instances of EvalDet where f(x) is 1 and the system's prediction on x is the (0)th class.
$$D^{F}=\left\{\begin{array}{c l}{{}}&{{\in E v a l_{D e t}}}\\ {{}}&{{}}\\ {{(x,y),}}&{{f(x)=1}}\\ {{}}&{{}}\\ {{}}&{{p r e d(x)=(0)^{t h}c l a s s}}\end{array}\right.$$
Essentially, DF is a dataset that consists of examples of the novel labels. The system can incorporate the feedback by leveraging DF in addition to the initial training dataset DT(refer to Section 3.4 for novelty accommodation methods) to adapt itself for the next evaluation stage, which is the novelty accommodation stage.
## 3.1.4 Novelty Accommodation (Evalacc)
The system incorporates the feedback and is evaluated in the novelty accommodation stage on the EvalAcc dataset. Like the detection stage dataset, EvalAcc also includes instances of both K known and N novel labels (mutually exclusive from EvalDet i.e. EvalDet∩EvalAcc = ∅). However, in this stage, the system needs to identify the true label of all the evaluation instances (including those belonging to the novel labels) i.e. the task for the system is to do a (K + N) class classification instead of a (K + 1) class classification. Here, N
corresponds to the number of novel labels. Essentially, in the feedback phase, the system gets some examples of the novel labels, and it needs to leverage them along with DTto classify the evaluation instances correctly.
## Note That The Feedback Data Df **May Or May**
not contain examples of all the N **novel classes**
as it totally depends on the system's ability to correctly detect novelties in the detection stage. The inability to detect instances of all the novel classes will accordingly impact the system's performance in the accommodation stage. Next, we describe the performance metrics for both the stages.
## 3.2 Performance Evaluation
Novelty Detection: For the novelty detection stage, we use F1 score over all classes to evaluate the performance of the system. We also calculate the F1 score for the known classes (F1Known) and for the novel instances (F1Novel) to evaluate the fine-grained performances.
Let {C1, · · · , CK} be the set of known classes and C0 be the class corresponding to the novel instances, we calculate the micro F1 score using:
$$\mathrm{F1}=2\times{\frac{\mathrm{P}\times\mathrm{R}}{\mathrm{P}+\mathrm{R}}},$$
where P and R are precision and recall values.
Similarly, the F1 scores over known classes
(F1known) and novel class (F1Novel) are computed.
Note that all the above measures are threshold dependent i.e. the system needs to select a confidence threshold (based on which it classifies instances on which it fails to surpass that threshold as novel) and its performance measures depend on that. This is not a fair performance metric as its performance heavily depends on the number of novelties present in the evaluation dataset (EvalDet). To comprehensively evaluate a system, we use a thresholdindependent performance metric in which we compute these precision, recall, and F1 values for a range of reported novelties. To achieve this, we order the evaluation instances of EvalDet based on the system's prediction score (calculated using various techniques described in the next subsection)
and take the least confident instances as reported novelties (for each number in the range of reported novelties). Then, we plot a curve for these performance measures and aggregate the values (AUC)
to calculate the overall performance of the method
(refer to Figure 2). This evaluation methodology
(similar to the OOD detection method) makes the performance measurement comprehensive and also accounts for the number of novelties present in the evaluation dataset.
Novelty Accommodation: In this stage, the task for the system is to do (K + N) class classification instead of (K + 1) class classification. The system leverages the feedback (DF ) (which is contingent on the number of reported novelties) to adapt it for the task, and its performance also depends on that. Following the methodology described for the detection stage, we evaluate the system's performance over a range of reported novelties and hence over a range of feedback. Specifically, we find the feedback dataset DF for a range of reported novelties and for each individual feedback, we incorporate it into the system and then evaluate its prediction performance on the (K + N) classification task. Similar to the detection stage, we plot a curve (across a range of reported novelties) and calculate its area under the curve value to quantify the overall performance of novelty accommodation.
## 3.3 Methods For Novelty Detection
As described in the previous subsection, we calculate the system's performance on a range of reported novelties. To achieve this, we order the evaluation instances of EvalDet based on the system's prediction confidence score (calculated using various techniques described in this subsection) and take the least confident instances as reported novelties (for each number in the range of reported novelties). This implies that the performance depends on the system's method of computing this prediction score. We explore the following methods of computing this score for the evaluation instances:
Maximum Softmax Probability (MaxProb):
Usually, the last layer of models has a softmax activation function that gives the probability distribution P(y) over all possible answer candidates Y . For the classification tasks, Y corresponds to the set of labels. Hendrycks and Gimpel (2017)
introduced a simple method that uses the maximum softmax probability across all answer candidates as the confidence estimator i.e. the prediction confidence score corresponds to maxy∈Y P(y). In this method, we order the evaluation instances of EvalDet based on this confidence measure, and for each value in the range of reported novelties, we report those instances as novel on which the model is least confident. For the remaining instances, we output the label (out of K classes) having the maximum softmax probability.
Euclidean Distance (EuclidDist) : In this approach, we consider each sample as a point in Kdimensional space. For each sample, the probabilities from the K class classifier are chosen as coordinates in the space. We then calculate Euclidean distances between each sample and the entire distribution. The points furthest away from the distribution are classified as novel instances.
q The *Euclidean distance* is given by d =
PK
i=1 (xi − xmu)
2 where xmu is the distribution of all the samples.
Mahalanobis Distance (MahDist): This approach is similar to the previous approach with the only difference that Mahalanobis distance is used to compute the distance between the sample and the distribution.
The *Mahalanobis distance* (Ghorbani, 2019)
between x iand x jis given by ∆2 = (x i −
x j)⊤Σ−1(x i − x j), where Σ is a d × d covariance matrix. ∆2is equivalent to the squared Euclidean distance between y iand y j, where y is a linearly transformed version of x.
Mean (CompMean): For each sample, the mean of K-1 classes is computed. The class with the highest probability is left out. The mean is later subtracted from 1. The resultant score for all the samples is sorted in descending order. The last Y
elements are classified as Novel instances.
Learning Placeholders Algorithm (Placeholder):
Zhou et al. (2021a) propose a Placeholder algorithm for increasing the separation between clusters of samples in different classes. It addresses the challenge of open-set recognition by increasing the distance between class clusters and shrinking the classification boundary, allowing the classifier to classify samples as novel that fall outside these clusters. It demonstrates the effectiveness of the Placeholders algorithm through experiments and comparison with other state-of-the-art open-set recognition methods. Few Shot Open set Recognition (Few Shot OSR):
Jeong et al. (2021) presents a method for recognizing novel classes with few examples available for each class. It uses prototypes to represent each class and a similarity function to compare new examples to these prototypes, allowing for the effective recognition of novel classes. The paper includes experiments on multiple datasets and compares the method's performance to other state-ofthe-art few-shot open-set recognition methods.
We further detail these methods in Appendix B.
We note that other OOD/anomaly detection methods can also be explored here. However, we study only a limited set of methods since the focus of this work is on formulating and exploring NoveltyTask.
## 3.4 Methods For Novelty Accommodation:
After the detection stage, the system gets feedback i.e. examples of novel labels (DF ). We explore the following methods of leveraging this feedback:
Retrain using DT and DF : DT consists of examples of known labels, and DF consists of examples of novel labels. In this approach, we train a new model (K + N) classifier by combining data instances of DTand DF .
Further Fine-tune using DF : In this method, we first train a model on DT with extra dummy labels, i.e., we train a model having more than K
logits. This allows modifying the same model to learn to output the novel labels. To incorporate the feedback, the model initially trained on DT with dummy labels is further fine-tuned using DF .
Further Fine-tune using DT**(sampled) and** DF :
Here, we follow the same strategy as the previous method, but instead of further fine-tuning only on DF , we further fine-tuning using both DT(downsampled) and DF . This is done to reduce catas-
![5_image_0.png](5_image_0.png)
trophic forgetting (Carpenter and Grossberg, 1988)
of the known labels.
## 4 Experiments And Results 4.1 Experimental Setup
Configurations: We use Amazon reviews
(McAuley et al., 2015; He and McAuley, 2016) for the authorship attribution task. In this task, each author corresponds to a class. We compile a dataset consisting of 250k instances across 200 authors and use it for NoveltyTask. We define experimental settings using a set of configuration parameters; for the base setting, we use the following values:
- Number of Known Classes (K): 100
- Training Data DT Class Balanced: True
- \# Instances Per Known Label in DT: 500
- Number of Novel Classes (N): 100
- \# Instances Per Class in EvalDet: 100 - \# Instances Per Class in EvalAcc: 500 In the above setting, the total number of evaluation instances in EvalDet is 20k out of which 10k are novel. In this work, we also study other settings by varying the values of these parameters.
Models: We run all our experiments using the BERT-base model (Devlin et al., 2019). For classification, we add a linear layer on top of BERT
representation and train the model with a standard learning rate ranging in {1−5}e−5. All experiments are done with Nvidia V100 16GB GPUs.
## 4.2 Results 4.2.1 Novelty Detection
Figure 2 shows the novelty detection performance on the base setting (EvalDet has 20k instances out of which 10k are novel) i.e. overall Precision, Recall, and F1 achieved by various methods across the range of reported novelties on EvalDet. Specifically, each point on the curve represents the P, R,
or F1 when its corresponding method reports the specified number of novelties (x-axis value) out of all instances in EvalDet.
MaxProb achieves the best overall performance:
From the plots, it can be observed that MaxProb achieves the highest AUC value and hence the best overall performance. This result supports the prior finding that complex methods fail to consistently outperform the simiple MaxProb method (Varshney et al., 2022b; Azizmalayeri and Rohban, 2022).
Performance Analysis of MaxProb: To further study the performance of MaxProb in detail, we show its P, R, and F1 curves for Known, Novel, and Overall data in Figure 3. As expected, the precision on Known classes tends to increase as more novelties get reported. This is because the system predicts the known classes only for those instances on which it is most confident (highest MaxProb). Similarly, the precision on novel instances tends to decrease as more and more novelties get reported. The overall precision on the
(K+1) classes tends to increase with the increase in the number of reported novelties. We provide a detailed performance analysis on the known classes, novel classes, and overall data in Appendix C.
## 4.2.2 Novelty Accommodation
Figure 4 shows the novelty accommodation performance on the base setting (EvalAcc has 100k instances uniformly split across 200 classes - 100 known and 100 novel) i.e. Overall F1 achieved by systems trained by leveraging the feedback (using different accommodation methods (a, b, and c)) resulting from different detection methods across the range of reported novelties. Note that for a value of reported novelty, each detection method results
![6_image_0.png](6_image_0.png)
![6_image_1.png](6_image_1.png)
in a different feedback dataset and hence will have a different accommodation performance. We show the Precision and Recall curves in the Appendix.
Retraining w/ DT + DF : We note that MaxProb and MahDist turned out to be the best detection methods. This implies that their corresponding feedback dataset would contain more examples of the novel labels. This further reflects in the novelty accommodation performance as incorporating the feedback of these methods results in the best overall accommodation performance using the retraining method.
Catastrophic Forgetting Increases in further fine-tuning with DF : As previously mentioned, this method leads to catastrophic forgetting of the known classes resulting in low overall F1 performance. We demonstrate this trend in Figure 5.
Furthermore, with the increase in the number of novelties reported, the extent of catastrophic forgetting also increases.
Further fine-tuning with Sampled DT and DF
improves performance: This method not only mitigates catastrophic forgetting but also results in a slight improvement over the retraining method.
For sampling, we use the maximum number of correctly detected instances of a class in DF as the
![6_image_2.png](6_image_2.png)
threshold for sampling instances of known labels from DT. Furthermore, this method is more **training efficient** than the retraining method as the number of training instances is significantly lower in this method and yet it achieves better performance.
## 4.3 Analysis
Distribution of Instances over classes in the Feedback dataset: We show the distribution of instances over all the classes (novel) in the feedback dataset DF when the number of reported novelties is 10k in Figure 6. It can be observed from the his-
![7_image_0.png](7_image_0.png)
![7_image_2.png](7_image_2.png)
togram that for all the novel classes, novel instances between 55 and 95 are correctly detected. For majority of the classes, 76-85 instances are detected.
This further shows that the detection method is not biased towards or against any set of novel classes in identifying novel instances.
## Trend Of Class Level Performance In The Accommodation Stage Vs The Number Of Instances In The
feedback dataset: In Figure 7, we show a scatter plot of accommodation F1 performance achieved by each class vs the number of its instances in the feedback dataset. The plot is for the MaxProb detection method when 10k novelties are detected and retrain with DT, and DF accommodation method is used. From the trend, it can be inferred that with the increase in the number of instances, the performance generally tends to increase.
Comparing Performance of Known and Novel Classes in the Accommodation Stage: In Figure 8, we compare the performance of the system (in
![7_image_1.png](7_image_1.png)
the accommodation stage) on Known and Novel classes. It clearly shows that the system finds it challenging to adapt itself to the novel classes. This can be partly attributed to the availability of limited number of training examples of novel classes. This also provides opportunities for developing better accommodation techniques that can overcome this limitation.
## 4.4 Other Configuration Settings
In this work, we also study NoveltyTask for different settings (different configuration parameters defined in 4.1). We observe findings and trends similar to the base setting. We provide detailed results and discussion in the Appendix.
## 5 Conclusion And Discussion
To initiate a systematic research in the important yet underexplored area of 'dealing with novelties',
we introduce *NoveltyTask*, a multi-stage task to evaluate a system's performance on pipelined novelty 'detection' and 'accommodation' tasks. We provided mathematical formulation of NoveltyTask and instantiated it with the authorship attribution task. To this end, we also compiled a large dataset (consisting of 250k instances across 200 authors/labels) from Amazon reviews corpus.
We conducted comprehensive experiments and explored several baseline methods for both detection and accommodation tasks.
Looking forward, we believe that our work opens up several avenues for interesting research avenues in this space, such as improving performance of detecting the novel instances and leveraging the feedback in a way that helps the system adapt with just a few examples of the novel labels.
## Limitations
Though the formulation of the task allows exploring several different settings (by varying the configuration parameters), in this work, we investigated only the label-balanced setting. Exploring the labelimbalanced setting is another very interesting research direction, and we leave that for future work.
Another limitation was the limited exploration of novelty detection methods, as a number of methods have been proposed in the recent times. However, we study only a limited set of methods since the focus of this work is on formulating and exploring NoveltyTask. Lastly, we note that NoveltyTask is a controlled task/framework for evaluating a system's ability to deal with novelties and not a method to improve its ability.
## Acknowledgement
We thank the anonymous reviewers for their insightful feedback. This research was supported by DARPA SAIL-ON program.
## References
Opeyemi Aborisade and Mohd Anwar. 2018a. Classification for authorship of tweets by comparing logistic regression and naive bayes classifiers. 2018 IEEE
International Conference on Information Reuse and Integration (IRI), pages 269–276.
Opeyemi Aborisade and Mohd Anwar. 2018b. Classification for authorship of tweets by comparing logistic regression and naive bayes classifiers. In *2018 IEEE*
International Conference on Information Reuse and Integration (IRI), pages 269–276.
Bo Ai, Yuchen Wang, Yugin Tan, and Samson Tan.
2022. Whodunit? learning to contrast for authorship attribution. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 1142–1157, Online only. Association for Computational Linguistics.
Malik Altakrori, Jackie Chi Kit Cheung, and Benjamin C. M. Fung. 2021. The topic confusion task: A
novel evaluation scenario for authorship attribution.
In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4242–4256, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Mohammad Azizmalayeri and Mohammad Hossein Rohban. 2022. Ood augmentation may be at odds with open-set recognition. *arXiv preprint* arXiv:2206.04242.
Christopher Bagdon. 2021. Profiling spreaders of hate speech with n-grams and roberta. In CLEF (Working Notes), pages 1822–1828.
Georgios Barlas and Efstathios Stamatatos. 2020. Crossdomain authorship attribution using pre-trained language models. In Artificial Intelligence Applications and Innovations, pages 255–266, Cham. Springer International Publishing.
Benedikt Boenninghoff, Steffen Hessler, Dorothea Kolossa, and Robert M Nickel. 2019. Explainable authorship verification in social media via attentionbased similarity learning. In *2019 IEEE International Conference on Big Data (Big Data)*, pages 36–45. IEEE.
Terrance E. Boult, Przemyslaw A. Grabowicz, Derek S.
Prijatelj, R. Stern, Lawrence B. Holder, Joshua Alspector, Mohsen Jafarzadeh, Touqeer Ahmad, Akshay Raj Dhamija, C.Li, Steve Cruz, A. Shrivastava, Carl Vondrick, and Walter J. Scheirer. 2020.
A unifying framework for formal theories of novelty: Framework, examples and discussion. *ArXiv*,
abs/2012.04226.
Christian Caballero, Hiram Calvo, and Ildar Batyrshin.
2021. On explainable features for translatorship attribution: Unveiling the translator's style with causality.
IEEE Access, 9:93195–93208.
Gail A. Carpenter and Stephen Grossberg. 1988. The art of adaptive pattern recognition by a self-organizing neural network. *Computer*, 21(3):77–88.
Raghavendra Chalapathy and Sanjay Chawla. 2019.
Deep learning for anomaly detection: A survey.
arXiv preprint arXiv:1901.03407.
Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pretraining text encoders as discriminators rather than generators. In *ICLR*.
José Eleandro Custódio and Ivandré Paraboni. 2019.
An ensemble approach to cross-domain authorship attribution. In *Experimental IR Meets Multilinguality, Multimodality, and Interaction*, pages 201–212, Cham. Springer International Publishing.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Sepideh Esmaeilpour, Bing Liu, Eric Robertson, and Lei Shu. 2022. Zero-shot out-of-distribution detection based on the pretrained model clip. In Proceedings of the AAAI conference on artificial intelligence.
Maël Fabien, Esau Villatoro-Tello, Petr Motlicek, and Shantipriya Parida. 2020. BertAA : BERT finetuning for authorship attribution. In Proceedings of the 17th International Conference on Natural Language Processing (ICON), pages 127–137, Indian Institute of Technology Patna, Patna, India. NLP Association of India (NLPAI).
Stanislav Fort, Jie Ren, and Balaji Lakshminarayanan.
2021. Exploring the limits of out-of-distribution detection. Advances in Neural Information Processing Systems, 34:7068–7081.
Hamid Ghorbani. 2019. Mahalanobis distance and its application for detecting multivariate outliers. Facta Univ Ser Math Inform, 34(3):583–95.
Tirthankar Ghosal, Vignesh Edithal, Asif Ekbal, Pushpak Bhattacharyya, George Tsatsaronis, and Srinivasa Satya Sameer Kumar Chivukula. 2018. Novelty goes deep. a deep neural solution to document level novelty detection. In *Proceedings of the 27th International Conference on Computational Linguistics*,
pages 2802–2813, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Ruining He and Julian McAuley. 2016. Ups and downs:
Modeling the visual evolution of fashion trends with one-class collaborative filtering. In proceedings of the 25th international conference on world wide web, pages 507–517.
Dan Hendrycks and Kevin Gimpel. 2017. A baseline for detecting misclassified and out-of-distribution examples in neural networks. Proceedings of International Conference on Learning Representations.
Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, and Dawn Song. 2020.
Pretrained transformers improve out-of-distribution robustness. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 2744–2751, Online. Association for Computational Linguistics.
Minki Jeong, Seokeon Choi, and Changick Kim. 2021.
Few-shot open-set recognition by transformation consistency. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*,
pages 12566–12575.
Amita Kamath, Robin Jia, and Percy Liang. 2020. Selective question answering under domain shift. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5684–
5696, Online. Association for Computational Linguistics.
Alex Krizhevsky. 2009. Learning multiple layers of features from tiny images. Technical report.
Ksenia Lagutina and Nadezhda Lagutina. 2021. A survey of models for constructing text features to classify texts in natural language. In *2021 29th Conference* of Open Innovations Association (FRUCT), pages 222–233. IEEE.
Ksenia Lagutina, Nadezhda Lagutina, Elena Boychuk, Vladislav Larionov, and Ilya Paramonov. 2021. Authorship verification of literary texts with rhythm features. In *2021 28th Conference of Open Innovations* Association (FRUCT), pages 240–251. IEEE.
Ksenia Lagutina, Nadezhda Lagutina, Elena Boychuk, and Ilya Paramonov. 2020. The influence of different stylometric features on the classification of prose by centuries. In *2020 27th Conference of Open Innovations Association (FRUCT)*, pages 108–115. IEEE.
Lei Li, Yankai Lin, Deli Chen, Shuhuai Ren, Peng Li, Jie Zhou, and Xu Sun. 2021a. CascadeBERT: Accelerating inference of pre-trained language models via calibrated complete models cascade. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 475–486, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Litao Li, Rylen Sampson, Steven HH Ding, and Leo Song. 2022. Tasr: Adversarial learning of topicagnostic stylometric representations for informed crisis response through social media. *Information Processing & Management*, 59(2):102857.
Ruiqi Li, Hua Hua, Patrik Haslum, and Jochen Renz.
2021b. Unsupervised Novelty Characterization in Physical Environments Using Qualitative Spatial Relations. In *Proceedings of the 18th International* Conference on Principles of Knowledge Representation and Reasoning, pages 454–464.
Bo Liu, Hao Kang, Haoxiang Li, Gang Hua, and Nuno Vasconcelos. 2020a. Few-shot open-set recognition using meta-learning. In *Proceedings of the* IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8798–8807.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2020b.
Ro{bert}a: A robustly optimized {bert} pretraining approach.
Jing Lu, Yunlu Xu, Hao Li, Zhanzhan Cheng, and Yi Niu. 2022. Pmal: Open set recognition via robust prototype mining. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 36, pages 1872–1880.
Nianzu Ma, Alexander Politowicz, Sahisnu Mazumder, Jiahua Chen, Bing Liu, Eric Robertson, and Scott Grigsby. 2021. Semantic novelty detection in natural language descriptions. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 866–882, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton Van Den Hengel. 2015. Image-based recommendations on styles and substitutes. In Proceedings of the 38th international ACM SIGIR conference on research and development in information retrieval, pages 43–52.
Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, D. Sculley, Sebastian Nowozin, Joshua Dillon, Balaji Lakshminarayanan, and Jasper Snoek. 2019. Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift. In Advances in Neural Information Processing Systems, volume 32.
Curran Associates, Inc.
Pramuditha Perera, Vlad I Morariu, Rajiv Jain, Varun Manjunatha, Curtis Wigington, Vicente Ordonez, and Vishal M Patel. 2020. Generative-discriminative feature representations for open-set recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11814–11823.
Qi Qin, Wenpeng Hu, and Bing Liu. 2020. Text classification with novelty detection. *arXiv preprint* arXiv:2009.11119.
Sai Saketh Rambhatla, Ramalingam Chellappa, and Abhinav Shrivastava. 2021. The pursuit of knowledge:
Discovering and localizing novel categories using dual memory. *2021 IEEE/CVF International Conference on Computer Vision (ICCV)*, pages 9133–9143.
Yunita Sari, Mark Stevenson, and Andreas Vlachos.
2018. Topic or style? exploring the most useful features for authorship attribution. In Proceedings of the 27th International Conference on Computational Linguistics, pages 343–353, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Juan Soler-Company and Leo Wanner. 2017. On the relevance of syntactic and discourse features for author profiling and identification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 681–687, Valencia, Spain. Association for Computational Linguistics.
Xin Sun, Henghui Ding, Chi Zhang, Guosheng Lin, and Keck-Voon Ling. 2021a. M2iosr: Maximal mutual information open set recognition. arXiv preprint arXiv:2108.02373.
Xin Sun, Zhenning Yang, Chi Zhang, Keck-Voon Ling, and Guohao Peng. 2020. Conditional gaussian distribution learning for open set recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13480–13489.
Yiyou Sun, Chuan Guo, and Yixuan Li. 2021b. React: Out-of-distribution detection with rectified activations. *Advances in Neural Information Processing* Systems, 34:144–157.
Jacob Tyo, Bhuwan Dhingra, and Zachary C Lipton.
2021. Siamese bert for authorship verification. In CLEF (Working Notes), pages 2169–2177.
Neeraj Varshney and Chitta Baral. 2022. Model cascading: Towards jointly improving efficiency and accuracy of NLP systems. In *Proceedings of the*
2022 Conference on Empirical Methods in Natural Language Processing, pages 11007–11021, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Neeraj Varshney and Chitta Baral. 2023. Postabstention: Towards reliably re-attempting the abstained instances in qa. *arXiv preprint* arXiv:2305.01812.
Neeraj Varshney, Man Luo, and Chitta Baral. 2022a.
Can open-domain qa reader utilize external knowledge efficiently like humans? arXiv preprint arXiv:2211.12707.
Neeraj Varshney, Swaroop Mishra, and Chitta Baral.
2022b. Investigating selective prediction approaches across several tasks in IID, OOD, and adversarial settings. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 1995–2002, Dublin, Ireland. Association for Computational Linguistics.
Neeraj Varshney, Swaroop Mishra, and Chitta Baral.
2022c. Towards improving selective prediction ability of NLP systems. In *Proceedings of the 7th Workshop on Representation Learning for NLP*, pages 221–
226, Dublin, Ireland. Association for Computational Linguistics.
Vinodini Molukuvan Venkataram. 2018. Open set text classification using neural networks. University of Colorado Colorado Springs.
Xiangyu Wang and Mizuho Iwaihara. 2021. Integrating roberta fine-tuning and user writing styles for authorship attribution of short texts. In *Web and Big* Data, pages 413–421, Cham. Springer International Publishing.
Hongxin Wei, Lue Tao, Renchunzi Xie, and Bo An.
2021. Open-set label noise can improve robustness against inherent label noise. Advances in Neural Information Processing Systems, 34:7978–7992.
Spencer Whitehead, Suzanne Petryk, Vedaad Shakib, Joseph Gonzalez, Trevor Darrell, Anna Rohrbach, and Marcus Rohrbach. 2022. Reliable visual question answering: Abstain rather than answer incorrectly. In Proceedings of the European Conference on Computer Vision (ECCV).
Ji Xin, Raphael Tang, Yaoliang Yu, and Jimmy Lin.
2021. The art of abstention: Selective prediction and error regularization for natural language processing.
In *Proceedings of the 59th Annual Meeting of the* Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1040–1051, Online. Association for Computational Linguistics.
Hu Xu, Bing Liu, Lei Shu, and P Yu. 2019. Open-world learning and application to product classification. In The World Wide Web Conference, pages 3413–3419.
Shiqi Yang, Yaxing Wang, Kai Wang, Shangling Jui, and Joost van de Weijer. 2022. One ring to bring them all: Towards open-set recognition under domain shift. arXiv preprint arXiv:2206.03600.
Eyup Yilmaz and Cagri Toraman. 2022. D2u: Distanceto-uniform learning for out-of-scope detection. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2093–2108.
Da-Wei Zhou, Han-Jia Ye, and De-Chuan Zhan. 2021a.
Learning placeholders for open-set recognition. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4399–4408.
Da-Wei Zhou, Han-Jia Ye, and De-Chuan Zhan. 2021b.
Learning placeholders for open-set recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4401–
4410.
Wenxuan Zhou, Fangyu Liu, and Muhao Chen. 2021c.
Contrastive out-of-distribution detection for pretrained transformers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1100–1111, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
## A.1.2 Language Appendix A Other Related Work A.1 Novelty/Ood/Anomaly Detection A.1.1 Vision A.2 Authorship Attribution
Zhou et al. (2021b) propose to add an additional classifier in addition to a closed domain classifier for getting a class-specific threshold of known and unknown classes. They generate data placeholders to mimic open set categories. Venkataram (2018) use an ensemble-based approach and replace the softmax layer with an OpenMAX layer. The hypothesis is that the closest (most similar) class to any known class is an unknown one. This allows the classifier to be trained, enforcing the most probable class to be the ground truth class and the runner-up class to be the background class for all source data.
Zhou et al. (2021c) employ a contrastive learning framework for unsupervised OOD detection, which is composed of a contrastive loss and an OOD scoring function. The contrastive loss increases the discrepancy of the representations of instances from different classes in the task, while the OOD scoring function maps the representations of instances to OOD detection scores.
Xu et al. (2019) propose Learning to Accept Classes (L2AC) method based on meta-learning and does not require re-training the model when new classes are added. L2AC works by maintaining a set of seen classes and comparing new data points to the nearest example from each seen class.
Detection approaches are also used in selective prediction (Varshney et al., 2022b; Kamath et al.,
2020; Xin et al., 2021; Varshney and Baral, 2023)
and cascading techniques (Varshney and Baral, 2022; Varshney et al., 2022a; Li et al., 2021a)
where under-confident predictions are detected to avoid incorrect predictions.
Novelty/OOD/Anomaly detection is an active area of research in computer vision (Fort et al., 2021; Esmaeilpour et al., 2022; Sun et al., 2021a; Lu et al., 2022; Liu et al., 2020a; Sun et al., 2020; Perera et al., 2020). Datasets such as CIFAR 10 and 100 (Krizhevsky, 2009) are typically used to evaluate the efficacy of various detection methods.
Fort et al. (2021) demonstrated that pre-training of transformer-based models using large datasets is fairly robust in detecting near-OOD instances using few examples. Esmaeilpour et al. (2022) proposed to detect OOD instances using pairwise similarity score. They generate synthetic unseen examples and use their closed-set classifier to compute pairwise similarity. Wei et al. (2021) use open-set samples with dynamic, noisy labels and assign random labels to open-set examples, and use them for developing a system for OOD detection. Sun et al.
(2021b) analyze activation functions of the penultimate layer of pretrained models and rectify the activations to an upper limit for OOD detection.
BERT (and its different variants like BertAA,
RoBERTa) based, Siamese-based, and ensemblebased approaches have been used for authorship attribution. Tyo et al. (2021) propose an approach that uses a pretrained BERT model in a siamese configuration for audio-visual classification. They experiment with using triplet loss, contrastive loss, and a modified version of contrastive loss and compare the results. Bagdon (2021) combine the results of a n-gram-based logistic regression classifier with a transformer model based on RoBERTa (Liu et al.,
2020b) via a SVM meta-classifier. Altakrori et al.
(2021) explore a new evaluation setting topic confusion task. The topic distribution is controlled by making it dependent on the author, switching the topic-author pairs between training and testing. This setup allows for measuring the degree to which certain features are influenced by the topic, as opposed to the author's identity. Other works include (Barlas and Stamatatos, 2020; Fabien et al.,
2020; Wang and Iwaihara, 2021). N-grams, word embeddings, and other stylometric features have been used as input feature vectors for the task (Caballero et al., 2021; Boenninghoff et al., 2019; Li et al., 2022; Lagutina et al., 2021, 2020; Lagutina and Lagutina, 2021).
## B Novelty Detection Algorithms
Learning placeholders: The Placeholders algorithm consists of two main components: "Learning Classifier Placeholders" and "Learning Data Placeholders". "Learning Classifier Placeholders" involves adding a set of weights called classifier placeholders to the linear classifier layer at the end of the network. This modified classifier function denoted as f(x) = [W T(x)*, wT*(x)], where w represents the weights of the additional k+1 class, is trained using a modified loss function that encourages the classifier to predict the k+1 class as the second most likely class for every sample. This loss function helps the classifier learn an embedding function such that the k+1 class is always the closest class to each class cluster boundary.
In addition to the k+1 class, the Placeholders algorithm includes a tunable number (C) of additional classifiers to make decisions' boundaries smoother. The final classifier function is, therefore, f(x) = [W T(x)*, maxk* = 1*, ..., CwkT*(x)],
meaning that the closest open-set region in the embedding space is taken into consideration. "Learning Data Placeholders" involves tightening the decision boundaries around the known-class clusters in the embedding space through a process called manifold mixup. This involves creating "unknown" class data from known class data and using an additional loss function to penalize classifying this new data as any of the known classes. Manifold mixup works by interpolating the embeddings of two samples from closed-set classes to create an embedding for a new sample, which is considered to belong to an unknown class. After training the model using both classifier and data placeholders, the Placeholders algorithm includes a final calibration step in which an additional bias is added to the open-set logits. This bias is tuned using a validation set of closed-set samples such that 95% of all closed-set samples are classified as known. The combination of these two components and final calibration allows the Placeholders algorithm to train a classifier to identify novel samples even when only trained on closed-set data.
Few Shot Open set Recognition using MetaLearning : In the paper "Few-Shot Open-Set Recognition using Meta-Learning", the authors propose a method for few-shot open-set recognition using meta-learning. The main idea is to train a meta-learner that can recognize new classes given a few examples of each class.
The meta-learner consists of a feature extractor network and a linear classifier. The feature extractor network is responsible for learning an embedding function that maps samples from different classes into a common embedding space. The goal is to learn an embedding function that clusters samples from the same class together while separating samples from different classes by a large margin.
To train the meta-learner, the authors use a metalearning loss function that encourages the embedding function to learn a "smooth" embedding space.
This loss function consists of two terms: a classification loss and a separation loss.
During training, the meta-learner is presented with a small number of examples from each new class and is required to classify these examples correctly. The meta-learner is trained to optimize the meta-learning loss function, which encourages the embedding function to learn a smooth embedding space where samples from different classes are well separated.
After training, the meta-learner can be used to classify new samples by first projecting them into the embedding space using the feature extractor network, and then using the linear classifier to assign them to the appropriate class. The final classifier is able to generalize to new classes not seen during training, as it has learned to recognize the underlying structure of the embedding space.
## C Results
Hyperparameters of the model: Hidden layer dropout probability of 0.15, input Sequence length of 512 tokens, batch size of 32, and standard learning rate ranging in {1−5}e−5.
1806
## C.1 Other Dataset Definitions:
We define two other datasets by varying the following parameters:
Detection_200 setting (Dataset 2) is defined as:
- Number of Known Classes (K): 100
- Training Data DT Class Balanced: True
- \# Instances Per Known Label in DT: 500
- Number of Novel Classes (N): 100
- \# Instances Per Class in EvalDet: 200
- \# Instances Per Class in EvalAcc: 500
Detection_500 setting (Dataset 3) is defined as:
- Number of Known Classes (K): 100
- Training Data DT Class Balanced: True
- \# Instances Per Known Label in DT: 500
- Number of Novel Classes (N): 100
- \# Instances Per Class in EvalDet: 500
- \# Instances Per Class in EvalAcc: 500
## C.2 Novelty Accommodation Stage
Table 1, 2, and 3 show the results of all six novelty detection methods across all three accommodation settings on the first dataset whose results are described in details in the main paper.
Similar results are obtained for Detection_200 setting and Detection_500 setting as well. Novelty Accommodation results for Detection_200 setting are present in table 4, 5, and 6 and in table 7, 8, and 9 for Detection_500 setting.
| Known | | | | | | | | | |
|--------------------------------------------------------------------------|----------------|-----------------------|--------------------|---------|---------|-------|-------|-------|-------|
| # of | Class | | | | | | | | |
| Novelties | precision | Novel | Overall | Overall | Overall | | | | |
| Class | Precision | Recall | F1 | | | | | | |
| F1 | | | | | | | | | |
| 1000 | 51.62 | 90.10 | 65.64 | 15.62 | 2.23 | 3.90 | 33.62 | 46.17 | 38.91 |
| 2000 | 54.17 | 90.62 | 67.81 | 30.21 | 8.51 | 13.28 | 42.19 | 49.56 | 45.58 |
| 3000 | 55.25 | 89.92 | 68.44 | 50.29 | 12.69 | 20.27 | 52.77 | 51.31 | 52.03 |
| 4000 | 58.64 | 89.61 | 70.89 | 67.22 | 23.97 | 35.34 | 62.93 | 56.79 | 59.70 |
| 5000 | 60.77 | 90.16 | 72.60 | 74.02 | 30.99 | 43.69 | 67.40 | 60.58 | 63.81 |
| 6000 | 62.60 | 89.59 | 73.70 | 76.85 | 36.23 | 49.24 | 69.73 | 62.91 | 66.14 |
| 7000 | 64.90 | 89.62 | 75.28 | 80.26 | 42.21 | 55.32 | 72.58 | 65.92 | 69.09 |
| 8000 | 67.03 | 89.61 | 76.69 | 82.63 | 48.16 | 60.85 | 74.83 | 68.89 | 71.74 |
| 9000 | 67.90 | 89.35 | 77.16 | 82.83 | 50.46 | 62.71 | 75.37 | 69.91 | 72.54 |
| 10000 | 69.91 | 89.67 | 78.57 | 83.81 | 54.10 | 65.75 | 76.86 | 71.89 | 74.29 |
| Compute Mean | | | | | | | | | |
| 1000 | 55.57 | 89.67 | 68.62 | 6.41 | 7.71 | 7.00 | 30.99 | 48.69 | 37.87 |
| 2000 | 55.86 | 90.10 | 68.96 | 9.09 | 10.98 | 9.95 | 32.48 | 50.54 | 39.55 |
| 3000 | 57.29 | 89.92 | 69.99 | 13.54 | 15.52 | 14.46 | 35.42 | 52.72 | 42.37 |
| 4000 | 58.63 | 89.40 | 70.82 | 15.88 | 18.07 | 16.90 | 37.26 | 53.74 | 44.01 |
| 5000 | 59.39 | 89.72 | 71.47 | 19.00 | 20.82 | 19.87 | 39.19 | 55.27 | 45.86 |
| 6000 | 61.03 | 89.34 | 72.52 | 23.67 | 25.77 | 24.68 | 42.35 | 57.56 | 48.80 |
| 7000 | 61.67 | 89.63 | 73.07 | 27.36 | 28.05 | 27.70 | 44.51 | 58.84 | 50.68 |
| 8000 | 63.27 | 89.30 | 74.06 | 31.58 | 32.01 | 31.79 | 47.42 | 60.66 | 53.23 |
| 9000 | 64.12 | 89.15 | 74.59 | 35.75 | 34.22 | 34.97 | 49.93 | 61.69 | 55.19 |
| 10000 | 65.70 | 89.18 | 75.66 | 40.30 | 38.12 | 39.18 | 53.00 | 63.65 | 57.84 |
| Compute Euclid Distance | | | | | | | | | |
| 1000 | 54.84 | 90.10 | 68.18 | 30.39 | 11.45 | 16.63 | 42.61 | 50.78 | 46.34 |
| 2000 | 59.14 | 90.22 | 71.45 | 56.80 | 21.40 | 31.09 | 57.97 | 55.81 | 56.87 |
| 3000 | 62.09 | 90.02 | 73.49 | 73.87 | 31.43 | 44.10 | 67.98 | 60.72 | 64.15 |
| 4000 | 64.82 | 89.66 | 75.24 | 78.40 | 39.37 | 52.42 | 71.61 | 64.51 | 67.87 |
| 5000 | 67.45 | 89.29 | 76.85 | 80.12 | 47.33 | 59.51 | 73.78 | 68.31 | 70.94 |
| 6000 | 69.63 | 89.93 | 78.49 | 83.73 | 54.05 | 65.69 | 76.68 | 71.99 | 74.26 |
| 7000 | 71.18 | 89.45 | 79.28 | 83.37 | 57.08 | 67.76 | 77.27 | 73.26 | 75.21 |
| 8000 | 71.48 | 89.20 | 79.36 | 83.93 | 58.09 | 68.66 | 77.71 | 73.64 | 75.62 |
| 9000 | 72.10 | 89.09 | 79.70 | 84.41 | 60.30 | 70.35 | 78.26 | 74.69 | 76.43 |
| 10000 | 73.45 | 88.92 | 80.45 | 85.03 | 62.08 | 71.76 | 79.24 | 75.50 | 77.32 |
| Compute Mahalanobis Distance | | | | | | | | | |
| 1000 | 54.72 | 89.65 | 67.96 | 29.72 | 9.65 | 14.57 | 42.22 | 49.65 | 45.63 |
| 2000 | 59.75 | 90.12 | 71.86 | 55.68 | 22.54 | 32.09 | 57.72 | 56.33 | 57.02 |
| 3000 | 61.80 | 89.69 | 73.18 | 66.02 | 31.36 | 42.52 | 63.91 | 60.53 | 62.17 |
| 4000 | 64.73 | 90.20 | 75.37 | 75.62 | 39.72 | 52.08 | 70.17 | 64.96 | 67.46 |
| 5000 | 67.11 | 89.53 | 76.72 | 80.63 | 46.47 | 58.96 | 73.87 | 68.00 | 70.81 |
| 6000 | 69.20 | 89.61 | 78.09 | 82.91 | 51.50 | 63.53 | 76.06 | 70.56 | 73.21 |
| 7000 | 70.90 | 89.39 | 79.08 | 83.39 | 55.98 | 66.99 | 77.14 | 72.68 | 74.84 |
| 8000 | 71.13 | 89.20 | 79.15 | 84.92 | 58.64 | 69.37 | 78.03 | 73.92 | 75.92 |
| 9000 | 73.21 | 89.59 | 80.58 | 85.68 | 61.90 | 71.87 | 79.44 | 75.75 | 77.55 |
| 10000 | 73.92 | 89.11 | 80.81 | 86.08 | 64.00 | 73.42 | 80.00 | 76.55 | 78.24 |
| Compute Max Probability | | | | | | | | | |
| 1000 | 53.87 | 89.72 | 67.32 | 17.00 | 7.15 | 10.07 | 35.44 | 48.43 | 40.93 |
| 2000 | 56.30 | 90.26 | 69.35 | 33.36 | 16.72 | 22.28 | 44.83 | 53.49 | 48.78 |
| 3000 | 59.59 | 89.59 | 71.57 | 56.36 | 24.13 | 33.79 | 57.97 | 56.86 | 57.41 |
| 4000 | 62.23 | 89.46 | 73.40 | 69.30 | 34.70 | 46.24 | 65.77 | 62.08 | 63.87 |
| 5000 | 66.08 | 89.81 | 76.14 | 76.18 | 44.24 | 55.97 | 71.13 | 67.03 | 69.02 |
| 6000 | 68.11 | 89.66 | 77.41 | 84.26 | 50.07 | 62.81 | 76.18 | 69.86 | 72.88 |
| 7000 | 69.64 | 89.52 | 78.34 | 83.82 | 54.43 | 66.00 | 76.73 | 71.97 | 74.27 |
| 8000 | 72.31 | 89.36 | 79.94 | 85.28 | 59.61 | 70.17 | 78.80 | 74.48 | 76.58 |
| 9000 | 73.03 | 89.30 | 80.35 | 85.77 | 62.45 | 72.28 | 79.40 | 75.87 | 77.59 |
| 10000 | 74.69 | 88.94 | 81.19 | 85.24 | 65.03 | 73.78 | 79.96 | 76.98 | 78.44 |
| Placeholders Algorithm | | | | | | | | | |
| 1000 | 54.45 | 90.31 | 67.94 | 29.47 | 10.84 | 15.85 | 41.96 | 50.57 | 45.86 |
| 2000 | 57.17 | 90.10 | 69.95 | 44.44 | 18.82 | 26.44 | 50.80 | 54.46 | 52.57 |
| 3000 | 61.10 | 89.96 | 72.77 | 55.81 | 29.59 | 38.67 | 58.45 | 59.77 | 59.10 |
| 4000 | 62.74 | 89.85 | 73.89 | 73.03 | 36.60 | 48.76 | 67.88 | 63.23 | 65.47 |
| 5000 | 65.90 | 89.69 | 75.98 | 79.94 | 43.86 | 56.64 | 72.92 | 66.77 | 69.71 |
| 6000 | 67.59 | 89.34 | 76.96 | 83.69 | 49.98 | 62.58 | 75.64 | 69.66 | 72.53 |
| 7000 | 70.52 | 89.73 | 78.97 | 84.42 | 56.92 | 67.99 | 77.47 | 73.33 | 75.34 |
| 8000 | 71.90 | 89.64 | 79.80 | 85.32 | 59.24 | 69.93 | 78.61 | 74.44 | 76.47 |
| 9000 | 72.34 | 89.04 | 79.83 | 83.72 | 58.95 | 69.18 | 78.03 | 73.99 | 75.96 |
| 10000 | 73.41 | 89.71 | 80.75 | 85.84 | 61.92 | 71.94 | 79.63 | 75.81 | 77.67 |
| Few shot Open set Recognition | | | | | | | | | |
| Known class recall | Known Class F1 | Novel Class Precision | Novel Class Recall | | | | | | |
| Table 1: Novelty Accommodation Stage: Dataset 1: Retrain using DT and DF | | | | | | | | | | | Known | | | | | | | | | |
|-------------------------------------------------------------------------------|----------------|-----------------------|--------------------|---------|---------|-------|-------|-------|-------|
| # of | Class | | | | | | | | |
| Novelties | precision | Novel | Overall | Overall | Overall | | | | |
| Class | Precision | Recall | F1 | | | | | | |
| F1 | | | | | | | | | |
| 1000 | 74.55 | 26.02 | 38.58 | 3.43 | 4.92 | 4.04 | 38.99 | 15.47 | 22.15 |
| 2000 | 61.86 | 18.28 | 28.22 | 11.74 | 15.12 | 13.22 | 36.80 | 16.70 | 22.97 |
| 3000 | 75.14 | 23.99 | 36.37 | 27.67 | 25.67 | 26.63 | 51.41 | 24.83 | 33.49 |
| 4000 | 73.70 | 20.93 | 32.60 | 34.57 | 38.53 | 36.44 | 54.14 | 29.73 | 38.38 |
| 5000 | 77.08 | 19.84 | 31.56 | 40.72 | 48.88 | 44.43 | 58.90 | 34.36 | 43.40 |
| 6000 | 72.90 | 19.24 | 30.44 | 43.49 | 57.53 | 49.53 | 58.19 | 38.38 | 46.25 |
| 7000 | 61.77 | 8.98 | 15.68 | 44.02 | 61.30 | 51.24 | 52.90 | 35.14 | 42.23 |
| 8000 | 58.13 | 8.69 | 15.12 | 47.92 | 66.64 | 55.75 | 53.03 | 37.66 | 44.04 |
| 9000 | 53.48 | 6.53 | 11.64 | 48.35 | 69.98 | 57.19 | 50.91 | 38.25 | 43.68 |
| 10000 | 38.57 | 6.35 | 10.90 | 48.24 | 71.15 | 57.50 | 43.41 | 38.75 | 40.95 |
| Compute Mean | | | | | | | | | |
| 1000 | 39.95 | 6.68 | 11.45 | 1.03 | 9.21 | 1.85 | 20.49 | 7.95 | 11.46 |
| 2000 | 16.75 | 1.06 | 1.99 | 2.01 | 13.72 | 3.51 | 9.38 | 7.39 | 8.27 |
| 3000 | 27.00 | 3.90 | 6.82 | 3.91 | 18.67 | 6.47 | 15.45 | 11.29 | 13.05 |
| 4000 | 23.00 | 1.59 | 2.97 | 4.62 | 21.82 | 7.63 | 13.81 | 11.71 | 12.67 |
| 5000 | 19.92 | 1.56 | 2.89 | 5.25 | 24.94 | 8.67 | 12.59 | 13.25 | 12.91 |
| 6000 | 11.00 | 0.87 | 1.61 | 8.57 | 30.25 | 13.36 | 9.79 | 15.56 | 12.02 |
| 7000 | 20.67 | 3.02 | 5.27 | 11.31 | 33.58 | 16.92 | 15.99 | 18.30 | 17.07 |
| 8000 | 11.00 | 0.75 | 1.40 | 12.96 | 37.76 | 19.30 | 11.98 | 19.26 | 14.77 |
| 9000 | 11.00 | 0.11 | 0.22 | 15.43 | 41.53 | 22.50 | 13.22 | 20.82 | 16.17 |
| 10000 | 12.00 | 0.32 | 0.62 | 18.28 | 45.52 | 26.08 | 15.14 | 22.92 | 18.23 |
| Compute Euclid Distance | | | | | | | | | |
| 1000 | 79.11 | 40.70 | 53.75 | 17.10 | 16.37 | 16.73 | 48.10 | 28.54 | 35.82 |
| 2000 | 85.75 | 35.22 | 49.93 | 35.85 | 31.74 | 33.67 | 60.80 | 33.48 | 43.18 |
| 3000 | 76.62 | 17.90 | 29.02 | 46.81 | 44.39 | 45.57 | 61.72 | 31.14 | 41.39 |
| 4000 | 72.88 | 13.35 | 22.57 | 47.42 | 56.88 | 51.72 | 60.15 | 35.11 | 44.34 |
| 5000 | 59.96 | 11.48 | 19.27 | 49.27 | 67.04 | 56.80 | 54.62 | 39.26 | 45.68 |
| 6000 | 43.27 | 6.98 | 12.02 | 49.74 | 70.48 | 58.32 | 46.50 | 38.73 | 42.26 |
| 7000 | 33.50 | 3.19 | 5.83 | 48.67 | 73.52 | 58.57 | 41.09 | 38.35 | 39.67 |
| 8000 | 35.45 | 4.49 | 7.97 | 49.63 | 75.66 | 59.94 | 42.54 | 40.08 | 41.27 |
| 9000 | 26.33 | 2.93 | 5.27 | 50.30 | 78.02 | 61.17 | 38.32 | 40.47 | 39.37 |
| 10000 | 17.00 | 3.14 | 5.30 | 51.87 | 79.31 | 62.72 | 34.43 | 41.23 | 37.52 |
| Compute Mahalanobis Distance | | | | | | | | | |
| 1000 | 77.64 | 31.72 | 45.04 | 11.59 | 15.90 | 13.41 | 44.61 | 23.81 | 31.05 |
| 2000 | 79.25 | 22.94 | 35.58 | 27.57 | 32.88 | 29.99 | 53.41 | 27.91 | 36.66 |
| 3000 | 88.60 | 23.51 | 37.16 | 35.89 | 42.37 | 38.86 | 62.25 | 32.94 | 43.08 |
| 4000 | 78.20 | 10.35 | 18.28 | 42.53 | 56.15 | 48.40 | 60.37 | 33.25 | 42.88 |
| 5000 | 50.86 | 7.91 | 13.69 | 45.89 | 60.54 | 52.21 | 48.38 | 34.23 | 40.09 |
| 6000 | 50.45 | 6.47 | 11.47 | 47.37 | 70.73 | 56.74 | 48.91 | 38.60 | 43.15 |
| 7000 | 27.26 | 2.83 | 5.13 | 47.90 | 74.23 | 58.23 | 37.58 | 38.53 | 38.05 |
| 8000 | 21.00 | 2.65 | 4.71 | 49.53 | 77.02 | 60.29 | 35.27 | 39.83 | 37.41 |
| 9000 | 22.00 | 3.56 | 6.13 | 50.31 | 78.15 | 61.21 | 36.15 | 40.85 | 38.36 |
| 10000 | 17.97 | 1.25 | 2.34 | 51.21 | 80.49 | 62.60 | 34.59 | 40.87 | 37.47 |
| Compute Max Probability | | | | | | | | | |
| 1000 | 64.52 | 20.06 | 30.60 | 3.20 | 6.38 | 4.26 | 33.86 | 13.22 | 19.02 |
| 2000 | 84.61 | 25.01 | 38.61 | 18.63 | 22.15 | 20.24 | 51.62 | 23.58 | 32.37 |
| 3000 | 57.24 | 11.32 | 18.90 | 24.95 | 35.16 | 29.19 | 41.10 | 23.24 | 29.69 |
| 4000 | 63.88 | 11.03 | 18.81 | 36.70 | 47.23 | 41.30 | 50.29 | 29.13 | 36.89 |
| 5000 | 56.02 | 6.07 | 10.95 | 42.32 | 58.27 | 49.03 | 49.17 | 32.17 | 38.89 |
| 6000 | 43.84 | 5.29 | 9.44 | 45.00 | 65.42 | 53.32 | 44.42 | 35.35 | 39.37 |
| 7000 | 37.02 | 5.00 | 8.81 | 47.77 | 71.41 | 57.25 | 42.39 | 38.21 | 40.19 |
| 8000 | 28.85 | 4.55 | 7.86 | 48.98 | 75.65 | 59.46 | 38.91 | 40.10 | 39.50 |
| 9000 | 16.00 | 1.86 | 3.33 | 49.15 | 78.29 | 60.39 | 32.58 | 40.07 | 35.94 |
| 10000 | 19.90 | 2.05 | 3.72 | 51.34 | 79.65 | 62.44 | 35.62 | 40.85 | 38.06 |
| Placeholders Algorithm | | | | | | | | | |
| 1000 | 75.81 | 26.33 | 39.09 | 5.38 | 10.21 | 7.05 | 40.60 | 18.27 | 25.20 |
| 2000 | 83.63 | 24.58 | 37.99 | 18.81 | 27.74 | 22.42 | 51.22 | 26.16 | 34.63 |
| 3000 | 84.29 | 19.97 | 32.29 | 28.87 | 40.48 | 33.70 | 56.58 | 30.23 | 39.41 |
| 4000 | 59.16 | 9.35 | 16.15 | 35.60 | 50.84 | 41.88 | 47.38 | 30.09 | 36.81 |
| 5000 | 60.67 | 6.64 | 11.97 | 42.08 | 59.11 | 49.16 | 51.38 | 32.87 | 40.09 |
| 6000 | 50.79 | 5.49 | 9.91 | 48.35 | 69.81 | 57.13 | 49.57 | 37.65 | 42.80 |
| 7000 | 29.00 | 3.19 | 5.75 | 47.59 | 74.18 | 57.98 | 38.30 | 38.68 | 38.49 |
| 8000 | 26.99 | 3.25 | 5.80 | 48.55 | 73.98 | 58.63 | 37.77 | 38.62 | 38.19 |
| 9000 | 25.70 | 2.20 | 4.05 | 50.46 | 77.10 | 61.00 | 38.08 | 39.65 | 38.85 |
| 10000 | 19.94 | 1.77 | 3.25 | 49.57 | 78.38 | 60.73 | 34.76 | 40.07 | 37.23 |
| Few shot Open set Recognition | | | | | | | | | |
| Known class recall | Known Class F1 | Novel Class Precision | Novel Class Recall | | | | | | |
| Table 2: Novelty Accommodation Stage: Dataset 1: Further Fine-tune using DF . | | | | | | | | | | | Known | | | | | | | | | |
|----------------------------------------------------------------------------------------------|----------------|-----------------------|--------------------|---------|---------|-------|-------|-------|-------|
| # of | Class | | | | | | | | |
| Novelties | precision | Novel | Overall | Overall | Overall | | | | |
| Class | Precision | Recall | F1 | | | | | | |
| F1 | | | | | | | | | |
| 1000 | 59.69 | 87.26 | 70.89 | 15.37 | 4.64 | 7.13 | 37.53 | 45.95 | 41.32 |
| 2000 | 62.66 | 87.70 | 73.09 | 32.20 | 14.14 | 19.65 | 47.43 | 50.92 | 49.11 |
| 3000 | 65.50 | 86.96 | 74.72 | 48.58 | 22.47 | 30.73 | 57.04 | 54.72 | 55.86 |
| 4000 | 68.97 | 87.08 | 76.97 | 62.66 | 34.79 | 44.74 | 65.81 | 60.94 | 63.28 |
| 5000 | 73.44 | 86.87 | 79.59 | 65.70 | 44.87 | 53.32 | 69.57 | 65.87 | 67.67 |
| 6000 | 75.33 | 85.70 | 80.18 | 67.43 | 51.22 | 58.22 | 71.38 | 68.46 | 69.89 |
| 7000 | 78.45 | 85.66 | 81.90 | 70.25 | 57.98 | 63.53 | 74.35 | 71.82 | 73.06 |
| 8000 | 80.33 | 85.03 | 82.61 | 72.67 | 62.59 | 67.25 | 76.50 | 73.81 | 75.13 |
| 9000 | 81.31 | 84.65 | 82.95 | 73.82 | 66.87 | 70.17 | 77.56 | 75.76 | 76.65 |
| 10000 | 82.88 | 84.44 | 83.65 | 74.13 | 68.83 | 71.38 | 78.50 | 76.64 | 77.56 |
| Compute Mean | | | | | | | | | |
| 1000 | 57.36 | 88.99 | 69.76 | 3.95 | 8.73 | 5.44 | 30.66 | 48.86 | 37.68 |
| 2000 | 60.92 | 88.75 | 72.25 | 5.96 | 12.81 | 8.14 | 33.44 | 50.78 | 40.32 |
| 3000 | 63.45 | 88.03 | 73.75 | 9.74 | 17.96 | 12.63 | 36.60 | 52.99 | 43.30 |
| 4000 | 65.72 | 87.84 | 75.19 | 11.33 | 20.86 | 14.68 | 38.52 | 54.35 | 45.09 |
| 5000 | 65.68 | 87.91 | 75.19 | 13.52 | 23.76 | 17.23 | 39.60 | 55.83 | 46.33 |
| 6000 | 68.42 | 87.28 | 76.71 | 18.28 | 28.87 | 22.39 | 43.35 | 58.08 | 49.65 |
| 7000 | 69.71 | 86.83 | 77.33 | 21.18 | 31.86 | 25.44 | 45.44 | 59.34 | 51.47 |
| 8000 | 71.32 | 87.33 | 78.52 | 25.46 | 35.86 | 29.78 | 48.39 | 61.60 | 54.20 |
| 9000 | 73.99 | 86.78 | 79.88 | 28.26 | 39.38 | 32.91 | 51.12 | 63.08 | 56.47 |
| 10000 | 75.89 | 86.23 | 80.73 | 32.54 | 43.27 | 37.15 | 54.21 | 64.75 | 59.01 |
| Compute Euclid Distance | | | | | | | | | |
| 1000 | 63.23 | 87.81 | 73.52 | 30.48 | 15.33 | 20.40 | 46.86 | 51.57 | 49.10 |
| 2000 | 68.99 | 87.47 | 77.14 | 52.53 | 28.39 | 36.86 | 60.76 | 57.93 | 59.31 |
| 3000 | 72.93 | 86.67 | 79.21 | 64.15 | 42.46 | 51.10 | 68.54 | 64.57 | 66.50 |
| 4000 | 75.98 | 86.22 | 80.78 | 70.70 | 53.82 | 61.12 | 73.34 | 70.02 | 71.64 |
| 5000 | 79.22 | 85.36 | 82.18 | 73.09 | 62.86 | 67.59 | 76.15 | 74.11 | 75.12 |
| 6000 | 81.70 | 85.02 | 83.33 | 74.91 | 68.35 | 71.48 | 78.31 | 76.69 | 77.49 |
| 7000 | 82.01 | 85.48 | 83.71 | 76.69 | 70.31 | 73.36 | 79.35 | 77.89 | 78.61 |
| 8000 | 82.80 | 85.56 | 84.16 | 77.90 | 72.09 | 74.88 | 80.35 | 78.83 | 79.58 |
| 9000 | 83.54 | 84.62 | 84.08 | 77.92 | 73.29 | 75.53 | 80.73 | 78.95 | 79.83 |
| 10000 | 84.50 | 84.31 | 84.40 | 78.39 | 75.05 | 76.68 | 81.45 | 79.68 | 80.56 |
| Compute Mahalanobis Distance | | | | | | | | | |
| 1000 | 61.52 | 88.11 | 72.45 | 32.72 | 16.57 | 22.00 | 47.12 | 52.34 | 49.59 |
| 2000 | 68.34 | 87.25 | 76.65 | 49.43 | 30.10 | 37.42 | 58.89 | 58.67 | 58.78 |
| 3000 | 73.16 | 86.78 | 79.39 | 61.57 | 41.30 | 49.44 | 67.36 | 64.04 | 65.66 |
| 4000 | 76.49 | 86.25 | 81.08 | 67.36 | 51.58 | 58.42 | 71.93 | 68.92 | 70.39 |
| 5000 | 78.93 | 85.77 | 82.21 | 72.94 | 61.55 | 66.76 | 75.94 | 73.66 | 74.78 |
| 6000 | 80.99 | 85.69 | 83.27 | 74.78 | 66.77 | 70.55 | 77.88 | 76.23 | 77.05 |
| 7000 | 81.71 | 84.69 | 83.17 | 75.56 | 68.76 | 72.00 | 78.63 | 76.72 | 77.66 |
| 8000 | 83.11 | 85.45 | 84.26 | 77.29 | 72.60 | 74.87 | 80.20 | 79.03 | 79.61 |
| 9000 | 83.76 | 84.45 | 84.10 | 77.86 | 73.79 | 75.77 | 80.81 | 79.12 | 79.96 |
| 10000 | 84.55 | 84.18 | 84.36 | 78.58 | 75.10 | 76.80 | 81.57 | 79.64 | 80.59 |
| Compute Max Probability | | | | | | | | | |
| 1000 | 58.20 | 88.66 | 70.27 | 23.09 | 12.71 | 16.40 | 40.65 | 50.69 | 45.12 |
| 2000 | 65.05 | 88.25 | 74.89 | 37.00 | 26.00 | 30.54 | 51.02 | 57.12 | 53.90 |
| 3000 | 68.66 | 87.88 | 77.09 | 57.39 | 35.26 | 43.68 | 63.03 | 61.57 | 62.29 |
| 4000 | 73.22 | 87.58 | 79.76 | 64.21 | 47.60 | 54.67 | 68.71 | 67.59 | 68.15 |
| 5000 | 77.43 | 86.70 | 81.80 | 70.83 | 57.74 | 63.62 | 74.13 | 72.22 | 73.16 |
| 6000 | 79.69 | 86.50 | 82.96 | 74.79 | 63.95 | 68.95 | 77.24 | 75.23 | 76.22 |
| 7000 | 81.81 | 85.93 | 83.82 | 76.06 | 69.15 | 72.44 | 78.93 | 77.54 | 78.23 |
| 8000 | 82.86 | 85.38 | 84.10 | 76.89 | 71.83 | 74.27 | 79.88 | 78.60 | 79.23 |
| 9000 | 84.09 | 85.00 | 84.54 | 77.74 | 74.36 | 76.01 | 80.91 | 79.68 | 80.29 |
| 10000 | 84.60 | 84.47 | 84.53 | 78.24 | 75.36 | 76.77 | 81.42 | 79.91 | 80.66 |
| Placeholders Algorithm | | | | | | | | | |
| 1000 | 58.94 | 88.66 | 70.81 | 28.65 | 17.12 | 21.43 | 43.80 | 52.89 | 47.92 |
| 2000 | 64.40 | 88.71 | 74.63 | 49.10 | 27.78 | 35.48 | 56.75 | 58.25 | 57.49 |
| 3000 | 68.68 | 88.11 | 77.19 | 59.69 | 38.73 | 46.98 | 64.19 | 63.42 | 63.80 |
| 4000 | 72.46 | 87.32 | 79.20 | 68.83 | 46.58 | 55.56 | 70.64 | 66.95 | 68.75 |
| 5000 | 75.73 | 87.02 | 80.98 | 75.08 | 58.55 | 65.79 | 75.41 | 72.79 | 74.08 |
| 6000 | 79.48 | 85.99 | 82.61 | 75.94 | 65.82 | 70.52 | 77.71 | 75.91 | 76.80 |
| 7000 | 81.18 | 85.87 | 83.46 | 77.52 | 69.19 | 73.12 | 79.35 | 77.53 | 78.43 |
| 8000 | 83.05 | 85.40 | 84.21 | 78.70 | 73.36 | 75.94 | 80.87 | 79.38 | 80.12 |
| 9000 | 82.48 | 85.17 | 83.80 | 78.39 | 72.41 | 75.28 | 80.44 | 78.79 | 79.61 |
| 10000 | 83.14 | 85.43 | 84.27 | 78.92 | 73.88 | 76.32 | 81.03 | 79.65 | 80.33 |
| Few shot Open set Recognition | | | | | | | | | |
| Known class recall | Known Class F1 | Novel Class Precision | Novel Class Recall | | | | | | |
| Table 3: Novelty Accommodation Stage: Dataset 1: Further Fine-tune using Sampled DT and DF . | | | | | | | | | |
| Known | | | | | | | | | |
|--------------------------------------------------------------------------|----------------|-----------------------|--------------------|---------|---------|-------|-------|-------|-------|
| # of | Class | | | | | | | | |
| Novelties | precision | Novel | Overall | Overall | Overall | | | | |
| Class | Precision | Recall | F1 | | | | | | |
| F1 | | | | | | | | | |
| 4000 | 57.47 | 89.90 | 70.12 | 57.28 | 20.96 | 30.69 | 57.38 | 55.43 | 56.39 |
| 8000 | 66.31 | 88.96 | 75.98 | 83.59 | 45.46 | 58.89 | 74.95 | 67.21 | 70.87 |
| 12000 | 71.26 | 88.93 | 79.12 | 85.16 | 57.43 | 68.60 | 78.21 | 73.18 | 75.61 |
| 16000 | 75.42 | 89.55 | 81.88 | 86.76 | 67.05 | 75.64 | 81.09 | 78.30 | 79.67 |
| 20000 | 77.47 | 89.41 | 83.01 | 88.40 | 71.43 | 79.01 | 82.94 | 80.42 | 81.66 |
| Compute Mean | | | | | | | | | |
| 4000 | 56.84 | 89.63 | 69.56 | 9.09 | 12.39 | 10.49 | 32.96 | 51.01 | 40.05 |
| 8000 | 60.52 | 89.47 | 72.20 | 15.40 | 21.08 | 17.80 | 37.96 | 55.27 | 45.01 |
| 12000 | 62.75 | 89.67 | 73.83 | 23.36 | 28.78 | 25.79 | 43.06 | 59.22 | 49.86 |
| 16000 | 66.27 | 89.51 | 76.16 | 31.26 | 36.25 | 33.57 | 48.77 | 62.88 | 54.93 |
| 20000 | 68.62 | 89.19 | 77.56 | 40.37 | 43.76 | 42.00 | 54.49 | 66.47 | 59.89 |
| Compute Euclid Distance | | | | | | | | | |
| 4000 | 65.56 | 89.81 | 75.79 | 73.37 | 39.72 | 51.54 | 69.47 | 64.77 | 67.04 |
| 8000 | 73.69 | 89.30 | 80.75 | 85.84 | 63.55 | 73.03 | 79.77 | 76.43 | 78.06 |
| 12000 | 77.47 | 89.19 | 82.92 | 88.01 | 71.65 | 78.99 | 82.74 | 80.42 | 81.56 |
| 16000 | 79.19 | 88.81 | 83.72 | 88.18 | 74.73 | 80.90 | 83.68 | 81.77 | 82.71 |
| 20000 | 80.33 | 88.63 | 84.28 | 89.57 | 77.62 | 83.17 | 84.95 | 83.12 | 84.03 |
| Compute Mahalanobis Distance | | | | | | | | | |
| 4000 | 65.24 | 89.96 | 75.63 | 75.98 | 38.95 | 51.50 | 70.61 | 64.46 | 67.39 |
| 8000 | 73.35 | 89.48 | 80.62 | 85.56 | 61.38 | 71.48 | 79.45 | 75.43 | 77.39 |
| 12000 | 76.74 | 89.29 | 82.54 | 87.99 | 70.02 | 77.98 | 82.37 | 79.66 | 80.99 |
| 16000 | 80.08 | 89.05 | 84.33 | 89.10 | 76.62 | 82.39 | 84.59 | 82.83 | 83.70 |
| 20000 | 80.83 | 89.01 | 84.72 | 90.47 | 78.57 | 84.10 | 85.65 | 83.79 | 84.71 |
| Compute Max Probability | | | | | | | | | |
| 4000 | 61.55 | 90.02 | 73.11 | 58.92 | 28.77 | 38.66 | 60.24 | 59.40 | 59.82 |
| 8000 | 70.28 | 89.56 | 78.76 | 82.11 | 53.64 | 64.89 | 76.19 | 71.60 | 73.82 |
| 12000 | 76.39 | 88.87 | 82.16 | 87.62 | 68.74 | 77.04 | 82.00 | 78.80 | 80.37 |
| 16000 | 80.09 | 89.08 | 84.35 | 88.95 | 76.14 | 82.05 | 84.52 | 82.61 | 83.55 |
| 20000 | 81.25 | 88.80 | 84.86 | 89.25 | 78.42 | 83.49 | 85.25 | 83.61 | 84.42 |
| Placeholders Algorithm | | | | | | | | | |
| 4000 | 61.32 | 89.86 | 72.90 | 57.04 | 30.12 | 39.42 | 59.18 | 59.99 | 59.58 |
| 8000 | 69.68 | 89.87 | 78.50 | 85.11 | 52.89 | 65.24 | 77.39 | 71.38 | 74.26 |
| 12000 | 74.92 | 89.04 | 81.37 | 87.46 | 65.95 | 75.20 | 81.19 | 77.50 | 79.30 |
| 16000 | 78.69 | 88.63 | 83.36 | 88.87 | 74.19 | 80.87 | 83.78 | 81.41 | 82.58 |
| 20000 | 80.82 | 88.82 | 84.63 | 90.20 | 78.17 | 83.76 | 85.51 | 83.49 | 84.49 |
| Few shot Open set Recognition | | | | | | | | | |
| Known class recall | Known Class F1 | Novel Class Precision | Novel Class Recall | | | | | | |
| Table 4: Novelty Accommodation Stage: Dataset 2: Retrain using DT and DF | | | | | | | | | |
| Known | | | | | | | | | |
|-------------------------------------------------------------------------------|----------------|-----------------------|--------------------|---------|---------|-------|-------|-------|-------|
| # of | Class | | | | | | | | |
| Novelties | precision | Novel | Overall | Overall | Overall | | | | |
| Class | Precision | Recall | F1 | | | | | | |
| F1 | | | | | | | | | |
| 4000 | 83.55 | 28.67 | 42.69 | 33.25 | 32.69 | 32.97 | 58.40 | 30.68 | 40.23 |
| 8000 | 48.80 | 7.04 | 12.30 | 45.59 | 61.05 | 52.20 | 47.20 | 34.05 | 39.56 |
| 12000 | 36.71 | 4.56 | 8.11 | 49.62 | 75.64 | 59.93 | 43.17 | 40.10 | 41.58 |
| 16000 | 18.00 | 2.26 | 4.02 | 53.16 | 80.24 | 63.95 | 35.58 | 41.25 | 38.21 |
| 20000 | 10.00 | 3.27 | 4.93 | 55.63 | 84.25 | 67.01 | 32.81 | 43.76 | 37.50 |
| Compute Mean | | | | | | | | | |
| 4000 | 12.89 | 0.76 | 1.44 | 1.90 | 14.41 | 3.36 | 7.39 | 7.58 | 7.48 |
| 8000 | 6.00 | 1.75 | 2.71 | 4.59 | 22.88 | 7.65 | 5.29 | 12.32 | 7.40 |
| 12000 | 13.98 | 2.31 | 3.96 | 8.50 | 31.69 | 13.40 | 11.24 | 17.00 | 13.53 |
| 16000 | 7.76 | 0.22 | 0.43 | 13.20 | 39.82 | 19.83 | 10.48 | 20.02 | 13.76 |
| 20000 | 10.00 | 0.13 | 0.26 | 19.13 | 48.70 | 27.47 | 14.57 | 24.42 | 18.25 |
| Compute Euclid Distance | | | | | | | | | |
| 4000 | 79.08 | 15.55 | 25.99 | 45.61 | 54.41 | 49.62 | 62.34 | 34.98 | 44.81 |
| 8000 | 30.00 | 3.81 | 6.76 | 51.96 | 79.00 | 62.69 | 40.98 | 41.41 | 41.19 |
| 12000 | 15.00 | 1.50 | 2.73 | 54.29 | 84.03 | 65.96 | 34.65 | 42.76 | 38.28 |
| 16000 | 10.00 | 0.75 | 1.40 | 55.82 | 86.85 | 67.96 | 32.91 | 43.80 | 37.58 |
| 20000 | 4.00 | 0.42 | 0.76 | 55.57 | 87.67 | 68.02 | 29.78 | 44.04 | 35.53 |
| Compute Mahalanobis Distance | | | | | | | | | |
| 4000 | 64.78 | 14.62 | 23.86 | 43.47 | 52.66 | 47.63 | 54.12 | 33.64 | 41.49 |
| 8000 | 27.95 | 2.69 | 4.91 | 53.26 | 77.48 | 63.13 | 40.61 | 40.09 | 40.35 |
| 12000 | 9.33 | 0.47 | 0.89 | 53.85 | 83.92 | 65.60 | 31.59 | 42.20 | 36.13 |
| 16000 | 11.00 | 1.31 | 2.34 | 55.60 | 86.41 | 67.66 | 33.30 | 43.86 | 37.86 |
| 20000 | 6.00 | 0.04 | 0.08 | 56.65 | 87.98 | 68.92 | 31.33 | 44.01 | 36.60 |
| Compute Max Probability | | | | | | | | | |
| 4000 | 70.51 | 20.17 | 31.37 | 36.49 | 39.94 | 38.14 | 53.50 | 30.05 | 38.48 |
| 8000 | 25.66 | 1.86 | 3.47 | 46.68 | 69.62 | 55.89 | 36.17 | 35.74 | 35.95 |
| 12000 | 8.00 | 1.15 | 2.01 | 53.00 | 82.74 | 64.61 | 30.50 | 41.94 | 35.32 |
| 16000 | 8.00 | 0.55 | 1.03 | 55.56 | 86.32 | 67.61 | 31.78 | 43.43 | 36.70 |
| 20000 | 7.00 | 0.24 | 0.46 | 57.11 | 87.49 | 69.11 | 32.05 | 43.87 | 37.04 |
| Placeholders Algorithm | | | | | | | | | |
| 4000 | 67.19 | 13.68 | 22.73 | 30.37 | 40.68 | 34.78 | 48.78 | 27.18 | 34.91 |
| 8000 | 31.93 | 2.62 | 4.84 | 47.52 | 69.70 | 56.51 | 39.72 | 36.16 | 37.86 |
| 12000 | 9.00 | 1.53 | 2.62 | 53.00 | 82.51 | 64.54 | 31.00 | 42.02 | 35.68 |
| 16000 | 7.00 | 0.63 | 1.16 | 55.64 | 86.24 | 67.64 | 31.32 | 43.43 | 36.39 |
| 20000 | 7.00 | 0.43 | 0.81 | 56.06 | 87.89 | 68.46 | 31.53 | 44.16 | 36.79 |
| Few shot Open set Recognition | | | | | | | | | |
| Known class recall | Known Class F1 | Novel Class Precision | Novel Class Recall | | | | | | |
| Table 5: Novelty Accommodation Stage: Dataset 2: Further Fine-tune using DF . | | | | | | | | | |
| Known | | | | | | | | | |
|----------------------------------------------------------------------------------------------|----------------|-----------------------|--------------------|---------|---------|-------|-------|-------|-------|
| # of | Class | | | | | | | | |
| Novelties | precision | Novel | Overall | Overall | Overall | | | | |
| Class | Precision | Recall | F1 | | | | | | |
| F1 | | | | | | | | | |
| 4000 | 67.78 | 87.40 | 76.35 | 53.65 | 28.48 | 37.21 | 60.71 | 57.94 | 59.29 |
| 8000 | 78.41 | 85.86 | 81.97 | 73.39 | 59.38 | 65.65 | 75.90 | 72.62 | 74.22 |
| 12000 | 83.72 | 84.33 | 84.02 | 77.60 | 72.56 | 75.00 | 80.66 | 78.45 | 79.54 |
| 16000 | 86.11 | 84.61 | 85.35 | 80.39 | 78.25 | 79.31 | 83.25 | 81.43 | 82.33 |
| 20000 | 86.59 | 84.91 | 85.74 | 82.77 | 81.47 | 82.11 | 84.68 | 83.19 | 83.93 |
| Compute Mean | | | | | | | | | |
| 4000 | 59.33 | 89.27 | 71.28 | 7.03 | 13.27 | 9.19 | 33.18 | 51.27 | 40.29 |
| 8000 | 64.66 | 88.79 | 74.83 | 12.25 | 21.84 | 15.70 | 38.46 | 55.31 | 45.37 |
| 12000 | 67.24 | 88.77 | 76.52 | 19.30 | 29.96 | 23.48 | 43.27 | 59.37 | 50.06 |
| 16000 | 71.70 | 88.24 | 79.11 | 26.76 | 37.66 | 31.29 | 49.23 | 62.95 | 55.25 |
| 20000 | 75.77 | 88.14 | 81.49 | 35.22 | 46.40 | 40.04 | 55.49 | 67.27 | 60.81 |
| Compute Euclid Distance | | | | | | | | | |
| 4000 | 74.92 | 87.17 | 80.58 | 71.98 | 53.51 | 61.39 | 73.45 | 70.34 | 71.86 |
| 8000 | 83.65 | 86.13 | 84.87 | 81.38 | 75.63 | 78.40 | 82.52 | 80.88 | 81.69 |
| 12000 | 85.43 | 85.89 | 85.66 | 84.03 | 80.79 | 82.38 | 84.73 | 83.34 | 84.03 |
| 16000 | 86.43 | 86.35 | 86.39 | 85.35 | 82.88 | 84.10 | 85.89 | 84.61 | 85.25 |
| 20000 | 87.02 | 86.11 | 86.56 | 85.92 | 84.20 | 85.05 | 86.47 | 85.15 | 85.80 |
| Compute Mahalanobis Distance | | | | | | | | | |
| 4000 | 75.00 | 87.16 | 80.62 | 69.84 | 52.95 | 60.23 | 72.42 | 70.05 | 71.22 |
| 8000 | 82.97 | 86.17 | 84.54 | 80.70 | 74.21 | 77.32 | 81.83 | 80.19 | 81.00 |
| 12000 | 85.56 | 85.86 | 85.71 | 83.39 | 80.00 | 81.66 | 84.48 | 82.93 | 83.70 |
| 16000 | 86.82 | 86.13 | 86.47 | 85.13 | 82.81 | 83.95 | 85.97 | 84.47 | 85.21 |
| 20000 | 87.49 | 86.27 | 86.88 | 85.72 | 84.46 | 85.09 | 86.61 | 85.36 | 85.98 |
| Compute Max Probability | | | | | | | | | |
| 4000 | 69.16 | 88.39 | 77.60 | 61.06 | 39.69 | 48.11 | 65.11 | 64.04 | 64.57 |
| 8000 | 80.98 | 86.87 | 83.82 | 78.15 | 68.53 | 73.02 | 79.57 | 77.70 | 78.62 |
| 12000 | 84.78 | 85.84 | 85.31 | 82.84 | 78.16 | 80.43 | 83.81 | 82.00 | 82.90 |
| 16000 | 86.71 | 85.50 | 86.10 | 84.36 | 82.70 | 83.52 | 85.54 | 84.10 | 84.81 |
| 20000 | 87.21 | 85.93 | 86.57 | 85.63 | 84.05 | 84.83 | 86.42 | 84.99 | 85.70 |
| Placeholders Algorithm | | | | | | | | | |
| 4000 | 67.89 | 89.14 | 77.08 | 60.03 | 40.77 | 48.56 | 63.96 | 64.95 | 64.45 |
| 8000 | 77.86 | 88.05 | 82.64 | 80.32 | 65.58 | 72.21 | 79.09 | 76.82 | 77.94 |
| 12000 | 83.66 | 87.03 | 85.31 | 84.28 | 78.54 | 81.31 | 83.97 | 82.78 | 83.37 |
| 16000 | 85.80 | 86.50 | 86.15 | 85.24 | 81.85 | 83.51 | 85.52 | 84.18 | 84.84 |
| 20000 | 86.72 | 86.51 | 86.61 | 86.32 | 83.99 | 85.14 | 86.52 | 85.25 | 85.88 |
| Few shot Open set Recognition | | | | | | | | | |
| Known class recall | Known Class F1 | Novel Class Precision | Novel Class Recall | | | | | | |
| Table 6: Novelty Accommodation Stage: Dataset 2: Further Fine-tune using Sampled DT and DF . | | | | | | | | | |
| Known | | | | | | | | | |
|--------------------------------------------------------------------------|----------------|-----------------------|--------------------|---------|---------|-------|-------|-------|-------|
| # of | Class | | | | | | | | |
| Novelties | precision | Novel | Overall | Overall | Overall | | | | |
| Class | Precision | Recall | F1 | | | | | | |
| F1 | | | | | | | | | |
| 10000 | 70.00 | 90.48 | 78.93 | 86.80 | 54.61 | 67.04 | 78.40 | 72.55 | 75.36 |
| 20000 | 79.06 | 88.63 | 83.57 | 88.26 | 74.07 | 80.54 | 83.66 | 81.35 | 82.49 |
| 30000 | 83.41 | 89.02 | 86.12 | 91.10 | 82.49 | 86.58 | 87.25 | 85.75 | 86.49 |
| 40000 | 85.19 | 88.49 | 86.81 | 91.91 | 86.37 | 89.05 | 88.55 | 87.43 | 87.99 |
| 50000 | 86.87 | 88.36 | 87.61 | 92.39 | 88.83 | 90.58 | 89.63 | 88.59 | 89.11 |
| Compute Mean | | | | | | | | | |
| 10000 | 58.04 | 89.86 | 70.53 | 8.73 | 14.06 | 10.77 | 33.39 | 51.96 | 40.65 |
| 20000 | 61.41 | 89.43 | 72.82 | 15.30 | 22.70 | 18.28 | 38.35 | 56.06 | 45.54 |
| 30000 | 64.68 | 89.13 | 74.96 | 23.33 | 31.21 | 26.70 | 44.00 | 60.17 | 50.83 |
| 40000 | 68.93 | 89.55 | 77.90 | 30.62 | 39.67 | 34.56 | 49.78 | 64.61 | 56.23 |
| 50000 | 72.91 | 89.15 | 80.22 | 40.10 | 48.95 | 44.09 | 56.50 | 69.05 | 62.15 |
| Compute Euclid Distance | | | | | | | | | |
| 10000 | 78.29 | 89.46 | 83.50 | 89.16 | 72.90 | 80.21 | 83.72 | 81.18 | 82.43 |
| 20000 | 84.24 | 88.78 | 86.45 | 92.01 | 84.95 | 88.34 | 88.13 | 86.87 | 87.50 |
| 30000 | 86.28 | 88.35 | 87.30 | 92.63 | 88.62 | 90.58 | 89.46 | 88.48 | 88.97 |
| 40000 | 87.84 | 88.96 | 88.40 | 93.47 | 90.92 | 92.18 | 90.66 | 89.94 | 90.30 |
| 50000 | 88.36 | 88.41 | 88.38 | 93.43 | 91.92 | 92.67 | 90.89 | 90.16 | 90.52 |
| Compute Mahalanobis Distance | | | | | | | | | |
| 10000 | 77.53 | 89.67 | 83.16 | 88.91 | 70.52 | 78.65 | 83.22 | 80.10 | 81.63 |
| 20000 | 83.83 | 88.98 | 86.33 | 91.82 | 83.42 | 87.42 | 87.83 | 86.20 | 87.01 |
| 30000 | 86.29 | 88.76 | 87.51 | 92.95 | 88.61 | 90.73 | 89.62 | 88.68 | 89.15 |
| 40000 | 87.90 | 88.79 | 88.34 | 93.61 | 91.32 | 92.45 | 90.75 | 90.05 | 90.40 |
| 50000 | 88.45 | 88.49 | 88.47 | 93.58 | 92.15 | 92.86 | 91.02 | 90.32 | 90.67 |
| Compute Max Probability | | | | | | | | | |
| 10000 | 70.30 | 89.58 | 78.78 | 77.38 | 53.36 | 63.16 | 73.84 | 71.47 | 72.64 |
| 20000 | 81.93 | 88.82 | 85.24 | 90.36 | 79.85 | 84.78 | 86.14 | 84.34 | 85.23 |
| 30000 | 86.22 | 88.71 | 87.45 | 92.28 | 87.57 | 89.86 | 89.25 | 88.14 | 88.69 |
| 40000 | 88.01 | 89.02 | 88.51 | 93.34 | 90.93 | 92.12 | 90.68 | 89.97 | 90.32 |
| 50000 | 88.47 | 88.88 | 88.67 | 93.77 | 92.06 | 92.91 | 91.12 | 90.47 | 90.79 |
| Placeholders Algorithm | | | | | | | | | |
| 10000 | 69.76 | 89.82 | 78.53 | 79.02 | 52.47 | 63.06 | 74.39 | 71.14 | 72.73 |
| 20000 | 80.36 | 89.10 | 84.50 | 90.91 | 77.81 | 83.85 | 85.63 | 83.45 | 84.53 |
| 30000 | 85.01 | 88.88 | 86.90 | 92.68 | 86.46 | 89.46 | 88.84 | 87.67 | 88.25 |
| 40000 | 87.65 | 89.18 | 88.41 | 93.48 | 90.50 | 91.97 | 90.56 | 89.84 | 90.20 |
| 50000 | 88.53 | 88.51 | 88.52 | 93.38 | 91.89 | 92.63 | 90.96 | 90.20 | 90.58 |
| Few shot Open set Recognition | | | | | | | | | |
| Known class recall | Known Class F1 | Novel Class Precision | Novel Class Recall | | | | | | |
| Table 7: Novelty Accommodation Stage: Dataset 3: Retrain using DT and DF | | | | | | | | | |
| Known | | | | | | | | | |
|-------------------------------------------------------------------------------|----------------|-----------------------|--------------------|---------|---------|-------|-------|-------|-------|
| # of | Class | | | | | | | | |
| Novelties | precision | Novel | Overall | Overall | Overall | | | | |
| Class | Precision | Recall | F1 | | | | | | |
| F1 | | | | | | | | | |
| 10000 | 61.09 | 8.26 | 14.55 | 53.03 | 69.85 | 60.29 | 57.06 | 39.06 | 46.37 |
| 20000 | 11.00 | 1.10 | 2.00 | 56.49 | 86.58 | 68.37 | 33.74 | 43.84 | 38.13 |
| 30000 | 7.00 | 2.34 | 3.51 | 59.43 | 89.90 | 71.56 | 33.22 | 46.12 | 38.62 |
| 40000 | 5.00 | 0.02 | 0.04 | 58.84 | 92.42 | 71.90 | 31.92 | 46.22 | 37.76 |
| 50000 | 2.00 | 0.16 | 0.30 | 59.19 | 93.52 | 72.50 | 30.59 | 46.84 | 37.01 |
| Compute Mean | | | | | | | | | |
| 10000 | 20.73 | 1.59 | 2.95 | 2.04 | 14.72 | 3.58 | 11.38 | 8.16 | 9.50 |
| 20000 | 15.00 | 0.93 | 1.75 | 5.22 | 23.50 | 8.54 | 10.11 | 12.22 | 11.07 |
| 30000 | 12.00 | 0.47 | 0.90 | 8.75 | 32.31 | 13.77 | 10.38 | 16.39 | 12.71 |
| 40000 | 6.00 | 0.01 | 0.02 | 14.42 | 40.90 | 21.32 | 10.21 | 20.46 | 13.62 |
| 50000 | 4.00 | 0.04 | 0.08 | 20.16 | 50.39 | 28.80 | 12.08 | 25.21 | 16.33 |
| Compute Euclid Distance | | | | | | | | | |
| 10000 | 24.98 | 1.43 | 2.71 | 55.64 | 86.18 | 67.62 | 40.31 | 43.81 | 41.99 |
| 20000 | 8.00 | 0.24 | 0.47 | 57.99 | 91.57 | 71.01 | 32.99 | 45.91 | 38.39 |
| 30000 | 5.00 | 0.36 | 0.67 | 58.09 | 93.42 | 71.64 | 31.55 | 46.89 | 37.72 |
| 40000 | 3.00 | 0.02 | 0.04 | 57.68 | 94.68 | 71.69 | 30.34 | 47.35 | 36.98 |
| 50000 | 3.00 | 0.01 | 0.02 | 58.78 | 94.69 | 72.53 | 30.89 | 47.35 | 37.39 |
| Compute Mahalanobis Distance | | | | | | | | | |
| 10000 | 18.00 | 2.07 | 3.71 | 55.95 | 84.56 | 67.34 | 36.98 | 43.31 | 39.90 |
| 20000 | 7.00 | 0.55 | 1.02 | 57.81 | 91.36 | 70.81 | 32.40 | 45.95 | 38.00 |
| 30000 | 4.00 | 0.57 | 1.00 | 58.52 | 93.62 | 72.02 | 31.26 | 47.10 | 37.58 |
| 40000 | 2.00 | 0.14 | 0.26 | 59.38 | 94.64 | 72.97 | 30.69 | 47.39 | 37.25 |
| 50000 | 0.00 | 0.00 | 0.00 | 59.54 | 95.01 | 73.20 | 29.77 | 47.51 | 36.60 |
| Compute Max Probability | | | | | | | | | |
| 10000 | 23.87 | 4.33 | 7.33 | 48.60 | 68.95 | 57.01 | 36.23 | 36.64 | 36.43 |
| 20000 | 8.00 | 0.06 | 0.12 | 56.57 | 89.78 | 69.41 | 32.28 | 44.92 | 37.57 |
| 30000 | 3.00 | 0.13 | 0.25 | 58.05 | 92.88 | 71.45 | 30.52 | 46.51 | 36.86 |
| 40000 | 2.00 | 0.02 | 0.04 | 59.85 | 94.23 | 73.20 | 30.93 | 47.12 | 37.35 |
| 50000 | 1.00 | 0.00 | 0.00 | 59.12 | 94.99 | 72.88 | 30.06 | 47.50 | 36.82 |
| Placeholders Algorithm | | | | | | | | | |
| 10000 | 38.14 | 3.87 | 7.03 | 42.99 | 69.57 | 53.14 | 40.57 | 36.72 | 38.55 |
| 20000 | 9.97 | 0.21 | 0.41 | 56.54 | 89.17 | 69.20 | 33.26 | 44.69 | 38.14 |
| 30000 | 4.00 | 0.06 | 0.12 | 58.77 | 92.85 | 71.98 | 31.38 | 46.46 | 37.46 |
| 40000 | 2.00 | 0.01 | 0.02 | 59.66 | 94.14 | 73.04 | 30.83 | 47.07 | 37.26 |
| 50000 | 1.00 | 0.05 | 0.10 | 59.44 | 94.73 | 73.05 | 30.22 | 47.39 | 36.90 |
| Few shot Open set Recognition | | | | | | | | | |
| Known class recall | Known Class F1 | Novel Class Precision | Novel Class Recall | | | | | | |
| Table 8: Novelty Accommodation Stage: Dataset 3: Further Fine-tune using DF . | | | | | | | | | |
| Known | | | | | | | | | |
|----------------------------------------------------------------------------------------------|----------------|-----------------------|--------------------|---------|---------|-------|-------|-------|-------|
| # of | Class | | | | | | | | |
| Novelties | precision | Novel | Overall | Overall | Overall | | | | |
| Class | Precision | Recall | F1 | | | | | | |
| F1 | | | | | | | | | |
| 10000 | 81.85 | 85.73 | 83.75 | 77.75 | 67.71 | 72.38 | 79.80 | 76.72 | 78.23 |
| 20000 | 87.54 | 85.57 | 86.54 | 84.59 | 83.74 | 84.16 | 86.06 | 84.65 | 85.35 |
| 30000 | 89.39 | 85.82 | 87.57 | 87.03 | 88.01 | 87.52 | 88.21 | 86.91 | 87.56 |
| 40000 | 90.01 | 86.48 | 88.21 | 88.53 | 89.79 | 89.16 | 89.27 | 88.14 | 88.70 |
| 50000 | 90.35 | 87.23 | 88.76 | 89.81 | 91.12 | 90.46 | 90.08 | 89.17 | 89.62 |
| Compute Mean | | | | | | | | | |
| 10000 | 60.69 | 90.58 | 72.68 | 7.41 | 14.23 | 9.75 | 34.05 | 52.40 | 41.28 |
| 20000 | 65.10 | 90.15 | 75.60 | 13.30 | 22.92 | 16.83 | 39.20 | 56.53 | 46.30 |
| 30000 | 67.89 | 90.59 | 77.61 | 20.53 | 31.52 | 24.86 | 44.21 | 61.05 | 51.28 |
| 40000 | 73.00 | 90.40 | 80.77 | 28.22 | 39.92 | 33.07 | 50.61 | 65.16 | 56.97 |
| 50000 | 76.60 | 90.26 | 82.87 | 36.91 | 49.37 | 42.24 | 56.75 | 69.81 | 62.61 |
| Compute Euclid Distance | | | | | | | | | |
| 10000 | 85.32 | 87.97 | 86.62 | 86.95 | 81.83 | 84.31 | 86.14 | 84.90 | 85.52 |
| 20000 | 88.53 | 88.27 | 88.40 | 89.91 | 88.44 | 89.17 | 89.22 | 88.35 | 88.78 |
| 30000 | 89.74 | 89.35 | 89.54 | 91.56 | 90.55 | 91.05 | 90.65 | 89.95 | 90.30 |
| 40000 | 90.03 | 89.28 | 89.65 | 92.15 | 91.62 | 91.88 | 91.09 | 90.45 | 90.77 |
| 50000 | 90.96 | 89.34 | 90.14 | 92.21 | 92.58 | 92.39 | 91.58 | 90.96 | 91.27 |
| Compute Mahalanobis Distance | | | | | | | | | |
| 10000 | 84.75 | 87.44 | 86.07 | 85.76 | 79.61 | 82.57 | 85.26 | 83.52 | 84.38 |
| 20000 | 88.19 | 88.47 | 88.33 | 90.19 | 88.10 | 89.13 | 89.19 | 88.28 | 88.73 |
| 30000 | 89.59 | 88.70 | 89.14 | 91.41 | 90.69 | 91.05 | 90.50 | 89.70 | 90.10 |
| 40000 | 90.82 | 89.31 | 90.06 | 91.88 | 92.13 | 92.00 | 91.35 | 90.72 | 91.03 |
| 50000 | 91.16 | 89.51 | 90.33 | 92.55 | 93.09 | 92.82 | 91.85 | 91.30 | 91.57 |
| Compute Max Probability | | | | | | | | | |
| 10000 | 78.52 | 89.09 | 83.47 | 80.19 | 64.44 | 71.46 | 79.36 | 76.76 | 78.04 |
| 20000 | 87.03 | 88.68 | 87.85 | 89.04 | 85.28 | 87.12 | 88.03 | 86.98 | 87.50 |
| 30000 | 89.45 | 88.28 | 88.86 | 90.24 | 89.79 | 90.01 | 89.84 | 89.03 | 89.43 |
| 40000 | 90.66 | 88.28 | 89.45 | 90.99 | 91.76 | 91.37 | 90.82 | 90.02 | 90.42 |
| 50000 | 91.05 | 89.12 | 90.07 | 91.95 | 92.62 | 92.28 | 91.50 | 90.87 | 91.18 |
| Placeholders Algorithm | | | | | | | | | |
| 10000 | 76.23 | 89.90 | 82.50 | 80.92 | 64.15 | 71.57 | 78.57 | 77.03 | 77.79 |
| 20000 | 85.80 | 89.42 | 87.57 | 89.87 | 83.76 | 86.71 | 87.83 | 86.59 | 87.21 |
| 30000 | 89.06 | 89.68 | 89.37 | 91.44 | 89.43 | 90.42 | 90.25 | 89.55 | 89.90 |
| 40000 | 90.03 | 89.08 | 89.55 | 91.97 | 91.42 | 91.69 | 91.00 | 90.25 | 90.62 |
| 50000 | 90.96 | 88.93 | 89.93 | 91.83 | 92.40 | 92.11 | 91.39 | 90.66 | 91.02 |
| Few shot Open set Recognition | | | | | | | | | |
| Known class recall | Known Class F1 | Novel Class Precision | Novel Class Recall | | | | | | |
| Table 9: Novelty Accommodation Stage: Dataset 3: Further Fine-tune using Sampled DT and DF . | | | | | | | | | |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
We have Limitations Section at the end of the paper after Conclusion A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Introduction
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** References
✓ B1. Did you cite the creators of artifacts you used?
We use the publicly available standard NLP datasets in this work with proper citations and references.
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
We use the publicly available standard NLP datasets in this work with proper citations and references.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
We do not collect any data for this research and repurpose the standard publicly available NLP
datasets
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We do not collect any data for this research and repurpose the standard publicly available NLP
datasets
✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
We do not collect any data for this research and repurpose the standard publicly available NLP
datasets
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
## C ✓ **Did You Run Computational Experiments?** Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 4
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
duan-etal-2023-cda | {CDA}: A Contrastive Data Augmentation Method for {A}lzheimer{'}s Disease Detection | https://aclanthology.org/2023.findings-acl.114 | Alzheimer{'}s Disease (AD) is a neurodegenerative disorder that significantly impacts a patient{'}s ability to communicate and organize language. Traditional methods for detecting AD, such as physical screening or neurological testing, can be challenging and time-consuming. Recent research has explored the use of deep learning techniques to distinguish AD patients from non-AD patients by analysing the spontaneous speech. These models, however, are limited by the availability of data. To address this, we propose a novel contrastive data augmentation method, which simulates the cognitive impairment of a patient by randomly deleting a proportion of text from the transcript to create negative samples. The corrupted samples are expected to be in worse conditions than the original by a margin. Experimental results on the benchmark ADReSS Challenge dataset demonstrate that our model achieves the best performance among language-based models. | # Cda: A Contrastive Data Augmentation Method For Alzheimer'S Disease Detection
Junwen Duan1, Fangyuan Wei1, Hongdong Li1, Tianming Liu2, Jianxin Wang1, Jin Liu1∗
1Hunan Provincial Key Lab on Bioinformatics, School of Computer Science and Engineering, Central South University 2School of Computing, The University of Georgia
{jwduan,weify9,hongdong,liujin06}@csu.edu.cn, [email protected] [email protected]
## Abstract
Alzheimer's Disease (AD) is a neurodegenerative disorder that significantly impacts a patient's ability to communicate and organize language. Traditional methods for detecting AD,
such as physical screening or neurological testing, can be challenging and time-consuming.
Recent research has explored the use of deep learning techniques to distinguish AD patients from non-AD patients by analysing the spontaneous speech. These models, however, are limited by the availability of data. To address this, we propose a novel contrastive data augmentation method, which simulates the cognitive impairment of a patient by randomly deleting a proportion of text from the transcript to create negative samples. The corrupted samples are expected to be in worse conditions than the original by a margin. Experimental results on the benchmark ADReSS Challenge dataset demonstrate that our model achieves the best performance among language-based models1.
## 1 Introduction
Alzheimer's Disease (AD) is a debilitating neurodegenerative disorder characterized by a progressive cognitive decline that is currently incurable. It accounts for up to 70% of all cases of dementia (Association, 2020). With an aging population, the prevalence of AD is on the rise. As symptoms of Alzheimer's disease can be mistaken for a variety of other cognitive disorders, traditional diagnostic methods, such as physical screening or neurological testing, can be challenging and time-consuming.
Furthermore, they require a certain degree of clinician expertise (Prabhakaran et al., 2018).
Consequently, the development of automatic detection methods for Alzheimer's disease is essential to the advancement of current medical treatment.
The use of machine learning methods to detect
∗*Corresponding author 1Our code is publicly available at https://github.com/
CSU-NLP-Group/CDA-AD.
AD or other diseases automatically has gained increasing attention in recent years (Luz et al., 2018; Martinc and Pollak, 2020; Liu et al., 2021; Yu et al.,
2023). Nevertheless, these approaches have limitations due to a lack of data and the generalizability of the models. Some studies have attempted to address this problem by model ensembling (Syed et al., 2021; Rohanian et al., 2021), multi-task learning (Li et al., 2022; Duan et al., 2022) or data augmentation (Woszczyk et al., 2022), but the improvement in performance is not always substantial.
Inspired by previous research that AD patients often have language disorders, such as difficulties in word finding and comprehension (Rohanian et al., 2021), we propose a novel Contrastive Data Augmentation (CDA) approach for automatic AD
detection. In our study, we simulated cognitive decline associated with Alzheimer's disease by randomly deleting words from the speech transcript to create negative samples. It is expected that the corrupted samples are in worse condition than the original due to the degradation of coherence and semantic integrity. Compared to traditional data augmentation methods, the CDA method expands the dataset scale and utilizes augmented data more effectively. We have demonstrated in our experiments on the ADReSS Challenge dataset that our approach uses linguistic features alone, is more generalizable to unseen data, and achieves superior results compared to strong baselines.
## 2 Data And Preprocessing
We use the data from the ADReSS Challenge
(Alzheimer's Dementia Recognition through Spontaneous Speech) (Luz et al., 2020), a subset of the DementiaBank's English Pitt Corpus (Becker et al.,
1994). It consists of recordings and transcripts of spoken picture descriptions from the Boston Diagnostic Aphasia Examination. During the examination, the subject is shown a picture and is asked to describe its content in their own language.
![1_image_0.png](1_image_0.png)
A total of 156 speech audio recordings and transcripts were obtained from English-speaking participants in the ADReSS dataset, with an equal number of participants (N=78) diagnosed with and not suffering from Alzheimer's disease, as shown in Table 1. Annotated transcripts in the dataset are in CHAT format (MacWhinney, 2014). Participants' ages and genders are also balanced to minimize the risk of bias in prediction. As some of the tokens in CHAT format are highly specific and are unlikely to be included in BERT tokenizers, we converted them into actual repetitions of words. We remain with only words, punctuation, and pauses for input into the BERT model. Our method uses only the transcripts from the dataset.
$$\begin{array}{l l}{{\overline{{\begin{array}{l l}{{\mathrm{non-AD}}}\\ {{\mathrm{~M}}}&{{\mathrm{~F}}}\\ {{\overline{{24}}}}&{{\overline{{30}}}}\end{array}}}}\\ {{\overline{{\begin{array}{l l}{{11}}&{{13}}\\ {{\overline{{35}}}}&{{43}}\end{array}}}}\end{array}$$
![1_image_1.png](1_image_1.png)
## 3 Methods
Figure 1 illustrates the framework of the proposed model. Firstly, for each transcript, we generate a number of augmented instances, which are then input to Text Encoder along with the original transcripts to obtain their corresponding representations. Then the classifier uses feature vectors acquired in Text Encoder and output a probability of being AD for each transcript and its corresponding augmented samples. We will discuss more details in the following subsections.
## 3.1 Text Encoder And Classifier
For fair comparisons with previous work (Woszczyk et al., 2022), the input text is encoded using the pre-trained BERT (bertbase-uncased) and represented by [CLS] after bert_pooler. Given a text sequence xi, we can get the encoded representations hi through the encoder.
$$h_{i}=\mathrm{BERT}(x_{i})$$
$$(1)$$
hi = BERT(xi) (1)
After obtaining the embedding of the transcript, we pass it through a simple linear classifier (Eq. 2)
to get final prediction scores, we use the commonly used binary cross-entropy (BCE) as our classification loss function, and the classification loss is denoted as LBCE (Eq. 3).
$$\begin{array}{c}{{\hat{y}_{i}=\sigma(W\hbar_{i}+b)}}\\ {{{\mathcal L}_{B C E}=-\sum_{i=1}^{N}y_{i}\log(\hat{y}_{i})}}\end{array}$$
(2) $\text{(3)}$ .
, where yi is the golden label for xi, W and b are trainable parameters in classifier.
## 3.2 Contrastive Data Augmentation
The performance of previous work is limited due to a lack of data availability. To alleviate this, we propose the contrastive data augmentation approach
(CDA) to replicate the cognitive decline associated with AD to expand the data size and improve the model robustness.
Negative Sample Generation Assuming that the dataset {xi, yi}Ni=1 contains N training samples.
We randomly delete a proportion of p ∈ [0, 1]
1820 words from each sample for nneg times to create nneg negative samples. After that we can get an augmented set {xi, yi, X ineg}Ni=1, where X ineg =
{x˜ji }
nneg j=1 are from xi. We can further augment the training set by repeating the whole process for naug times to get {xi, yi, X ineg}
N×naug i=1 and expand the data size by naug.
Positive Sample Generation Inspired by Gao et al. (2021), we resort to the randomness of dropout to construct positive samples. Dropout is a popular regularization technique due to its simplicity, but the randomness it introduces may hinder further improvements in the model's generalization performance. R-Drop (Wu et al., 2021) is proposed to fix the aforementioned problem by ensuring consistency between the outputs of two forward-pass with the same data. We deploy the R-Drop algorithm as a regularization method for generating positive instances. More specifically, the original sample xi is fed to the model twice at each step, and two corresponding predictions, denoted as yˆ1i and yˆ2i , are obtained. Then we try to minimize the bidirectional Kullback-Leibler (KL) divergence between them, which is denoted as LKL (Eq. 4):
$$\mathcal{L}_{KL}=\sum_{i=1}^{N}\frac{1}{2}[\mathcal{D}_{KL}(\hat{y}_{i}^{1}||\hat{y}_{i}^{2})+\mathcal{D}_{KL}(\hat{y}_{i}^{2}||\hat{y}_{i}^{1})]\tag{4}$$ **Contrastive Loss** It is reasonable to assume that
the negative samples are more likely to have AD
than the original ones in view of the degradation in semantic coherence and integrity. To achieve this, we regularize their differences to be larger than a margin m.
Particularly, the encoder receives xi and X ineg as input and outputs their corresponding embedding representations hi and Hineg. Then, their representations are fed to the classifier to get a final score yˆi and y˜
j i for xi and x˜ji , respectively. Their differences becomes Eq. 5:
$ \mathcal{L}_{margin}=\sum_{i=1}^{N}max(0,m-\hat{y}_{i}+\frac{\sum_{j=1}^{n_{neg}}\tilde{y}_{i}^{j}}{n_{neg}})$ (5) where $ m$ is the margin between positive and nega
, where m is the margin between positive and negative samples. The final loss is a combination of the above three loss terms LBCE, L*margin* and LKL.
$${\mathcal{L}}={\mathcal{L}}_{B C E}+\alpha{\mathcal{L}}_{m a r g i n}+\mu{\mathcal{L}}_{K L}$$
, where α and μ are hyperparameters that control the impact of positive and negative samples, and we set α = 0.5 and μ = 0.5 in our model.
## 4 Experiments
We employ 10-fold cross-validation to estimate the generalization error and adjust the model's parameter settings. The best setting is used to retrain models on the whole train set with five different random seeds and is then applied to the test set.
The results reported in this paper are the average of these models. The accuracy is used as the primary metric of task performance since the dataset is balanced. Recall, precision, and F1 are also reported for the AD class to provide a more comprehensive assessment. The hyperparameters in our model are: learning rate=1e-04, batch size=8, epoch=5, naug=3, nneg=3, p=0.3, *margin*=0.1.
## 4.1 Baselines
We compare our method with: 1) LDA, which is the challenge baseline linear discriminant analysis
(LDA) (Luz et al., 2020); 2) BERT, Balagopalan et al. (2021) compared BERT models with featurebased Models and obtained relatively better results using the former; 3) Fusion, Campbell et al. (2021)
fused the features of language and audio for classification; 4) SVM(BT RU)(Woszczyk et al., 2022), is the SVM model using Back-translation from Russian that achieves the best results over the BERT
model using Back-translation from German (BT
DE); 5) Ensemble methods, Sarawgi et al. (2020)
take a majority vote between three individual models. ERNIE0p and ERNIE3p are based on ERNIElarge (Sun et al., 2020) that use original transcripts and transcripts with pauses manually inserted for AD classification, respectively.
## 4.2 Results
The main experimental results are shown in Table 2. We can observe that the performance significantly improves when BERT is applied. Backtranslation data augmentation results in consistent improvements in both BERT (BT DE) and SVM
(BT RU), suggesting that data argumentation is a promising strategy. Our method achieves accuracy
(87.5%), precision (88.1%), and F1 score (86.9%),
outperforming the baseline method by a substantial margin, suggesting the effectiveness of cognitive impairment simulation in our method. By ensembling our models on five models with a majority vote mechanism, the performance improves significantly (4.2% absolute improvements in accuracy and 4% absolute improvements in F1 score, respectively) and achieves the best results among all
$\mathrm{f}$.
Methods Accuracy% Precision% Recall% F1%
LDA (Luz et al., 2020) 75.0 83.0 62.0 71.0
BERT (Balagopalan et al., 2021) 83.3 83.9 83.3 83.3
Fusion (Campbell et al., 2021) 83.3 80.1 **87.5** 84.0
BERT(BT DE) (Woszczyk et al., 2022) 84.0 - 75.0 - SVM(BT RU) (Woszczyk et al., 2022) 85.0 - 79.0 –
CDA (single-model, ours) **87.5 88.1** 83.3 **86.9**
Ensemble Methods Sarawgi et al. (2020) 83.0 83.0 83.0 83.0 ERNIE0p (Yuan et al., 2020) 85.4 94.7 75.0 83.7
ERNIE3p (Yuan et al., 2020) 89.6 95.2 **83.3** 88.9 CDA (ensembled, ours) **91.7 100.0 83.3 90.9**
Table 2: Results of our method and the baselines on the test set.
methods, outperforming even ERINE, a larger and knowledge-richer pre-trained model.
## 4.3 Ablation Study
To determine the effectiveness of the main modules, namely random deletion (RD) and regularized dropout (R-Drop), we removed them from the model one by one and tested their impact on performance in 10-fold cross-validation.
As shown in Table 3, by combining the contrastive data augmentation strategy with the base BERT, our model outperforms it by a large margin. However, when either module is removed, the model experiences a significant loss of performance, suggesting their positive contributions to the performance.
## 4.4 Parameter Analysis
| Methods | Accuracy% | Recall% |
|--------------|-------------|-----------|
| BERT | 72.3 | 71.9 |
| CDA (ours) | 77.5 | 75.2 |
| - w/o RD | 72.3 | 74.2 |
| - w/o R-Drop | 76.7 | 76.5 |
We also perform parameter analysis under the same experimental settings. As illustrated in Figure 2, we can see that a lower deletion rate leads to relatively higher accuracy, as the more words deleted, the less informative the transcript is. But a large margin negatively impacts both recall and accuracy.
As for naug, the model performs better regarding recall and accuracy when it is set to 3, and lower or higher values will affect the performance. The same conclusion applies to nneg, where a breakdown of the model is observed when nneg=7. The model performance also improves as the number of negative samples increases. However, this will take more computing resources.
![3_image_0.png](3_image_0.png)
## 5 Conclusion
Our experiments show the potential of contrastive data argumentation in improving the accuracy of models for Alzheimer's disease diagnosis. As a comparison to large, complex multimodal models, and other techniques of data augmentation, we obtain the best results by simulating cognitive impairment caused by AD. Despite the small size of the dataset, the results of this study provide a basis for further research into more complex issues.
## Limitations
The limitation of our study is that we only evaluated our model on a limited set of spoken language transcripts. We believe that additional attention should be given to features specific to AD patients, such as pauses and filler words in speech.
Furthermore, the lack of diversity in the data may also adversely affect the model's performance on unseen samples. Our model would benefit from further testing on a wider range of data, including different languages and different modalities, to see if it is capable of generalizing to other domains in the future.
## Ethics Statement
The dataset we use in this paper is from the public ADReSS challenge, which contains the minimum amount of personal information and restricts unauthorized access. Data usage and data sharing for ADReSS data has been conducted in accordance with the Ground Rules and Code of Ethics. Furthermore, it is important to note that the study does not include all possible diagnoses of Alzheimer's disease since it is based on transcript text data from an English-speaking cultural context. As this model was designed primarily for academic research, it is unlikely to provide a valid diagnosis in every situation and will be risky if applied to real-world clinical diagnosis situations.
## Acknowledgements
We thank anonymous reviewers for their helpful feedback. We thank Jiang Han and Guo Huai for their initial review and feedback for the earlier version of the paper. This work is supported in part by National Natural Science Foundation of China (Grant No.62172444, 62006251, U22A2041) and the Natural Science Foundation of Hunan Province (No. 2021JJ40783), Central South University Innovation-Driven Research Programme under Grant 2023CXQD018. We are grateful for resources from the High Performance Computing Center of Central South University.
## References
Alzheimers Association. 2020. What is dementia?
https://www.alz.org/alzheimers-dementia/
what-is-dementia, Last accessed on 2023-01-03.
Aparna Balagopalan, Benjamin Eyre, Jessica Robin, Frank Rudzicz, and Jekaterina Novikova. 2021. Com-
paring pre-trained and feature-based models for prediction of alzheimer's disease based on speech. *Frontiers in aging neuroscience*, 13:635945.
James T Becker, François Boiler, Oscar L Lopez, Judith Saxton, and Karen L McGonigle. 1994. The natural history of alzheimer's disease: description of study cohort and accuracy of diagnosis. *Archives of* neurology, 51(6):585–594.
Edward L Campbell, Laura Docío Fernández, Javier Jiménez Raboso, and Carmen García-Mateo.
2021. Alzheimer's dementia detection from audio and language modalities in spontaneous speech. In IberSPEECH.
Junwen Duan, Huai Guo, Min Zeng, and Jianxin Wang.
2022. Asnet: An adversarial sparse network for multi-task biomedical named entity recognition. In 2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pages 416–421.
Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021.
SimCSE: Simple contrastive learning of sentence embeddings. In *Proceedings of the 2021 Conference* on Empirical Methods in Natural Language Processing, pages 6894–6910, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Changye Li, David Knopman, Weizhe Xu, Trevor Cohen, and Serguei Pakhomov. 2022. Gpt-d: Inducing dementia-related linguistic anomalies by deliberate degradation of artificial neural language models.
arXiv preprint arXiv:2203.13397.
Zhaoci Liu, Zhiqiang Guo, Zhenhua Ling, and Yunxia Li. 2021. Detecting alzheimer's disease from speech using neural networks with bottleneck features and data augmentation. In *ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pages 7323–7327. IEEE.
Saturnino Luz, Sofia de la Fuente, and Pierre Albert.
2018. A method for analysis of patient speech in dialogue for dementia detection. arXiv preprint arXiv:1811.09919.
Saturnino Luz, Fasih Haider, Sofia de la Fuente, Davida Fromm, and Brian MacWhinney. 2020. Alzheimer's Dementia Recognition Through Spontaneous Speech:
The ADReSS Challenge. In *Proc. Interspeech 2020*,
pages 2172–2176.
Brian MacWhinney. 2014. The CHILDES project:
Tools for analyzing talk. Psychology Press.
Matej Martinc and Senja Pollak. 2020. Tackling the adress challenge: A multimodal approach to the automated recognition of alzheimer's dementia. In *INTERSPEECH*, pages 2157–2161.
Gokul Prabhakaran, Rajbir Bakshi, et al. 2018. Analysis of structure and cost in a longitudinal study of alzheimer's disease. *Journal of Health Care Finance*,
44(3).
Morteza Rohanian, Julian Hough, and Matthew Purver.
2021. Multi-modal fusion with gating using audio, lexical and disfluency features for alzheimer's dementia recognition from spontaneous speech. *arXiv* preprint arXiv:2106.09668.
Utkarsh Sarawgi, Wazeer Zulfikar, Nouran Soliman, and Pattie Maes. 2020. Multimodal Inductive Transfer Learning for Detection of Alzheimer's Dementia and its Severity. In *Proc. Interspeech 2020*, pages 2212–
2216.
Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. 2020. Ernie 2.0:
A continual pre-training framework for language understanding. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):8968–8975.
Zafi Sherhan Syed, Muhammad Shehram Shah Syed, Margaret Lech, and Elena Pirogova. 2021. Automated recognition of alzheimer's dementia using bagof-deep-features and model ensembling. *IEEE Access*, 9:88377–88390.
Dominika Woszczyk, Anna Hedlikova, Alican Akman, Soteris Demetriou, and Björn Schuller. 2022. Data Augmentation for Dementia Detection in Spoken Language. In *Proc. Interspeech 2022*, pages 2858–
2862.
Lijun Wu, Juntao Li, Yue Wang, Qi Meng, Tao Qin, Wei Chen, Min Zhang, Tie-Yan Liu, et al. 2021. R-drop:
Regularized dropout for neural networks. *Advances* in Neural Information Processing Systems, 34:10890–
10905.
Ying Yu, Junwen Duan, and Min Li. 2023. Fusion model for tentative diagnosis inference based on clinical narratives. *Tsinghua Science and Technology*,
28(4):686–695.
Jiahong Yuan, Yuchen Bian, Xingyu Cai, Jiaji Huang, Zheng Ye, and Kenneth Church. 2020. Disfluencies and fine-tuning pre-trained language models for detection of alzheimer's disease. In *INTERSPEECH*,
volume 2020, pages 2162–6.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section Limitaion.
✓ A2. Did you discuss any potential risks of your work?
Section Ethics Statement.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 2.
✓ B1. Did you cite the creators of artifacts you used?
Section 2.
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section Ethics Statement.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section Ethics Statement.
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Section 2.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 2.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 2.
## C ✓ **Did You Run Computational Experiments?** Section 3.
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Because the data size is small and the overall computational expenditure is minimal.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
zhu-etal-2023-disentangling | Disentangling Aspect and Stance via a {S}iamese Autoencoder for Aspect Clustering of Vaccination Opinions | https://aclanthology.org/2023.findings-acl.115 | Mining public opinions about vaccines from social media has been increasingly relevant to analyse trends in public debates and to provide quick insights to policy-makers. However, the application of existing models has been hindered by the wide variety of users{'} attitudes and the new aspects continuously arising in the public debate. Existing approaches, frequently framed via well-known tasks, such as aspect classification or text span detection, make direct usage of the supervision information constraining the models to predefined aspect classes, while still not distinguishing those aspects from users{'} stances. As a result, this has significantly hindered the dynamic integration of new aspects. We thus propose a model, namely Disentangled Opinion Clustering (DOC), for vaccination opinion mining from social media. DOC is able to disentangle users{'} stances from opinions via a disentangling attention mechanism and a Swapping-Autoencoder, and is designed to process unseen aspect categories via a clustering approach, leveraging clustering-friendly representations induced by out-of-the-box Sentence-BERT encodings and disentangling mechanisms. We conduct a thorough experimental assessment demonstrating the benefit of the disentangling mechanisms and cluster-based approach on both the quality of aspect clusters and the generalization across new aspect categories, outperforming existing methodologies on aspect-based opinion mining. | # Disentangling Aspect And Stance Via A Siamese Autoencoder For Aspect Clustering Of Vaccination Opinions
Lixing Zhu†, Runcong Zhao†, Gabriele Pergola‡, Yulan He**†,‡,§**
†Department of Computer Science, University of Warwick, UK
‡Department of Informatics, King's College London, UK
§The Alan Turing Institute, UK
{lixing.zhu,runcong.zhao,yulan.he}@kcl.ac.uk [email protected]
## Abstract
Mining public opinions about vaccines from social media has been increasingly relevant to analyse trends in public debates and to provide quick insights to policy-makers. However, the application of existing models has been hindered by the wide variety of users' attitudes and the new aspects continuously arising in the public debate. Existing approaches, frequently framed via well-known tasks, such as aspect classification or text span detection, make direct usage of the supervision information constraining the models to predefined aspect classes, while still not distinguishing those aspects from users' stances. As a result, this has significantly hindered the dynamic integration of new aspects. We thus propose a model, namely *Disentangled Opinion Clustering* (DOC), for vaccination opinion mining from social media. DOC is able to disentangle users' stances from opinions via a disentangling attention mechanism and a SwappingAutoencoder, and is designed to process unseen aspect categories via a clustering approach, leveraging *clustering-friendly* representations induced by out-of-the-box Sentence-BERT encodings and disentangling mechanisms. We conduct a thorough experimental assessment demonstrating the benefit of the disentangling mechanisms and cluster-based approach on both the quality of aspect clusters and the generalization across new aspect categories, outperforming existing methodologies on aspectbased opinion mining.
## 1 Introduction
Mining public opinions about vaccines from social media has been hindered by the wide variety of users' attitudes and the continuously new aspects arising in the public debate of vaccination (Hussain et al., 2021). The most recent approaches have adopted holistic frameworks built on morality analysis (Pacheco et al., 2022) or neural-based models predicting users' stances on different aspects of the online debate (Zhu et al., 2022). So far, these frameworks have been frequently framed via well-known tasks, such as aspect classification or text span detection, that use supervision to train text classifiers.
However, such a direct usage of the supervision information has constrained the models to predefined aspect classes and restricted their flexibility in generalising to opinions with aspects never seen before (e.g., new moral issues or immunity level).
To mitigate this limitation, some of the most promising approaches have been devised as supervised models generating *clustering-friendly representations* (Tao et al., 2021). These have recently shown promising results on open-domain tasks when combined with pre-trained language models (PLM) thanks to their flexibility, generalisation, and need for minimal tweaks (Reimers and Gurevych, 2019; Sircar et al., 2022). However, despite the improved capabilities in capturing the overall text semantics, existing models for text clustering (Miranda et al., 2018; Meng et al.,
2019; Shen et al., 2021; Zhang et al., 2021a), still struggles to distinguish between the mixed users' stances and aspects on vaccination, and as a result, they often generate clusters that do not reflect the novel aspects of interest. As an illustrating example, consider the tweets "mRNA vaccines are poison" and *"The Pfizer vaccine is safe"*, that the majority of existing methodologies are prone to cluster into different groups due to the opposite stances manifested, despite the fact that both of them are targeting safety issues.
To address the aforementioned problem, we posit that a model should be able to (i) disentangle the stance from the aspect discussed, and simultaneously (ii) use the generated representations in a framework (e.g., clustering) that ease the integration of aspects never seen before. We thus propose a novel representation learning approach, called the *Disentangled Opinion Clustering* (DOC) model, which performs disentan1827 gled learning (Mathieu et al., 2016) via text autoencoders (Bowman et al., 2016; Montero et al.,
2021), and generates *clustering-friendly* representations suitable for the integration of novel aspects1.
The proposed model, DOC, learns clusteringfriendly representations through a denoising autoencoder (Montero et al., 2021) driven by outof-the-box Sentence-BERT embeddings (Reimers and Gurevych, 2019), and disentangles stance from opinions by using the supervision signal to drive a disentangled cross-attention mechanism and a Swapping Autoencoder (Park et al., 2020).
We conducted an experimental assessment on two publicly available datasets on vaccination opinion mining, the Covid-Moral-Foundation
(CMF) (Pacheco et al., 2022) and the Vaccination Attitude Detection (VAD) corpora (Zhu et al.,
2022). We first assessed the quality of the disentangled representation in generating aspect-coherent clusters. Then, we measured the generalisation of the proposed approach via a cross-dataset evaluation by performing clustering on a novel dataset with unknown aspect categories. Finally, we showed the benefit of this approach on the traditional stance classification task, along with a report on the thorough ablation study highlighting the impact of each model component on the clustering quality and the degree of disentanglement of the generated representations. Our contributions can be summarized as follows:
- We introduce DOC, a Disentangled Opinion Clustering model to generate clustering-friendly representations, which distinguishes between users' stances and opinions in the vaccination debate and integrates newly arising aspects via a clustering approach.
- Unlike traditional aspect-based classification models, we outline a framework adopting limited supervised signals provided by few stance and aspect labels, functioning as inductive biases to generate clustering-friendly representations.
- We conduct a thorough experimental analysis on the two major publicly available datasets on vaccination opinion mining from social media, and demonstrate the benefit of the disentangling mechanisms on the quality of aspect clusters, the generalization across datasets with different as-
pect categories, and the traditional stance classification task.
## 2 Related Work
Sentence Bottleneck Representation Sentence representation learning typically aims to generate a fixed-sized latent vector that encodes a sentence into a low-dimensional space. In recent years, in the wake of the wide application of pre-trained language models (PLMs), several approaches have been developed leveraging the PLMs to encode sentence semantics. The most prevalent work is the SBERT (Reimers and Gurevych, 2019) that fine-tunes BERT (Devlin et al., 2019) on the SNLI
dataset (Bowman et al., 2015) through a siamese pooling structure. The learned representations are immediately applicable to a wide range of tasks, such as information retrieval and clustering, significantly reducing the effort required to generate the task-specific representations (Thakur et al., 2021).
More recently, Montero et al. (2021) presented a sentence bottleneck autoencoder, called AutoBot, that learns a latent code by reconstructing the perturbated text. Their model indicates the importance of topic labels as reconstruction objectives.
Disentangled Latent Representation Earlier works explored disentangled representation to facilitate domain adaptation (Bengio et al., 2013; Kingma et al., 2014; Mathieu et al., 2016). In recent years, John et al. (2019) generated disentangled representations geared to transfer holistic style such as tone and theme in text generation. Park et al. (2020) proposed the Swapping autoencoder to separate texture encoding from structure vectors in image editing. The input images are formed in pairs to induce the model to discern the variation
(e.g., structure) while retaining the common property (e.g., texture). However, recent studies show that disentanglement in the latent space is theoretically unachievable without access to some inductive bias (Locatello et al., 2019). It is suggested that local isometry between variables of interest is sufficient to establish a connection between the observed variable and the latent variable (Locatello et al., 2020a; Horan et al., 2021), even with few annotations (Locatello et al., 2020b). This is in line with (Reimers and Gurevych, 2019; Lu et al., 2022)
where contrastive pairs are leveraged for training, which illuminates our work to utilize labels and reconstruction of perturbed text to induce the disentanglement.
Text Clustering The recent development in neural architectures has reshaped clustering practices (Xie et al., 2016). For example, Zhang et al.
(2021b) leveraged transformer encoders for clustering over the user intents. Several methods utilised PLM embeddings to discover topics which were subsequently used for clustering news articles and product reviews (Huang et al., 2020; Meng et al.,
2022). Others exploited the neural components, i.e., the BiLSTM-CNN (Zhang et al., 2019), the CNN-Attention (Goswami et al., 2020) and the Self-Attention (Zhang et al., 2021c) to offer endto-end clustering. Zhang et al. (2021a) developed the Supporting Clustering with Contrastive Learning (SCCL) model by augmenting the disparity between short text. A notable work is DSClustering (Sircar et al., 2022), which extracts aspect phrases first then clusters the aspect embeddings. Outside of clustering methods, there is a surging interest in clustering-friendly representations (Tao et al., 2021). Yet, few methods cluster documents along a particular axis or provide disentangled representations to cluster over a subspace.
Vaccination Opinion Mining The task of vaccination opinion mining is commonly carried out on social media to detect user attitudes and provide insights to be used against the related 'infodemic' (Kunneman et al., 2020; Wang et al., 2021; Chandrasekaran et al., 2022; Zhao et al., 2023). Recent approaches rely on semantic matching and stance classification with extensions including human-in-the-loop protocols and text span prediction to scale to the growing amount of text (Pacheco et al., 2022; Zhu et al., 2022).
## 3 Methodology
We build our approach upon two vaccination opinion corpora (Pacheco et al., 2022; Zhu et al., 2022).
In both corpora, a small number of tweets are labelled, each of which is annotated with a stance label ('*pro-vaccine*', '*anti-vaccine*' and '*neutral*')
and a text span or an argumentative pattern denoting an aspect. For example, for the tweet, 'The Pfizer vaccine is safe.', its stance label is '*provaccine*' and the argumentative pattern is '*vaccine* safety'. Since vaccination opinions explode over time, supervised classifiers or aspect extractors would soon become outdated and fail to handle constantly evolving tweets. In an effort to mitigate this issue, we address the problem of vaccination opinion mining by learning disentangled stance and aspect vectors of tweets in order to cluster tweets along the aspect axis.
Our proposed model, called Disentangled Opinion Clustering (DOC), is shown in Figure 1. It is trained in two steps. In **unsupervised learning**
(Figure 1(a)), a tweet is fed into an autoencoder with DeBERTa as both the encoder and the decoder to learn the latent sentence vector z. Here, each tweet is mapped to two embeddings, the context embedding us which encodes the stance label information and the aspect embedding ua which captures the aspect information. Under unsupervised learning, these two embeddings are not distinguished. Together with the hidden representation of the input text, H, they are mapped to the latent sentence vector z by cross-attention. As the autoencoder can be trained on a large-scale unannotated tweets relating to vaccination, it is expected that z would capture the vaccine-related topics.
Then in the second step of **supervised learning**
(Figure 1(b)), the DeBERTa-based autoencoder is fine-tuned to learn the latent stance vector zs and the latent aspect vector za using the tweet-level annotated stance label and aspect text span (or the argumentative pattern '*vaccine safety*' in Figure 1(b)) as the inductive bias. Here, the latent stance vector zs is derived from us. It is expected that zs can be used to predict the stance label. On the other hand, the latent aspect vector za is derived from ua only and it can be used to generate the SBERTencoded aspect text span. Both zs and za, together with the hidden representation of the input text H,
are used to reconstruct the original text through the DeBERTa decoder. The training instances are organized in pairs since we use the idea of swapped autoencoder (shown in Figure 1(c)) to swap the aspect embedding of one tweet with that of another if both discuss the same aspect. The resulting latent vector can still be used to reconstruct the original tweet. In what follows, we describe the two steps, unsupervised and supervised learning, in detail.
Unsupervised Learning of Sentence Representation Due to the versatility of PLMs, sentence representations are usually derived directly from contextualised representations generated by the PLMs.
However, as has been previously discussed in Montero et al. (2021), sentence representations derived in this way cannot guarantee reliable reconstruction of the input text. Partly inspired by the use of autoencoder for sentence representation learning as in (Montero et al., 2021), we adopt the autoencoder
![3_image_0.png](3_image_0.png)
architecture to initially guide the sentence representation learning by fine-tuning it on vaccination tweets. Rather than RoBERTa (Liu et al., 2019), we adopt DeBERTa, a variant of BERT in which each word is represented using two vectors encoding its content and position. The attention weight of a word pair is computed as a sum of four attention scores calculated from different directions based on their content/position vectors, i.e., content-tocontent, content-to-position, position-to-content, and position-to-position. Instead of representing each word by a content vector and a position vector, we modify DeBERTa by representing an input sentence using two vectors, a context embedding us encoding its stance label information and an aspect embedding ua encoding its aspect information.
We will discuss later in this section how to perform disentangled representation learning with us and ua. During the unsupervised learning stage, we do not distinguish between us and ua and simply use u = [us,ua] to denote them.
More specifically, we train the autoencoder on an unannotated Twitter corpus with the masked token prediction as the training objective. The encoder applies the multi-head attention to clamp the hidden representations of the top layer of the pre-trained transformer. If we use H to denote the hidden representations, the multi-head attention can be expressed as:
$$\text{head}_{i}=\text{softmax}\left(\frac{\mathbf{u}W_{Q}(HW_{K})^{\top}}{\sqrt{d_{H}}}\right)HW_{V},\tag{1}$$ $$\mathbf{z}=[\text{head}_{1},\text{head}_{2},\ldots,\text{head}_{h}]W_{O},\tag{2}$$ where $H\in\mathbb{R}^{n\times d_{H}}$, $W_{Q}\in\mathbb{R}^{2d_{H}\times d_{K}},W_{K}\in\mathbb{R}^{d_{H}\times d_{K}}$, $W_{V}\in\mathbb{R}^{d_{H}\times d_{V}}$, $\text{head}_{i}\in\mathbb{R}^{d_{V}}$ and $W_{O}\in\mathbb{R}^{hd_{V}\times d_{\mathbf{z}}}$, $\mathbf{u}\in\mathbb{R}^{2d_{H}}$ is generated from a
WO ∈ R
fully-connected layer over the hidden vectors. The bottleneck representation z is supposed to encode the semantics of the whole sentence.
The transformer decoder comprises n layers of cross-attention such that the output of the previous layer is processed by a gating mechanism (Hochreiter and Schmidhuber, 1997). The recurrence is repeated n times to reconstruct the input, where n denotes the token length of the input text.
Injecting Inductive Biases by Disentangled Attention Recent work on disentanglement learning suggested unsupervised disentanglement is impossible without inductive bias (Locatello et al.,
2020b). In the datasets used in our experiments, there are a small number of labelled tweets. We can use the tweet-level stance labels and the annotated aspect text spans as inductive bias. Here, the disentangled attention of DeBERTa is utilized to mingle different factors. Assuming each sentence is mapped to two vectors, the context vector us encoding its stance label information and the aspect vector ua encoding its aspect information, we can then map us to a latent stance vector zs which can be used to predict the stance label, and map ua to a latent aspect vector za which can be used to reconstruct the aspect text span. We use the cross-attention between us and ua to reconstruct the original input sentence.
Stance Classification Let hCLS denote the hidden representation of the [CLS] token, the stance bias is injected by classification over the stance categories:
$$\mathbf{z}_{s}=\text{softmax}\left(\frac{\mathbf{u}_{s}W_{q,s}(\mathbf{h}_{\text{CL}5}W_{\mathbf{k},\text{CL}5})^{\top}}{\sqrt{d_{H}}}\right)\mathbf{h}_{\text{CL}5}W_{\mathbf{v},\text{CL}5},\tag{3}$$ $$\hat{y}_{s}=\text{softmax}(\mathbf{z}_{s}W),\quad\mathcal{L}_{s}=-y_{s}^{(i)}\log\hat{y}_{s}^{(i)}.\tag{4}$$
Essentially, we use us as query and hCLS as key and value to derive zs, which is subsequently fed to a softmax layer to predict a stance label yˆs. The objective function is a cross-entropy loss between the true and the predicted labels.
Aspect Text Span Reconstruction We assume ua encoding the sentence-level aspect information and use self-attention to derive the latent aspect representation za. To reconstruct the aspect text span from za, we use the embedding generated by SBERT (Reimers and Gurevych, 2019) as the targeted aspect span, since SBERT has been empirically shown achieving the state-of-the-art on Semantic Textual Similarity tasks. Those clusteringfriendly representations, if they encode the argumentative patterns or aspect spans alone, are strong inductive biases in the axis of aspects.
Specifically, the sentence embedding of the aspect expression is generated by a Gaussian MLP
decoder (Kingma and Welling, 2014):
$$\mathbf{z}_{a}=\text{softmax}\left(\frac{\mathbf{u}_{a}W_{q,a}(\mathbf{u}_{a}W_{k,a})^{\top}}{\sqrt{d_{H}}}\right)\mathbf{u}_{a}W_{\mathbf{v},a},\tag{5}$$ $$\mathcal{L}_{a}=-\log\mathcal{N}(\mathbf{y}_{a};\text{MLP}_{\mu}(\mathbf{z}_{a}),\text{MLP}_{\sigma}(\mathbf{z}_{a})\mathbf{I}),\tag{6}$$
where xa denotes the aspect text span in the original input sentence, ya is the ground-truth aspect text span embedding produced by ya =
SBERT(xa), whose value is used for computing the Gaussian negative log-likelihood loss2.
Input Text Reconstruction To reconstruct the original input text, we need to make use of both the latent stance vector zs and the latent aspect vector za. Here we use the cross attention of these two vectors to derive the content vector zc.
2https://pytorch.org/docs/stable/generated/-ch.nn.GaussianNLLloss.html
Qc = uWq,c, Kc = HWk,c, Vc = HWv,c, Qs = usWq,s, Ks = usWk,s, Qa = uaWq,a, Ka = uaWk,a, aj = QcK c⊤ j + QcK ⊤ s + K c j Qs + QcK ⊤ a + K c j Qa headi = softmax a √5dH HWv,c, zc = [head1, head2, . . . , headh]WO, (7)
where u = [us, ua], aj is the j-th element of a, and Kc j represents the j-th row of Kc. The resulting zc is the content representation for reconstructing the original sentence.
Disentanglement of Aspect and Stance Although the inductive biases, i.e., the tweet-level stance label and the annotated aspect span, are used to learn the latent stance vectors zs and the aspect vectors za , there could still be possible dependencies between the two latent variables. To further the disentanglement, we propose to swap the learned aspect embeddings of two tweets discussing the same aspect in Siamese networks. We draw inspiration from the Swapping Autoencoder (Park et al.,
2020) where a constituent vector of a Generative Adversarial Network (GAN) is swapped with that produced by another image. The original swapping autoencoder was designed for image editing and required a patch discriminator with texture cropping to the corresponding disentangled factors with the desired properties. In our scenario, such alignment is instead induced by tweets discussing the same aspect.
We create pairs of tweets by permutations within the same aspect group {x A, x B}A,B∈Gk,A̸=B.
Here, by abuse of notation, we use k to denote the k-th aspect group, Gk. The groups are identified by tweets with the same aspect label, regardless of their stances. We sketch the structure of pair-wised training in Figure 1(c). The tweets are organized in pairs and a bottleneck representation is obtained for each tweet:
$${\bf z}^{A}={\rm enc}({\bf z}^{A}),\quad{\bf z}^{B}={\rm enc}({\bf z}^{B}).\tag{8}$$
We would like z A to disentangle into latent factors, i.e., the variation in a factor of z A is associated with a change in x A (Locatello et al., 2020a). Unlike the majority of works (Zhang et al., 2021d) that directly splits z A in the latent space, we assume that the entangled vector is decomposed by a causal network. We train a vector u = [us, ua] to trigger the activation of the networks (i.e., the self-attentions in Eq. 3-Eq. 7). The outputs of the networks are independent components that encode the desiderata. If zs and za are parameterized independent components triggered by us and ua respectively, the substitution of u B
a with u A a can be regarded as soft exchanges between z A
a and z B
a
.
We thus substitute u B
a with u A a to cause changes in z B
c
. This substitution will also be reflected by changes in z B a
. In practice, we train on all permutations with the same aspect group, regardless of the stance. The reconstruction loss for each latent factor (i.e., stance and aspect) is calculated once to balance the number of training examples unless it is content text generated from the swapped bottleneck representation.
Formally, the swapping autoencoder presented in Figure 1(c) can be expressed as
Q B s = u B s Wq,s, KB s = u B s Wk,s, Q A a = u A a Wq,a, KA a = u A a Wk,a, aj = QcK c⊤ j + QcK B⊤ s + K c j Q B s + QcK A a ⊤+ K c j Q A a , headi = softmax a √5dH HWv,c, z B c = [head1, head2, . . . , headh]WO, z B s = softmax u B s Wq,s(KCLS) ⊤ √dH VCLS, z B a = softmax Q A a (KA a ) ⊤ √dH u A a Wv,a,
where z B
cis input to the decoder for the reconstruction of x B. Note that the above equations are specially used in the swapping autoencoder for the computation of z B. If there is no substitution in the latent space, the above equations will not be calculated. Given L
B
c = dec(z B
c), the final objective function is written as
$${\mathcal{L}}={\mathcal{L}}_{c}^{A}+\lambda_{s}{\mathcal{L}}_{s}^{A}+\lambda_{a}{\mathcal{L}}_{a}^{A}+\lambda_{B}{\mathcal{L}}_{c}^{B}\,,$$
c , (9)
where λs, λa and λB are hyper-parameters controlling the importance of each desirable property.
In our experiments, we choose λs = λa = 1 and λB = 0.5.
## 4 Experiments
Datasets We conduct our experimental evaluation on two publicly available Twitter datasets about the Covid-19 vaccination: the Covid-MoralFoundation (CMF) (Pacheco et al., 2022) and the Vaccination Attitude Detection (VAD) corpus (Zhu et al., 2022). CMF is a tweet dataset focused on the Covid-19 vaccine debates, where each tweet is assigned an argumentative pattern. VAD consists of 8 aspect categories further refined by vaccine
| Aspect Group | Pro-Vax | Anti-Vax | Neutral |
|------------------------|-----------|------------|-----------|
| CMF | | | |
| Care/Harm | 70 | 11 | 2 |
| Fairness/Cheating | 25 | 18 | 13 |
| Loyalty/Betrayal | 25 | 0 | 5 |
| Authority/Subversion | 20 | 46 | 13 |
| Purity/Degradation | 2 | 15 | 0 |
| Liberty/Oppression | 6 | 62 | 5 |
| Non-moral | 167 | 47 | 41 |
| VAD | | | |
| Health Institution | 400 | 84 | 36 |
| Personal Experience | 381 | 16 | 3 |
| Vaccines Save Lives | 12 | 1 | 0 |
| (Adverse) Side Effects | 179 | 256 | 63 |
| Immunity Level | 433 | 113 | 52 |
| Economic Effects | 23 | 12 | 5 |
| Personal Freedom | 5 | 18 | 7 |
| Moral Attitudes | 5 | 43 | 2 |
bands. Similar to the argumentative pattern in the CMF dataset, each tweet is characterised by a text span indicating its aspect. The dataset statistics are reported in Table 1, with examples shown in A.1. The train/test split follows 4 : 1. For the unsupervised pre-training of sentence bottleneck representations, we combine the unlabelled Covid19 datasets from both CMF3and VAD4repositories.
The final dataset consists of 4.37 million tweets.
Baselines We employ 5 baseline approaches:
SBERT5, AutoBot6, DS-Clustering, VADet, and SCCL7, of which SBERT and AutoBot are out-ofthe-box sentence embedding generators. VADet is specialised to learn disentangled representations.
However, it is noteworthy that even though it employs DEC (Xie et al., 2016), the resulting representations are unsuitable for distance-based clustering.
SCCL performs joint representation learning and document clustering. DS-Clustering is a pipeline approach that predicts a text span and employs SBERT to generate an aspect embedding. For clustering-friendly representation learning methods, we examine their performance using k-means and k-medoids (Leonard and Peter, 1990), and 3https://gitlab.com/mlpacheco/
covid-moral-foundations 4https://github.com/somethingx1202/VADet 5https://github.com/UKPLab/
sentence-transformers 6https://github.com/ivanmontero/autobot 7https://github.com/amazon-research/sccl the Agglomerative Hierarchical Clustering (AHC).
The comparison involves three tasks: tweet clustering based on aspect categories (intra- and crossdatasets), and tweet-level stance classification. For stance classification, we employ RoBERTa and DeBERTa, and use their averaged embeddings for clustering.
Evaluation Metrics First, we use Clustering Accuracy (CA) and Normalized Mutual Information
(NMI) to evaluate the quality of clusters in line with (Shaham et al., 2018; Tao et al., 2021). NMI
is defined as NMI = 2×I(y; ˆy)
/
H(y)+H(ˆy)
,
where I(y; ˆy) denotes the mutual information between the ground-truth labels and the predicted labels, H(·) denotes their entropy. Then we employ BERTScore (Zhang et al., 2020) to evaluate the performance of models in clustering in the absence of ground-truth cluster labels. BERTScore is a successor of Cosine Similarity (John et al., 2019) that measures the sentence distance by calculating the cross distance between their corresponding word embeddings. We follow Bilal et al. (2021) to compute the averaged BERTScore as
$$\text{AvgBS}=\frac{1}{K}\sum_{k=1}^{K}\frac{1}{\binom{|G_{k}|}{2}}\sum_{\begin{subarray}{c}i,j\in G_{k}\\ i<j\end{subarray}}\text{BS}(\text{twet}_{i},\text{twet}_{j}),\tag{10}$$
where |Gk| is the size of the k-th group or cluster. We report the average performance for all the models. As a quantitative evaluation metric for disentanglement, we use the Mean Correlation Coefficient (MCC). We refer the readers to A.3 for qualitative results.
Clustering-Friendly Representation We first show the advantages of disentangled representations in clustering. With the representations obtained from SBERT and AutoBot, we employ kmeans to perform clustering. Since the similarity between sentences in SBERT is measured by cosine similarity which is less favorable for k-means algorithm, we also use k-medoids to ensure a fair comparison. The other baseline approaches are run with their default settings. We assign the aspect labels to the predicted clusters with the optimal permutation such that the permutation of {1*, . . . , K*}
yields the highest accuracy score, where K denotes the total number of clusters. For the CMF dataset, we set K = 7, and on VAD K = 8.
Table 2 lists the performance of baseline methods on all the tasks and datasets. We see consistent
| CMF | VAD | | | | | |
|-------------------|-------|------|------|------|------|------|
| Models | CA | NMI | Avg | | | |
| SBERT-k-means | 49.2 | 47.6 | 18.2 | 60.5 | 58.3 | 19.2 |
| SBERT-k-medoids | 50.8 | 48.1 | 18.5 | 62.1 | 60.1 | 19.5 |
| SBERT-AHC | 51.7 | 48.5 | 18.9 | 64.4 | 61.2 | 20.9 |
| AutoBot-k-means | 49.2 | 47.4 | 18.5 | 62.8 | 60.4 | 20.1 |
| AutoBot-k-medoids | 52.5 | 49.5 | 19.5 | 65.6 | 62.5 | 20.7 |
| AutoBot-AHC | 52.5 | 48.5 | 18.9 | 63.5 | 60.8 | 20.5 |
| DS-C-k-means | 50.0 | 47.7 | 18.5 | 63.5 | 60.5 | 20.7 |
| DS-C-k-medoids | 52.5 | 48.3 | 18.8 | 64.7 | 61.9 | 21.3 |
| DS-C-k-AHC | 50.8 | 47.8 | 18.6 | 64.4 | 61.5 | 21.7 |
| VADet | 51.7 | 47.9 | 18.0 | 65.4 | 61.4 | 20.7 |
| SCCL | 48.3 | 46.9 | 18.2 | 63.2 | 60.8 | 19.9 |
| RoBERTa-k-means | 35.0 | 35.2 | 15.0 | 45.8 | 46.6 | 15.7 |
| DeBERTa-k-means | 35.8 | 37.1 | 15.2 | 47.7 | 47.4 | 16.2 |
| DOC-k-means | 51.7 | 47.8 | 18.5 | 64.2 | 60.7 | 20.3 |
| DOC-k-medoids | 54.2 | 51.0 | 20.7 | 66.7 | 63.1 | 21.4 |
| DOC-AHC | 52.5 | 49.1 | 19.1 | 66.7 | 63.6 | 22.8 |
improvements across all the evaluation metrics using our proposed DOC. When compared with endto-end methods (i.e., VADet and SCCL) whose intermediate representations cannot be used to calculate a distance, the disparity depends on DOC's clustering approaches employed. On CMF, VADet outperforms SCCL. But DOC gives superior performance overall regardless of the clustering approaches used, showing the flexibility of the DOC
representations. In comparisons against representation learning methods, DOC takes the lead as long as it is attached with competent clustering algorithms. This shows the benefit of clustering with disentangled representations since the clustering algorithm will no longer obfuscate the stance polarities and the aspect categories. DOC achieves higher scores on the VAD dataset compared to CMF, with more prominent improvement over the baselines, which may be credited to the increased size of the dataset. When DOC is evaluated with different clustering algorithms, k-medoids excels on CMF,
while AHC outperforms the others on VAD, showing that cosine similarity is more appropriate for distance calculation since the k-means algorithm relies on Euclidean distance.
Cross-Dataset Evaluation In this context, the most interesting property of clustering-friendly representations is their ability to perform clustering in novel datasets whose categories are unknown in advance. To assess this, we use the models trained on CMF to perform clustering on VAD, and repeat the process vice versa. We specify the number of
| VAD → CMF | CMF → VAD | | | | | |
|-------------------|-------------|------|--------|------|------|--------|
| Models | CA | NMI | Avg BS | CA | NMI | Avg BS |
| SBERT-AHC | 51.6 | 49.8 | 19.3 | 52.4 | 50.5 | 17.9 |
| AutoBot-k-medoids | 53.1 | 50.6 | 20.1 | 53.7 | 51.0 | 18.1 |
| DS-C-k-medoids | 54.1 | 51.2 | 20.2 | 54.9 | 52.4 | 19.0 |
| VADet | 53.5 | 50.1 | 19.6 | 55.2 | 52.8 | 19.3 |
| SCCL | 48.6 | 47.0 | 18.5 | 53.6 | 51.6 | 18.5 |
| DOC-k-medoids | 55.3 | 51.9 | 21.7 | 56.2 | 53.8 | 19.5 |
| DOC-AHC | 53.5 | 50.4 | 19.8 | 55.8 | 53.7 | 19.2 |
| Models | CMF | VAD | | |
|----------|----------|----------|----------|---------|
| Micro F1 | Macro F1 | Micro F1 | Macro F1 | |
| RoBERTa | 72.3±.5 | 71.2±.4 | 76.7±.1 | 75.9±.1 |
| DeBERTa | 74.0±.6 | 73.5±.6 | 77.8±.2 | 76.8±.2 |
| DOC-AHC | 73.5±.6 | 72.7±.6 | 78.0±.2 | 76.8±.2 |
clusters as 7 and 8, respectively. The alignment between the clustered groups and gold labels is solved by the Hungarian algorithm. Note that direct aspect classification across datasets would not be possible since an accurate mapping between the two sets of classes cannot be established. Table 3 reports the performance of cross-dataset clustering. Our metrics of interest are still CA, NMI and averaged BERTScore. All the methods show a performance drop on VAD overall, while the performance on CMF turns out to be a bit higher. DOC-k-medoids achieved competitive results across the datasets, demonstrating that clustering-friendly representations disentangle the opinions and, as a result, can integrate unknown aspects.
Stance Classification We report in Table 4 the results of DOC, RoBERTa and DeBERTa. For DOC,
we only report DOC-AHC since stance labels are by-products of clustering-friendly representations.
We see the DOC performance on CMF close to that of DeBERTa, and that the improvement on VAD
is marginal. This may be attributed to the absence of the swapping operation on zs, and therefore the stance latent vector may contain other semantics or noise. Nevertheless, DOC is still preferred over DeBERTa considering its significant performance gain over DeBERTa on aspect clustering.
Ablations Study We study the effects by taking away components of different functionality in disentanglement, and experiment with different
| Model | CMF | VAD | | |
|---------------------------|-------|-------|-------|------|
| CA | AvgBS | CA | AvgBS | |
| Component | | | | |
| DOC-k-means | 51.7 | 18.5 | 64.2 | 20.3 |
| w/o pre-trained LM | 43.3 | 16.2 | 48.4 | 16.7 |
| w/o inductive bias | 50.0 | 18.0 | 62.3 | 19.2 |
| w/o swapped codes | 50.8 | 17.8 | 62.8 | 19.0 |
| Choice of Context Vectors | | | | |
| MLP | 51.7 | 18.5 | 64.2 | 20.3 |
| CLS | 50.0 | 17.6 | 63.2 | 19.5 |
| MEAN | 48.3 | 17.4 | 60.7 | 18.7 |
choices of context vectors, i.e., us and ua. The results are shown in Table 5. We see a significant performance drop without loading the pre-trained weights for the language model. The removal of inductive biases and the swapped autoencoder both hamper the clustering of the model across the metrics. The performance gap is more obvious without the inductive bias, which we attribute to the weaker supervision induced by swapping the latent codes.
Ablating choices of context vectors show the superiority of the MLP strategy. In contrast, the performance of the context vector generated by mean pooling is rather poor. It shows that the context vector produced by mean-pooling can hardly trigger the disentanglement of the hidden semantics.
![7_image_0.png](7_image_0.png)
Evaluation of Disentangled Representations As with the nonlinear ICA community (Khemakhem et al., 2020), we use Mean Correlation Coefficient (MCC) to quantify the extent to which DOC managed to learn disentangled representations. Here, the Point-Biserial Correlation Coefficient between *dist*(za, z¯
k a) (i.e., the distance between the aspect vector and the centroid of cluster k) and Y (i.e., the dichotomous variable indicating whether it belongs to or not belongs to group k in groundtruth) is chosen to measure the isometry between za and k. Notice that we specify *dist* as Euclidean Distance here. However, isometry does not hinge on the Euclidean Distance, and it could be easily substituted with Cosine Similarity, in which case the mean is no longer the best estimation for the cluster center and would be replaced by the medoid of cluster k. The clustering method would be k-medoids accordingly.
For each cluster k ∈ {1, 2*, . . . , K*}, we calculate the correlation coefficient between *dist*(za, z¯
k a)
and Y . We then obtain MCC by averaging the correlation coefficients. A high MCC indicates that the group identity of a data point is closely associated with the geometric position of its za in the latent space, which means that za captures the group information. The results are shown in Figure 2. We observe consistent improvement over the sentence representation models. DS-Clustering is able to encode tweets into aspect embeddings. Nevertheless, its distance between aspect latent vectors is a weaker indicator for group partition compared with that of DOC, suggesting that za discovered by DOC
better captures the difference between aspects.
## 5 Conclusion
In this work, we introduced DOC, a *Disentangled* Opinion Clustering model for vaccination opinion mining from social media. DOC is able to disentangle users' stances from opinions via a disentangling attention mechanism and a swap-autoencoder.
It was designed to process unseen aspect categories thanks to the clustering approach, leveraging clustering-friendly representations induced by outof-the-box Sentence-BERT encodings and the disentangling mechanisms. A thorough experimental assessment demonstrated the benefit of the disentangling mechanism on the quality of aspect-based clusters and the generalization capability across datasets with different aspect categories outperforming existing approaches in terms of generalisation and coherence of the generated clusters.
## 6 Limitations
There are a few limitations we would like to address. First of all, the number of clusters needs manual configuration. This is a limitation of the clustering algorithms (Xie et al., 2016) since we need to set a threshold for convergence, which consequentially pinpoints k. An expedient alternative is to analyse the dataset for the realistic settings or probe into k for the optimal setup, which is, however, beyond the scope of this paper. Another limitation is the pre-requisite for millions of unannotated data. The autoencoder needs enormous data to learn bottleneck representations. Its performance would be hindered without access to abundant corpora. Lastly, the performance of the acquired clustering-friendly representations depends on the similarity metric chosen. Efforts need to be made to find the best option, whether it is Euclidean distance or cosine similarity etc.
## Acknowledgements
This work was supported in part by the UK Engineering and Physical Sciences Research Council
(grant no. EP/T017112/1, EP/V048597/1), YH is supported by a Turing AI Fellowship funded by the UK Research and Innovation (EP/V020579/2).
This work was conducted on the UKRI/EPSRC
HPC platform, Avon, hosted in the University of Warwick's Scientific Computing Group.
## References
Yoshua Bengio, Aaron C. Courville, and Pascal Vincent.
2013. Representation learning: A review and new perspectives. *IEEE Trans. Pattern Anal. Mach. Intell.*,
35(8):1798–1828.
Iman Munire Bilal, Bo Wang, Maria Liakata, Rob Procter, and Adam Tsakalidis. 2021. Evaluation of thematic coherence in microblogs. In *Proceedings of the* 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6800–6814, Online. Association for Computational Linguistics.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference.
In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics.
Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. 2016.
Generating sentences from a continuous space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 10–21, Berlin, Germany. Association for Computational Linguistics.
Ranganathan Chandrasekaran, Rashi Desai, Harsh Shah, Vivek Kumar, and Evangelos Moustakas. 2022. Examining public sentiments and attitudes toward covid19 vaccination: Infoveillance study using twitter posts. *JMIR Infodemiology*, 2(1):e33909.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Koustava Goswami, Rajdeep Sarkar, Bharathi Raja Chakravarthi, Theodorus Fransen, and John P. McCrae. 2020. Unsupervised deep language and dialect identification for short texts. In *Proceedings of* the 28th International Conference on Computational Linguistics, pages 1606–1617, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural computation*, 9(8):1735–
1780.
Daniella Horan, Eitan Richardson, and Yair Weiss. 2021.
When is unsupervised disentanglement possible? In Advances in Neural Information Processing Systems, volume 34, pages 5150–5161. Curran Associates, Inc.
Jiaxin Huang, Yu Meng, Fang Guo, Heng Ji, and Jiawei Han. 2020. Weakly-supervised aspect-based sentiment analysis via joint aspect-sentiment topic embedding. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6989–6999, Online. Association for Computational Linguistics.
Amir Hussain, Ahsen Tahir, Zain Hussain, Zakariya Sheikh, Mandar Gogate, Kia Dashtipour, Azhar Ali, and Aziz Sheikh. 2021. Artificial intelligence–
enabled analysis of public attitudes on facebook and twitter toward covid-19 vaccines in the united kingdom and the united states: Observational study. J
Med Internet Res, 23(4):e26627.
Vineet John, Lili Mou, Hareesh Bahuleyan, and Olga Vechtomova. 2019. Disentangled representation learning for non-parallel text style transfer. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 424–434, Florence, Italy. Association for Computational Linguistics.
Ilyes Khemakhem, Diederik Kingma, Ricardo Monti, and Aapo Hyvarinen. 2020. Variational autoencoders and nonlinear ica: A unifying framework. In *Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics*, volume 108 of *Proceedings of Machine Learning Research*,
pages 2207–2217. PMLR.
Diederik P. Kingma, Danilo J. Rezende, Shakir Mohamed, and Max Welling. 2014. Semi-supervised learning with deep generative models. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS'14, page 3581–3589, Cambridge, MA, USA. MIT Press.
Diederik P. Kingma and Max Welling. 2014. AutoEncoding Variational Bayes. In *2nd International* Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings.
Florian Kunneman, Mattijs Lambooij, Albert Wong, Antal van den Bosch, and Liesbeth Mollema. 2020.
Monitoring stance towards vaccination in twitter messages. *BMC medical informatics and decision making*, 20(1):1–14.
Kaufman Leonard and J Rousseeuw Peter. 1990. Finding groups in data: an introduction to cluster analysis. Probability and Mathematical Statistics. Applied Probability and Statistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach.
Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Raetsch, Sylvain Gelly, Bernhard Schölkopf, and Olivier Bachem. 2019. Challenging common assumptions in the unsupervised learning of disentangled representations. In *Proceedings of the 36th International Conference on Machine Learning*, volume 97 of *Proceedings of Machine Learning Research*, pages 4114–4124. PMLR.
Francesco Locatello, Ben Poole, Gunnar Raetsch, Bernhard Schölkopf, Olivier Bachem, and Michael Tschannen. 2020a. Weakly-supervised disentanglement without compromises. In *Proceedings of the* 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 6348–6359. PMLR.
Francesco Locatello, Michael Tschannen, Stefan Bauer, Gunnar Rätsch, Bernhard Schölkopf, and Olivier Bachem. 2020b. Disentangling factors of variations using few labels. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Junru Lu, Xingwei Tan, Gabriele Pergola, Lin Gui, and Yulan He. 2022. Event-centric question answering via contrastive learning and invertible event transformation. In *Findings of the Association for Computational Linguistics: EMNLP 2022*, pages 2377–2389, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Michael F Mathieu, Junbo Jake Zhao, Junbo Zhao, Aditya Ramesh, Pablo Sprechmann, and Yann LeCun.
2016. Disentangling factors of variation in deep representation using adversarial training. In *Advances in*
Neural Information Processing Systems, volume 29.
Curran Associates, Inc.
Yu Meng, Jiaming Shen, Chao Zhang, and Jiawei Han.
2019. Weakly-supervised hierarchical text classification. *Proceedings of the AAAI Conference on* Artificial Intelligence, 33(01):6826–6833.
Yu Meng, Yunyi Zhang, Jiaxin Huang, Yu Zhang, and Jiawei Han. 2022. Topic discovery via latent space clustering of pretrained language model representations. In *Proceedings of the ACM Web Conference* 2022, WWW '22, page 3143–3152, New York, NY,
USA. Association for Computing Machinery.
Sebastião Miranda, Arturs Znoti ¯ n,š, Shay B. Cohen, and Guntis Barzdins. 2018. Multilingual clustering of streaming news. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language* Processing, pages 4535–4544, Brussels, Belgium.
Association for Computational Linguistics.
Ivan Montero, Nikolaos Pappas, and Noah A. Smith.
2021. Sentence bottleneck autoencoders from transformer language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1822–1831, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Maria Pacheco, Tunazzina Islam, Monal Mahajan, Andrey Shor, Ming Yin, Lyle Ungar, and Dan Goldwasser. 2022. A holistic framework for analyzing the COVID-19 vaccine debate. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 5821–5839, Seattle, United States. Association for Computational Linguistics.
Taesung Park, Jun-Yan Zhu, Oliver Wang, Jingwan Lu, Eli Shechtman, Alexei Efros, and Richard Zhang.
2020. Swapping autoencoder for deep image manipulation. In *Advances in Neural Information Processing Systems*, volume 33, pages 7198–7211. Curran Associates, Inc.
Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.
Uri Shaham, Kelly P. Stanton, Henry Li, Ronen Basri, Boaz Nadler, and Yuval Kluger. 2018. Spectralnet:
Spectral clustering using deep neural networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings.
OpenReview.net.
Jiaming Shen, Wenda Qiu, Yu Meng, Jingbo Shang, Xiang Ren, and Jiawei Han. 2021. TaxoClass: Hierarchical multi-label text classification using only class names. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4239–4249, Online. Association for Computational Linguistics.
Prateek Sircar, Aniket Chakrabarti, Deepak Gupta, and Anirban Majumdar. 2022. Distantly supervised aspect clustering and naming for E-commerce reviews.
In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies:
Industry Track, pages 94–102, Hybrid: Seattle, Washington + Online. Association for Computational Linguistics.
Yaling Tao, Kentaro Takagi, and Kouta Nakata. 2021.
Clustering-friendly representation learning via instance discrimination and feature decorrelation. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
Nandan Thakur, Nils Reimers, Johannes Daxenberger, and Iryna Gurevych. 2021. Augmented SBERT: Data augmentation method for improving bi-encoders for pairwise sentence scoring tasks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 296–310, Online.
Association for Computational Linguistics.
Qingyun Wang, Manling Li, Xuan Wang, Nikolaus Parulian, Guangxing Han, Jiawei Ma, Jingxuan Tu, Ying Lin, Ranran Haoran Zhang, Weili Liu, Aabhas Chauhan, Yingjun Guan, Bangzheng Li, Ruisong Li, Xiangchen Song, Yi Fung, Heng Ji, Jiawei Han, Shih-Fu Chang, James Pustejovsky, Jasmine Rah, David Liem, Ahmed ELsayed, Martha Palmer, Clare Voss, Cynthia Schneider, and Boyan Onyshkevych.
2021. COVID-19 literature knowledge graph construction and drug repurposing report generation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies:
Demonstrations, pages 66–77, Online. Association for Computational Linguistics.
Junyuan Xie, Ross Girshick, and Ali Farhadi. 2016.
Unsupervised deep embedding for clustering analysis. In Proceedings of The 33rd International Conference on Machine Learning, volume 48 of *Proceedings of* Machine Learning Research, pages 478–487, New York, New York, USA. PMLR.
Dejiao Zhang, Feng Nan, Xiaokai Wei, Shang-Wen Li, Henghui Zhu, Kathleen McKeown, Ramesh Nallapati, Andrew O. Arnold, and Bing Xiang. 2021a.
Supporting clustering with contrastive learning. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,
pages 5419–5430, Online. Association for Computational Linguistics.
Haidong Zhang, Wancheng Ni, Meijing Zhao, and Ziqi Lin. 2019. Cluster-gated convolutional neural network for short text classification. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 1002–1011, Hong Kong, China. Association for Computational Linguistics.
Hanlei Zhang, Hua Xu, Ting-En Lin, and Rui Lyu.
2021b. Discovering new intents with deep aligned clustering. Proceedings of the AAAI Conference on Artificial Intelligence, 35(16):14365–14373.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q.
Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with BERT. In *8th International* Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Wei Zhang, Chao Dong, Jianhua Yin, and Jianyong Wang. 2021c. Attentive representation learning with adversarial training for short text clustering. *IEEE*
Transactions on Knowledge and Data Engineering.
Xiongyi Zhang, Jan-Willem van de Meent, and Byron Wallace. 2021d. Disentangling representations of text by masking transformers. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 778–791, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Runcong Zhao, Miguel Arana-catania, Lixing Zhu, Elena Kochkina, Lin Gui, Arkaitz Zubiaga, Rob Procter, Maria Liakata, and Yulan He. 2023. PANACEA:
An automated misinformation detection system on COVID-19. In *Proceedings of the 17th Conference of* the European Chapter of the Association for Computational Linguistics: System Demonstrations, pages 67–74, Dubrovnik, Croatia. Association for Computational Linguistics.
Lixing Zhu, Zheng Fang, Gabriele Pergola, Robert Procter, and Yulan He. 2022. Disentangled learning of stance and aspect topics for vaccine attitude detection in social media. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1566–1580, Seattle, United States. Association for Computational Linguistics.
## A Appendix A.1 Dataset Details
In this section, we provide a detailed analysis of the dataset instances.
In the Covid-Moral-Foundation (CMF) dataset, each tweet is associated with a pre-defined and manually annotated argumentative pattern. The annotated tweets are categorized by moral foundations that can be regarded as coarse aspects distilled from argumentative patterns. Each moral foundation is associated with two polarities (e.g.,
care/harm), and is treated as the group label of a cluster of tweets. The polarity is given by the vaccination stance label. In the example in Table A1,
'The vaccine is safe' is the argumentative pattern, while *'Care/Harm'* is the categorical label denoting the aspect group. An exhaustive list to the argumentative patterns can be found in the original paper of Pacheco et al. (2022).
In the Vaccination Attitude Detection (VAD), a training instance comprises a stance label, a categorical aspect label and an aspect text span. For example, Table A1 shows the tweet 'Study reports Oxford/AstraZeneca vaccine is protective against Brazilian P1 strain of COVID19.' is annotated with the text span *'Oxford/AstraZeneca vaccine is protective against Brazilian P1 strain of COVID19'*, and its aspect belongs to the aspect category *'Immunity Level'*.
## A.2 Training Details
We experiment with a pre-trained DeBERTa8 base model. The hidden size is dH = 768. We set both dV and dK = 768, and dz = 1024. The learning rate is initialised with η = 3e − 5 and the number of epochs is 10. We use Linear Warmup to enforce the triangular learning rate.
We train the model with two Titan RTX graphics cards on a station of an Intel(R) Xeon(R) W-2245 CPU. The training process takes less than 9 hours, with the inference time under 30 minutes.
## A.3 Additional Results Clustering With Different Latent Vectors We
experiment clustering using the disentangled aspect vectors za or the content vectors z (i.e., without the disentanglement of aspects and stances) on both CMF and VAD datasets, and have the detailed results reported in Table A2. It can be observed that using the disentangled aspect vectors for clustering gives better results compared to using the content vectors, regardless of the clustering approaches used. On CMF, the best results are obtained using k-medoids, while on VAD, similar results are obtained using either k-medoids or AHC.
8https://huggingface.co/docs/transformers/
model_doc/deberta-v2
| CMF | | |
|---------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------|------------------------|
| Tweet | Argumentative | Aspect |
| Pattern | Group | |
| Vaccine decreases your chances of getting severe life-threat. | The vaccine is safe | Care/Harm |
| There is no way someone can tell me that the COVID vaccine does not cause harm to pregnant women. | The covid | |
| vaccine is harmful for pregnant women | Care/Harm | |
| and kids | | |
| The tyranny is not locking down and not using the vaccine to appease the crazies who think it's oppression. | The vaccine mandate is not | |
| oppression because it will help to end this pandemic | Liberty/ Oppression | |
| VAD | | |
| Tweet | Aspect | Aspect |
| Span | Group | |
| Study reports Oxford/AstraZeneca vaccine is protective against Brazilian P1 strain of COVID19. | Oxford/AstraZeneca vaccine is protective against Brazilian P1 strain of COVID19 | Immunity Level |
| @user @user @user team, told Reuters while the government admits, it is unknown whether COVID19 mRNA Vaccine BNT162b2 has an impact on fertility. | COVID19 mRNA Vaccine BNT162b2 has an impact on fertility | (Adverse) Side Effects |
| Latent Vector | CMF | VAD | | |
|------------------|-------|-------|-------|------|
| CA | AvgBS | CA | AvgBS | |
| DOC-k-means-za | 51.7 | 18.5 | 64.2 | 20.3 |
| DOC-k-means-z | 48.3 | 17.5 | 60.7 | 18.7 |
| DOC-k-medoids-za | 54.2 | 20.7 | 66.7 | 21.4 |
| DOC-k-medoids-z | 50.8 | 18.0 | 61.4 | 18.9 |
| DOC-AHC-za | 52.5 | 19.1 | 66.7 | 22.8 |
| DOC-AHC-z | 49.2 | 17.8 | 61.9 | 19.0 |
Qualitative Results We illustrate in Figure A1 and Figure A2 the clustering results and the latent space of the entangled/disentangled representation projected by the t-SNE method. Figure A1(ab) display the cluster assignments after permutation, whereas Figure A2(a-b) show the groundtruth labels. The class labels are rendered by colours whose detailed mapping is provided in Figure A2. From Figure A1, we see clear improvements in terms of clustering quality on both datasets when the model is compared against the DeBERTa-averaged-embedding. Figure 2 shows more separated groups thanks to the disentangled representation, providing strong distance-based discrimination for the clustering algorithms. As a result, simple clustering methods like k-means can achieve competitive results against deep clustering methods (i.e., SCCL and VAD), which have access to weak labels or data augmentations.
![12_image_0.png](12_image_0.png)
## Color Mappings In Visualisation
We illustrate in Figure A2 the color mapping from t-SNE plots to the true aspect category labels.
It is shown that the vectors are more separated and their grouping aligns closer to the ground-truth labels when they are clustered on the space of za, indicating that such latent vectors provide strong distance-based discrimination among groups in the Euclidean space, as has been used as a distance metric in the t-SNE algorithm. We also experiment with cosine-similarity metric for k-medoids and the results have been reported in the Experiments section.
![13_image_1.png](13_image_1.png)
![13_image_0.png](13_image_0.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 7
✗ A2. Did you discuss any potential risks of your work?
Our work does not introduce a novel dataset.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3
✓ B1. Did you cite the creators of artifacts you used?
Section 3
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section 4
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Our work does not create new datasets. Our model is not designed for specific purposes.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
The data are anonymous, as stated in their publications of origin. We double-checked the datasets and can confirm that they are anonymous.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 4
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 4
## C ✓ **Did You Run Computational Experiments?** Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix A2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4, Section 7, Appendix A2
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
cohen-bar-2023-temporal | Temporal Relation Classification using {B}oolean Question Answering | https://aclanthology.org/2023.findings-acl.116 | Classifying temporal relations between a pair of events is crucial to natural language understanding and a well-known natural language processing task. Given a document and two event mentions, the task is aimed at finding which one started first. We propose an efficient approach for temporal relation classification (TRC) using a boolean question answering (QA) model which we fine-tune on questions that we carefully design based on the TRC annotation guidelines, thereby mimicking the way human annotators approach the task. Our new QA-based TRC model outperforms previous state-of-the-art results by 2.4{\%}. | # Temporal Relation Classification Using Boolean Question Answering
Omer Cohen Efi Arazi School of Computer Science Reichman University, Israel [email protected]
## Abstract
Classifying temporal relations between a pair of events is crucial to natural language understanding and a well-known natural language processing task. Given a document and two event mentions, the task is aimed at finding which one started first. We propose an efficient approach for temporal relation classification (TRC) using a boolean question answering
(QA) model which we fine-tune on questions that we carefully design based on the TRC annotation guidelines, thereby mimicking the way human annotators approach the task. Our new QA-based TRC model outperforms previous state-of-the-art results by 2.4%.
## 1 Introduction
Events in stories are not necessarily mentioned in a chronological order. The timeline of events is important for understanding the main narrative of a story as well as the correct order of actions. For example, the timeline may be used directly by clinicians looking for a convenient way to explore the disease course of their patients, or by algorithms to follow instructions in the right order, given as text, such as in cooking recipes. Building the timeline is done based on two main subtasks: (1) event extraction, that is, detecting the most important events in a given textual input, and (2) temporal relation classification (TRC), also known as temporal relation extraction, which is about putting two events, given as gold spans, in the right chronological order. For example, consider the following text: "Before you put the cake in the oven, say a little prayer." In the first subtask, known as *event extraction*, we would like to detect only the relevant events for our domain of interest. In this case, the words put and say are both verbs representing some relevant actions; therefore, we mark them as events. In the second subtask, TRC, we put every two events in a chronological order by classifying them using a closed set of temporal relations. In this case, the 1843 Kfir Bar School of Computer Science, College of Management, Israel [email protected] two events put and say should be assigned with the label *AFTER* indicating that put is happening after say in a chronological order.
In this study we focus on TRC, which is typically handled as a classification problem of two events provided along with the context in which they are mentioned. MATRES (Ning et al., 2018b) is one of the dominant datasets for TRC comprised of news documents manually annotated with temporal relation labels. The events are deterministically chosen to be all actions (mostly verbs) mentioned in the documents. Every pair of events (*n, m*) are manually labeled with one of four labels: BEFORE
(n happened before m), AFTER (n happened after m), EQUAL (n and m happened at the same time),
and VAGUE (it is impossible to know which event happened before the other).
Traditional classification approaches have already been demonstrated for TRC. In this work, we get inspiration from a relatively new promising approach for solving natural language processing
(NLP) tasks, in which the target algorithm is based on a reduction of the task to another problem. In our case, we solve the TRC problem using a model that handles the boolean question-answering (QA)
task, which is about answering a Yes/No question given a passage used as a context. We decide to use boolean QA as our proxy problem due to the way the annotation work for building MATRES has been done. In the main annotation guidelines of MATRES (Ning et al., 2018b), the annotators are asked to assign a label to a pair of events (*n, m*)
by answering the two following questions: (1) Is it possible that the start time of n is before the start time of m? and (2) Is it possible that the start time of m is before the start time of n? There are four possible answer combinations, each is mapped to one label: (yes, no) ⇒ BEFORE, (no, yes) ⇒
AFTER, (no, no) ⇒ EQUAL, and (yes, yes) ⇒
VAGUE. Therefore, we transform an instance of TRC, composed of a pair of events and a document, into a pair of Yes/No QA instances, one for each of the two questions, and then fine-tune a Yes/No QA model to answer them. The final prediction is made based on the combination of the Yes/No answers retrieved by the QA model.
## 2 Related Work
TRC has received increasing levels of attention in the past decade. There is a relatively long list of related shared tasks (Verhagen et al., 2007, 2010; Bethard et al., 2016; MacAvaney et al., 2017). Modern approaches for TRC use some sort of a neural network as a classifier. For example, Dligach et al.
(2017) showed that a neural network that uses only words as input, performs better than the traditional models that process features which were manually created. A more modern approach for TRC is based on large pre-trained language models. Han et al.
(2021) continued to pre-train a language model before fine-tuning it on TRC; Zhou et al. (2021) incorporated a global inference mechanism to tackle the problem at the document level; Han et al. (2019a)
combined a recurrent neural network (RNN) over BERT (Devlin et al., 2019) embedding and a structured support vector machine (SSVM) classifier to make joint predictions; Ning et al. (2019) integrated BERT with a temporal commonsense knowledge base, and improved accuracy significantly by 10% over the previously known best result; and Han et al. (2019b) developed a multitask model for the two related subtasks, event extraction and TRC.
Mathur et al. (2021) train a gated relational graph convolution network using rhetorical discourse features and temporal arguments from semantic role labels, in addition to some traditional syntactic features. Wang et al. (2022b) use a unified form of the document creation time to improve modeling and classification performance, and Wang et al.
(2022a) improve the faithfulness of TRC extraction model. Zhang et al. (2021) built a syntactic graph constructed from one or two continuous sentences and combined it with a pre-trained language model.
The best result so far has been reported recently by Zhou et al. (2022), who extract relational syntactic and semantic structures, and encode them using a graph neural network. In another recent work (Man et al., 2022), the authors introduce a novel method to better model long document-level contexts by detecting and encoding important sentences in the document. None of those studies use QA to address the TRC problem.
Our boolean QA-based approach continues to improve on Zhou et al.'s (2022) work, achieving a new stat-of-the-art result for TRC.
## 3 Datasets
We conduct experiments with two datasets. MATRES (Ning et al., 2018b) is a composition of three datasets (TIMEBANK, AQUAINT and PLATINUM) which were re-annotated following new guidelines. Following previous work, we use TIMEBANK and AQUAINT together as a training set and PLATINUM as a testing set. For validation and development we use a different dataset named TCR (Ning et al., 2018a), which has been used similarly in other works (Zhang et al.,
2021). As mentioned above, MATRES has four labels: BEFORE, AFTER, EQUAL, and VAGUE.
TimeBank-Dense (Cassidy et al., 2014), or TBDense in short, is the second dataset which we use in this work. TB-Dense has two additional labels: INCLUDES and IS-INCLUDED. Following common practices, we evaluate our models using the relaxed micro-average F1 score (i.e., for MATRES ignoring all mistakes on VAGUE instances during evaluation, and for TB-Dense completely removing VAGUE instances from the validation and testing sets). Overall, MATRES contains 12, 736 training instances, 837 testing instances, and 2, 600 validation instances from TRC. TB-Dense contains 4, 032 training instances, 1, 427 testing instances, and 629 validation instances. The label distributions is summarized under Appendix B.
## 4 Methodology
We design our problem as Yes/No question answering problem. Therefore, we fine-tune a pre-trained language model (PLM) by taking a Yes/No QA
classification approach for which every instance is composed of a passage (text) and a question, provided along with a Yes/No answer. Our QA model is designed as a traditional classifier; the input is a concatenation of the passage and the question with a special separator token in between, and the output is a two-way label distribution vector. We use RoBERTa (Liu et al., 2019), which comes in two sizes, base and large; we use both.
An instance of TRC is composed of a document, two event spans, and a label. In order to use our QA model for TRC, we convert each such instance into two or three Yes/No QA instances, which we use for fine-tuning and testing. Each QA instance
![2_image_0.png](2_image_0.png)
![2_image_1.png](2_image_1.png)
is composed of a passage and a question. Therefore, we cut the sentence from the input document, containing the spans of the two events, and use it as a passage. Sentence breaks are detected using full stops (e.g., a dot followed by a white space). The passage is paired with the Yes/No questions, generating multiple QA instances. MATRES uses a label set of size four, and TB-Dense has two additional labels: INCLUDES and IS-INCLUDED. Therefore, for MATRES we compose the following two question templates (<EVENT 1> and <EVENT 2>
are used here as placeholders), inspired by the TRC annotation guidelines: (1) *Is it possible that*
<EVENT 1> started before <EVENT 2>? and
(2) *Is it possible that <EVENT 2> started before*
<EVENT 1>? For TB-Dense, we add another question template: (3) Is it possible that <EVENT 1>
ended before <EVENT 2>? We experiment with additional phrasing, as described in the following section. The answers to the questions are determined by the label of the TRC instance, using Table 1.
| Question Templates | MATRES | TB-Dense | | |
|----------------------|----------|------------|--------|-------------|
| 1 | 2 | 3 | | |
| no | no | <not used> | EQUAL | EQUAL |
| yes | yes | <not used> | VAGUE | VAGUE |
| yes | no | yes | BEFORE | BEFORE |
| yes | no | no | BEFORE | INCLUDES |
| no | yes | yes | AFTER | IS-INCLUDED |
| no | yes | no | AFTER | AFTER |
Each QA instance is processed independently during fine-tuning. At inference time we run the instances through the model and assign a TRC label based on the answers.
Naturally, a document may contain more events than the two relevant ones. Therefore, we use markers (Baldini Soares et al., 2019) in order to mark the two relevant events. Specifically, each relevant event is surrounded by the '@' character in both, the passage and the question. Figure 1 demonstrates how we process a MATRES instance.
## 5 Experiments And Results
Table 2 summarizes our evaluation results on MATRES and TB-Dense, using the two sizes of RoBERTa. We compare our results with two baseline models, and some previous work. We experiment with three variations for the questions (only for the two MATRES-related questions; for TBDense we only use the best out of the three),1as reported in the three first rows of Table 2:
QV1: *<EVENT1> before <EVENT2>?*
QV2: Is it possible that the start time of <EVENT1>
is before the start time of <EVENT2>?
QV3: Is it possible that <EVENT1> started before
<EVENT2>?
We fine-tune our models for the duration of five epochs and evaluate them on the validation set every epoch; we use the best checkpoint as the output model. We run every experiment three times using different seeds and report on the averaged accuracy and standard deviation on the testing set.2 The MATRES model with the best question variation (QV3)
has been further processed with two additional procedures: Perturbation and fine-tuning with BoolQ.
Perturbation. To achieve better model generalization, we perturb the instances of the training
| Model | MATRES | TB-Dense | | |
|---------------------------------------------------|-----------|------------|-----------|-----------|
| Base PLM | Large PLM | Base PLM | Large PLM | |
| Ours Our-Model (QV1) | 84.7±0.7 | 85.2±0.6 | - | - |
| Our-Model (QV2) | 85.1±0.8 | 85.9±1.1 | - | - |
| Our-Model (QV3) | 85.4±0.6 | 86.3±0.7 | 72.9±0.5 | 73.21±0.6 |
| Our-Model (QV3) + AUG | 86.4±0.5 | 87.7±0.6 | 73.8±0.7 | 74.34±0.7 |
| Our-Model (QV3) + AUG + BoolQ | 86.4±0.6 | 87.5±0.5 | - | - |
| Baselines Standard QA (QV1) | 73.1±0.7 | 74.6±0.6 | 61.3±0.7 | 62.2±0.5 |
| Standard QA (QV2) | 71.1±0.6 | 72.5±0.7 | 60.1±0.6 | 61.3±0.6 |
| Sentence Classification | 70.2±0.7 | 70.9±1.1 | 58.4±0.4 | 59.7±0.6 |
| Others Structrued Joint Model (Han et al., 2019b) | 75.5 | - | 64.5 | - |
| ECONET (Han et al., 2021) | - | 79.3 | - | 66.8 |
| (Zhang et al., 2021) | 79.3 | 80.3 | 66.7 | 67.1 |
| (Wang et al., 2020) | - | 78.8 | - | - |
| TIMERS (Mathur et al., 2021) | 82.3 | - | 67.8 | - |
| SCS-EERE (Man et al., 2022) | 83.4 | - | - | - |
| Faithfulness (Wang et al., 2022a) | 82.7 | - | - | - |
| DTRE (Wang et al., 2022b) | - | - | 72.3 | - |
| RSGT (Zhou et al., 2022) | 84.0 | - | - | - |
set, using nlpaug, 3a data augmentation library for text. We employ the optical-character recognition
(OCR) error simulation, using the default argument values, which replaces about 30% of the characters
(except the characters of the events) with random letters or digits considered as common OCR mistakes (e.g., l vs. 1). We modify the original training instances in place; therefore, we do not increase the size of the training set. In Table 2 we refer to this procedure as AUG. It adds about 1% to F1 in the base model, and a slightly higher percentage in the large model, on both datasets.
BoolQ. Before fine-tuning on MATRES, we finetune the model on the BoolQ dataset (Clark et al.,
2019) in which every instance is composed of a passage (text) and a question, provided along with a Yes/No answer. Overall, BoolQ has 9, 427 training instances, which we use for fine-tuning. In Table 2 we refer to this procedure as BoolQ. As reported, this step does not improve performance. Therefore, we did not use it for TB-Dense.
Baseline Algorithms. To assess the contribution of our Yes/No QA design, we define two baseline algorithms. The first baseline is a traditional multiclass QA model, which is given with the same passage as in our original Yes/No QA model, paired with only one question that takes one of the labels as an answer. We experiment with two question variations:
QV1: *What is the chronological order of the two* marked events: <EVENT 1> and <EVENT 2>?
QV2: Is <EVENT 1> happening before, after or at the same time as <EVENT 2>?
The second baseline is a simple multiclass sentence-classification RoBERTa model, which receives as input for this model comprises only the passage, and the output is one of the labels from the dataset. As seen in Table 2, our models outperform the baselines and previous work, introducing a new state-of-the-art result for TRC on both datasets.4
## 6 Conclusions
We proposed a novel approach for TRC using a pretrained language model fine-tuned for a Yes/No QA
classification task. Our model was fine-tuned to answer questions which were originally designed to support decision making during the annotation process. We believe we have demonstrated the potential of this method to leverage the Yes/No QA
design to break down the prediction process into a set of Yes/No questions; our approach outperforms existing methods, achieving a new state-of-the-art result for TRC on two datasets. There is a potential practical limitation to this work, which is related to time complexity and speed performance. Since every instance is transformed into multiple QA instances, it may take a relatively long time to process a document.
4Qualitative analysis is provided in Appendix C.
## Limitations
There are two primary limitations of the system presented in this work. First, each set of questions we use for training the QA model is designed specifically for the dataset we trained our model on. While we provide a set of questions for each of the two common TRC datasets, we believe that training the model on other datasets may require rewrite of the questions. Second, as mentioned in the previous section, every TRC instance is converted into multiple QA instances which we then process individually. This may increase the overall inference time and pose a practical limitation which needs to be carefully considered.
## Acknowledgements
This research was supported by the Ministry of Science and Technology, Israel.
## References
Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks:
Distributional similarity for relation learning. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 2895–
2905, Florence, Italy. Association for Computational Linguistics.
Steven Bethard, Guergana Savova, Wei-Te Chen, Leon Derczynski, James Pustejovsky, and Marc Verhagen.
2016. SemEval-2016 task 12: Clinical TempEval. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1052–
1062, San Diego, California. Association for Computational Linguistics.
Taylor Cassidy, Bill McDowell, Nathanael Chambers, and Steven Bethard. 2014. An annotation framework for dense event ordering. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 501–506, Baltimore, Maryland. Association for Computational Linguistics.
Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In *Proceedings* of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long and Short Papers), pages 2924–2936, Minneapolis, Minnesota. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of
the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Dmitriy Dligach, Timothy Miller, Chen Lin, Steven Bethard, and Guergana Savova. 2017. Neural temporal relation extraction. In *Proceedings of the 15th* Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 746–751, Valencia, Spain. Association for Computational Linguistics.
Rujun Han, I Hsu, Mu Yang, Aram Galstyan, Ralph Weischedel, Nanyun Peng, et al. 2019a. Deep structured neural network for event temporal relation extraction. *arXiv preprint arXiv:1909.10094*.
Rujun Han, Qiang Ning, and Nanyun Peng. 2019b. Joint event and temporal relation extraction with shared representations and structured prediction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 434–444, Hong Kong, China. Association for Computational Linguistics.
Rujun Han, Xiang Ren, and Nanyun Peng. 2021.
ECONET: Effective continual pretraining of language models for event temporal reasoning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP),
Punta Cana, Dominican Republic.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. *ArXiv*, abs/1907.11692.
Sean MacAvaney, Arman Cohan, and Nazli Goharian.
2017. GUIR at SemEval-2017 task 12: A framework for cross-domain clinical temporal information extraction. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017),
pages 1024–1029, Vancouver, Canada. Association for Computational Linguistics.
Hieu Man, Nghia Trung Ngo, Linh Ngo Van, and Thien Huu Nguyen. 2022. Selecting optimal context sentences for event-event relation extraction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 11058–11066.
Puneet Mathur, Rajiv Jain, Franck Dernoncourt, Vlad Morariu, Quan Hung Tran, and Dinesh Manocha. 2021. TIMERS: Document-level temporal relation extraction. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers),
pages 524–533, Online. Association for Computational Linguistics.
Qiang Ning, Zhili Feng, Hao Wu, and Dan Roth. 2018a.
Joint reasoning for temporal and causal relations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2278–2288, Melbourne, Australia. Association for Computational Linguistics.
Qiang Ning, Sanjay Subramanian, and Dan Roth. 2019.
An improved neural baseline for temporal relation extraction. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6203–6209, Hong Kong, China. Association for Computational Linguistics.
Qiang Ning, Hao Wu, and Dan Roth. 2018b. A multiaxis annotation scheme for event temporal relations.
In *Proceedings of the 56th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 1318–1328, Melbourne, Australia. Association for Computational Linguistics.
Marc Verhagen, Robert Gaizauskas, Frank Schilder, Mark Hepple, Graham Katz, and James Pustejovsky.
2007. SemEval-2007 task 15: TempEval temporal relation identification. In *Proceedings of the* Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 75–80, Prague, Czech Republic. Association for Computational Linguistics.
Marc Verhagen, Roser Saurí, Tommaso Caselli, and James Pustejovsky. 2010. SemEval-2010 task 13:
TempEval-2. In *Proceedings of the 5th International* Workshop on Semantic Evaluation, pages 57–62, Uppsala, Sweden. Association for Computational Linguistics.
Haoyu Wang, Muhao Chen, Hongming Zhang, and Dan Roth. 2020. Joint constrained learning for eventevent relation extraction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 696–706. Association for Computational Linguistics.
Haoyu Wang, Hongming Zhang, Yuqian Deng, Jacob R Gardner, Muhao Chen, and Dan Roth. 2022a.
Extracting or guessing? improving faithfulness of event temporal relation extraction. arXiv preprint arXiv:2210.04992.
Liang Wang, Peifeng Li, and Sheng Xu. 2022b. DCTcentered temporal relation extraction. In *Proceedings* of the 29th International Conference on Computational Linguistics, pages 2087–2097.
Shuaicheng Zhang, Lifu Huang, and Qiang Ning. 2021.
Extracting temporal event relation with syntacticguided temporal graph transformer. *arXiv preprint* arXiv:2104.09570.
Jie Zhou, Shenpo Dong, Hongkui Tu, Xiaodong Wang, and Yong Dou. 2022. RSGT: Relational structure guided temporal relation extraction. In *Proceedings*
of the 29th International Conference on Computational Linguistics, pages 2001–2010.
Yichao Zhou, Yu Yan, Rujun Han, J. Harry Caufield, Kai-Wei Chang, Yizhou Sun, Peipei Ping, and Wei Wang. 2021. Clinical temporal relation extraction with probabilistic soft logic regularization and global inference. In *AAAI*.
## A Technical Details
All our models are trained with the same learning rate value of 0.00001 and a batch size value of 20. We use Pytorch's distributed-data-parallel
(DDP) mechanism with SyncBatchNorm over two GALAX GeForce RTX™ 3090 GPUs. Fine-tuning our QA model on the MATRES training set takes us about 25 minutes, and 13 minutes on TB-Dense.
## B Label Distribution
We summarize the label distributions of MATRES
and TB-Dense in Tables 3 and 4, respectively.
| Label | Train | Val. | Test |
|---------|---------|--------|--------|
| VAGUE | 12.0 | 0.0 | 3.8 |
| EQUAL | 3.5 | 0.3 | 13.5 |
| BEFORE | 50.7 | 67.2 | 50.6 |
| AFTER | 33.8 | 32.5 | 32.1 |
Table 3: Label distribution (%) in MATRES.
| Label | Train | Val. | Test |
|-------------|---------|--------|--------|
| VAGUE | 48.4 | 39.3 | 43.3 |
| EQUAL | 2.9 | 2.9 | 2.6 |
| BEFORE | 20.2 | 24.6 | 26 |
| AFTER | 16.9 | 27.4 | 19.3 |
| INCLUDES | 5.1 | 2.7 | 4.3 |
| IS-INCLUDED | 6.5 | 3.1 | 4.5 |
Table 4: Label distribution (%) in TB-Dense.
## C Qualitative Analysis
Table 5 lists some examples from MATRES. The first column contains the passage in which we highlight the two relevant events. The second and third columns show the answers given by the fine-tuned boolean QA model, following by the forth and fifth columns which provide the corresponding model's label and the gold label, as assigned by the annotators. Finally, the last column provides indication for whether the model was right or wrong.
Some examples are relatively simple, while other are more challenging. For instance, Example 3 was manually assigned with EQUAL, indicating that none of the actions **found** and **floating** had started before the other. However, our QA model might be right about the second question, answering yes, since one may assume that the pigs were *floating* even before they were *found*.
Example 5 shows the difficulty in putting two events in a chronological order, when one of them did not really happen. This difficulty is addressed by the creators of MATRES by introducing the concept of *multi-axis modeling* to separate the story into different temporal axes, which allows the annotators to ignore some pairs of events that do not align chronologically.
| Passage+Events | Ans. 1 | Ans. 2 | Prediction | Gold | Correct? | |
|------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|--------------|--------|------------|-----|
| 1 | President Barack Obama arrived in refugee-flooded Jordan on Friday after scoring a diplomatic coup just before leaving Israel when Prime Minister Benjamin Netanyahu apologized to Turkey for a 2010 commando raid that killed nine activists on a Turkish vessel in a Gaza-bound flotilla. | No | Yes | AFTER | AFTER | Yes |
| 2 | The FAA on Friday announced it will close 149 regional airport control towers because of forced spending cuts - sparing 40 others that the FAA had been expected to shutter. | Yes | No | BEFORE | BEFORE | Yes |
| 3 | China's state leadership transition has taken place this month against an ominous backdrop. More than 16,000 dead pigs have been found floating in rivers that provide drinking water to Shanghai. | Yes | Yes | VAGUE | EQUAL | No |
| 4 | China's state leadership transition has taken place this month against an ominous backdrop. More than 16,000 dead pigs have been found floating in rivers that provide drinking water to Shanghai. A haze akin to volcanic fumes cloaked the capital, causing convulsive coughing and obscuring the portrait of Mao Zedong on the gate to the Forbidden City. | Yes | No | BEFORE | AFTER | No |
| 5 | Before the arrival of Keep, which Google launched this week, there was no default note-taking app for Android. It was a glaring hole, considering that Apple's iPhone has built-in Notes and Reminders apps that can be powered by Siri. Instead of settling for a bare bones app to fill the void, the search giant took things one step further. | Yes | No | BEFORE | AFTER | No |
| 6 | Former President Nicolas Sarkozy was informed Thursday that he would face a formal investigation into whether he abused the frailty of Liliane Bettencourt, 90, the heiress to the L'Oreal fortune and France's richest woman, to get funds for his 2007 presidential campaign. | No | Yes | AFTER | AFTER | Yes |
Table 5: Examples from MATRES, provided along with predictions given by our model.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Left blank.
A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Left blank. |
chiang-lee-2023-synonym | Are Synonym Substitution Attacks Really Synonym Substitution Attacks? | https://aclanthology.org/2023.findings-acl.117 | In this paper, we explore the following question: Are synonym substitution attacks really synonym substitution attacks (SSAs)?We approach this question by examining how SSAs replace words in the original sentence and show that there are still unresolved obstacles that make current SSAs generate invalid adversarial samples. We reveal that four widely used word substitution methods generate a large fraction of invalid substitution words that are ungrammatical or do not preserve the original sentence{'}s semantics. Next, we show that the semantic and grammatical constraints used in SSAs for detecting invalid word replacements are highly insufficient in detecting invalid adversarial samples. | # Are Synonym Substitution Attacks Really Synonym **Substitution Attacks?**
Cheng-Han Chiang National Taiwan University, Taiwan [email protected]
## Abstract
In this paper, we explore the following question: Are synonym substitution attacks really synonym substitution attacks (SSAs)? We approach this question by examining how SSAs replace words in the original sentence and show that there are still unresolved obstacles that make current SSAs generate invalid adversarial samples. We reveal that four widely used word substitution methods generate a large fraction of invalid substitution words that are ungrammatical or do not preserve the original sentence's semantics. Next, we show that the semantic and grammatical constraints used in SSAs for detecting invalid word replacements are highly insufficient in detecting invalid adversarial samples.
## 1 Introduction
Deep learning-based natural language processing models have been extensively used in different tasks in many domains and have shown strong performance in different realms. However, these models seem to be astonishingly vulnerable in that their predictions can be misled by some small perturbations in the original input (Gao et al., 2018; Tan et al., 2020). These *imperceptible* perturbations, while not changing humans' predictions, can make a well-trained model behave worse than random.
One important type of adversarial attack in natural language processing (NLP) is the **synonym**
substitution attack (SSA). In SSAs, an adversarial sample is constructed by substituting some words in the original sentence with their synonyms (Alzantot et al., 2018; Ren et al., 2019; Garg and Ramakrishnan, 2020; Jin et al., 2020; Li et al., 2020; Maheshwary et al., 2021). This ensures that the adversarial sample is semantically similar to the original sentence, thus fulfilling the imperceptibility requirement of a valid adversarial sample. While substituting words with their semantic-related counterparts can retain the semantics of the original sentence, these attacks often Hung-yi Lee National Taiwan University, Taiwan [email protected] utilize constraints to further guarantee that the generated adversarial samples are grammatically correct and semantically similar to the original sentence. These SSAs have all been shown to successfully bring down well-trained text classifiers' performance.
However, some recent works observe, by human evaluations, that the quality of the generated adversarial samples of those SSAs is fairly low and is highly perceptible by human (Morris et al., 2020a; Hauser et al., 2021). These adversarial samples often contain grammatical errors and do not preserve the semantics of the original samples, making them difficult to understand. These characteristics violate the fundamental criteria of a *valid adversarial* sample: preserving semantics and being imperceptible to humans. This motivates us to investigate what causes those SSAs to generate invalid adversarial samples. Only by answering this question can we move on to design more realistic SSAs in the future.
In this paper, we are determined to answer the following question: Are synonym substitution attacks in the literature really *synonym* substitution attacks? We explore the answer by scrutinizing the key components in several important SSAs and why they fail to generate valid adversarial samples.
Specifically, we conduct a detailed analysis of how the word substitution sets are obtained in SSAs, and we look into the semantic and grammatical constraints used to filter invalid adversarial samples.
We have the following astonishing observations:
- When substituting words by WordNet synonym sets, current methods neglect the word sense differences within the substitution set.
(Section 3.1)
- When using counter-fitted GloVe embedding space or BERT to generate the substitution set, the substitution set only contains a teeny-tiny fraction of synonyms. (Section 3.2)
- Using word embedding cosine similarity or sentence embedding cosine similarity to filter words in the substitution set does not necessarily exclude semantically invalid word substitutions. (Section 4.1 and Section 4.2)
- The grammar checker used for filtering ungrammatical adversarial samples fails to detect most erroneous verb inflectional forms in a sentence. (Section 4.3)
## 2 Backgrounds
In this section, we provide an overview of SSAs and introduce some related notations that will be used throughout the paper.
## 2.1 Synonym Substitution Attacks (Ssas)
Given a victim text classifier trained on a dataset D*train* and a clean testing data xori sampled from the same distribution of Dtrain; xori =
{x1, · · · , xT } is a sequence with T tokens. An SSA attacks the victim model by constructing an adversarial sample xadv = {x
′
1
, · · · , x
′
T} by swapping the words in xori with their semantic-related counterparts. For xadv to be considered as a **valid**
adversarial sample of xori, a few requirements must be met (Morris et al., 2020a): (0) xadv should make the model yield a wrong prediction while the model can correctly classify xori. (1) xadv should be semantically similar with xori. (2) xadv should not induce new grammar errors compared with xori.
(3) The word-level overlap between xadv and xori should be high enough. (4) The modification made in xadv should be natural and non-suspicious. In our paper, we will refer to the adversarial samples that fail to meet the above criteria as invalid adversarial samples.
SSAs rely on heuristic procedures to ensure that xadv satisfies the preceding specifications. Here, we describe a canonical pipeline of generating xadv from xori (Morris et al., 2020b). Given a clean testing sample xori that the text classifier correctly predicts, an SSA will first generate a candidate word substitution set Sxi for each word xi. The process of generating the candidate set Sxi is called transformation. Next, the SSA will determine which word in xori should be substituted first, and which word should be the next to swap, etc. After the word substitution order is decided, the SSA will iteratively substitute each word xiin xori using the candidate words in Sxi according to the predetermined order. In each substitution step, an xi is replaced by a word in Sxi
, and a new x*swap* is obtained. When an x*swap* is obtained, some constraints are used to verify the validity of x*swap*.
The iterative word substitution process will end if the model's prediction is successfully corrupted by a substituted sentence that sticks to the constraints, yielding the desired xadv eventually.
Clearly, the transformations and the constraints are critical to the quality of the final xadv. In the remaining part of the paper, we will look deeper into the transformations and constraints used in SSAs and their role in creating adversarial samples1. Next, we briefly introduce the transformations and constraints that have been used in SSAs.
## 2.2 Transformations
Transformation is the process of generating the substitution set Sxi for a word xiin xori. There are four representative transformations in the literature.
WordNet Synonym Transformation constructs Sxi by querying a word's synonym using WordNet (Miller, 1995; University, 2010), a lexical database containing the word sense definition, synonyms, and antonyms of the words in English. This transformation is used in PWWS (Ren et al., 2019) and LexicalAT (Xu et al., 2019).
## Word Embedding Space Nearest Neighbor
Transformation constructs Sxi by looking up the word embedding of xiin a word embedding space, and finding its k nearest neighbors
(kNN) in the word embedding space. Using kNN
for word substitution is based on the assumption that semantically related words are closer in the word embedding space. Counter-fitted GloVe embedding space (Mrkšic et al. ´ , 2016) is the embedding space obtained from post-processing the GloVe embedding space (Pennington et al., 2014).
Counter-fitting refers to the process of pulling away antonyms and narrowing the distance between synonyms. This transformation is adopted in TextFooler (Jin et al., 2020), Genetic algorithm attack (Alzantot et al., 2018), and TextFoolerAdj (Morris et al., 2020a).
Masked Language Model (MLM) MaskInfilling Transformation constructs Sxi by masking xiin xori and asking an MLM to predict the masked token; MLM's top-k prediction of the masked token forms the word substitution set of xi. Widely adopted MLMs includes BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019).
Using MLM mask-infilling to generate a candidate set relies on the belief that MLMs can generate fluent and semantic-consistent substitutions for xori.
This method is used in BERT-ATTACK (Li et al.,
2020) and CLARE (Li et al., 2021).
MLM Reconstruction Transformation also uses MLMs. When using MLM reconstruction transformation to generate the candidate set, one just feeds the MLM with the original sentence xori without masking any tokens in the sentence. Here, the MLM is not performing mask-infilling but reconstructs the input tokens from the unmasked inputs. For each word xi, one can take its top-k token reconstruction prediction as the candidates. This transformation relies on the intuition that reconstruction can generate more semantically similar words than using mask-infilling. This method is used in BAE (Garg and Ramakrishnan, 2020).
## 2.3 Constraints
When an xori is perturbed by swapping some words in it, we need to use some constraints to check whether the perturbed sentence, x*swap*, is semantically or grammatically valid or not. We use x*swap* instead of xadv here as x*swap* does not necessarily flip the model's prediction and thus not necessarily an adversarial sample.
Word Embedding Cosine Similarity requires a word xi and its perturbed counterpart x
′
i to be close enough in the counter-fitted GloVe embedding space, in terms of cosine similarity. A substitution is valid if its word embedding's cosine similarity with the original word's embedding is higher than a pre-defined threshold. This is used in Genetic Algorithm Attack (Alzantot et al., 2018)
and TextFooler (Jin et al., 2020).
Sentence Embedding Cosine Similarity demands that the sentence embedding cosine similarity between x*swap* and xori are higher than a pre-defined threshold. Most previous works (Jin et al., 2020; Li et al., 2020; Garg and Ramakrishnan, 2020; Morris et al., 2020a) use Universal Sentence Encoder (USE) (Cer et al., 2018) as the sentence encoder; A2T (Yoo and Qi, 2021) use a DistilBERT (Sanh et al., 2019) fine-tuned on STS-B (Cer et al., 2017) as the sentence encoder.
In some previous work (Li et al., 2020), the sentence embedding is computed using the whole sentence xori and x*swap*. But most previous works (Jin et al., 2020; Garg and Ramakrishnan, 2020) only extract a context around the currently swapped word in xori and x*swap* to compute the sentence embedding. For example, if xiis substituted in the current substitution step, one will compute the sentence embedding between xori[i − w : i + w + 1]
and xadv[i − w : i + w + 1], where w determines the window size. w is set to 7 in Jin et al. (2020)
and Garg and Ramakrishnan (2020).
LanguageTool (language-tool python, 2022)
is an open-source grammar tool that can detect spelling errors and grammar mistakes in an input sentence. It is used in TextFooler-Adj (Morris et al.,
2020a) to evaluate the grammaticality of the adversarial samples.
## 3 Problems With The Transformations In Ssas
In this section, we show that the transformations introduced in Section 2.2 are largely to blame for the invalid adversarial samples in SSAs. This is because the substitution set Sxi for xiis mostly invalid, either semantically or grammatically.
## 3.1 Wordnet Synonym Substitution Set Ignores Word Senses
In WordNet, each word is associated with one or more word senses, and each word sense has its corresponding synonym sets. Thus, the substitution set Sxi proposed by WordNet is the union of the synonym sets of different senses of xi. When swapping xi with its synonym using WordNet, it is more sensible to first identify the word sense of xiin xori, and use the synonym set of the very sense as the substitution set. However, current attacks using WordNet synonym substitution neglect the sense differences within the substitution set (Ren et al.,
2019), which may result in adversarial samples that semantically deviate from the original input.
As a working example, consider a movie review that reads "I highly recommend it". The word "recommend" here corresponds to the word sense of
"*express a good opinion of* " according to WordNet and has the synonym set {recommend, commend}.
Aside from the above word sense, "recommend" also have another word sense: "push for something", as in "The travel agent recommends not to travel amid the pandemic". This second word sense has the synonym set {recommend, urge, advocate}2. Apparently, the only valid substitution is "commend", which preserves the semantics of the original movie review. While "urge" is the synonym of "recommend", it obviously does not fit in the context and should not be considered as a possible substitution. We call substituting xi with a synonym that matches the word sense of xiin xori a *matched sense substitution*, and we use *mismatched sense substitution* to refer to swapping words with the synonym which belongs to the synonym set of a different word sense.
## 3.1.1 Experiments
To illustrate that mismatched sense substitution is a problem existing in practical attack algorithms, we conduct the following analysis. We examine the adversarial samples generated by PWWS (Ren et al., 2019), which substitutes words using WordNet synonym set. We use a benchmark dataset (Yoo et al., 2022) that contains the adversarial samples generated by PWWS against a BERT-based classifier fine-tuned on AG-News (Zhang et al., 2015).
AG-News is a news topic classification dataset, which aims to classify a piece of news into four categories: world, sports, business, and sci/tech news. The attack success rate on the testing set composed of 7.6K samples is 57.25%. More statistics about the datasets can be found in Appendix B.
We categorize the words replaced by PWWS into three disjoint categories: matched sense substitution, *mismatched sense substitution*, and *morphological substitution*. The last category, morphological substitution, refers to substituting words with a word that only differs in inflectional morphemes3 or derivational morphemes4 with the original word.
We specifically isolate *morphological substitution* since it is hard to categorize it into either matched or mismatched sense substitution.
The detailed procedure of categorizing a replaced word's substitution type is as follows: Given a pair of (xori, xadv), we first use NLTK (Bird et al., 2009) to perform word sense disambiguation on each word xiin xori. We use LemmInflect and NLTK to generate the morphological substitution set MLxi of xi. The matched sense substitution set Mxi is constructed using the WordNet synonym set of the word sense of xiin xori; since this synonym set includes the original word xi and may also include some words in the MLxi
, we remove xi and words that are already included in the MLxi from the synonym set, forming the final matched sense substitution set, Mxi
. The mismatched sense substitution set MMxi is constructed by first collecting all synonyms of xithat belong to the different word sense(s) of xiin xori using WordNet, and then removing all words that have been included in MLxi and Mxi
.
After inspecting 4140 adversarial samples produced by PWWS, we find that among **26600** words that are swapped by PWWS, only **5398 (20.2%)**
words fall in the category of matched sense substitution. A majority of **20055 (75.4%)** word substitutions are mismatched sense substitutions, which should be considered invalid substitutions since using mismatched sense substitution cannot preserve the semantics of xori and makes xadv incomprehensible. Last, about **3.8%** of words are substituted with their morphological related words, such as converting the part of speech (POS) from verb to noun or changing the verb tense. These substitutions, while maintaining the semantics of the original sentence and perhaps human readable, are mostly ungrammatical and lead to unnatural adversarial samples. The aforementioned statistics illustrate that only about 20% word substitutions produced by PWWS are *real* synonym substitutions, and thus the high attack success rate of 57.25%
should not be surprising since most word replacements are highly questionable.
## 3.2 Counter-Fitted Embedding K**Nn And** Mlm Mask-Infilling/Reconstruction Contain Few Matched Sense Synonym
As shown in Section 3.1.1, even when using WordNet synonyms as the candidate sets, the proportion of the valid substitutions is unthinkably low. This makes us more concerned about the word substitution quality of the other three heuristic transformations introduced in Section 2.2. These three word substitution methods mostly rely on assumptions about the quality of the embedding space or the
| Transformations | Syn. (matched) | Syn. (mismatched) | Antonyms | Morphemes | Others |
|---------------------|------------------|---------------------|------------|-------------|----------|
| GloVe-kNN | 0.22 | 1.01 | 0 | 1.55 | 27.22 |
| BERT mask-infill | 0.08 | 0.36 | 0.06 | 0.57 | 28.93 |
| BERT reconstruction | 0.14 | 0.58 | 0.09 | 1.19 | 27.99 |
ability of the MLM and require setting a hyperparameter k for the size of the substitution set. To the best of our knowledge, no previous work has systematically studied what the candidate sets proposed by the three transformations are like; still, they have been widely used in SSAs.
## 3.2.1 Experiments
To understand what those substitution sets are like, we conduct the following experiment. We use the benchmark dataset generated by Yoo et al. (2022) that attacks 7.6k samples in the AG-News testing data using TextFooler. For each word xiin xori that is perturbed into another x
′
i in xadv, we use the following three transformations to obtain the candidate substitution set: counter-fitted GloVe embedding space, BERT mask-infilling, and BERT
reconstruction. 5 We only consider the substitution set of xithat are perturbed in xadv because not all words in xori will be perturbed by an SSA,
and it is thus more reasonable to consider only the words that are really perturbed by an SSA. We set the k in kNN of counter-fitted GloVe embedding space transformation and top-k prediction in BERT
mask-infilling/reconstruction to 30, a reasonable number compared with many previous works.
We categorize the candidate words into five disjoint word substitution types. Aside from the three word substitution types discussed in Section 3.1.1, we include two other substitution types. The first one is *antonym substitution*, which is obtained by querying the antonyms of a word xi using WordNet.
Different from synonym substitutions, we do not separate antonyms into antonyms that matched the word sense of xiin xori and the sense-mismatched antonyms, since neither of them should be considered a valid swap in SSAs. The other substitution type is *others*, which simply consists of the candidate words not falling in the category of synonyms, antonyms, or morphological substitutions.
In Table 1, we show how different substitution types comprise the 30 words in the candidate set 5For BERT mask-infilling and reconstruction substitution, we remove punctuation and incomplete subword tokens.
for different transformations on average. It is easy to tell that only a slight proportion of the substitution set is made up of synonym substitution for all three transformation methods, with counter-fitted GloVe embedding substitution containing the most synonyms among the three methods, but still only a sprinkle of about 1 word on average. Moreover, synonym substitution is mostly composed of mismatched sense substitution. When using BERT
mask-infilling as a transformation, there are only 0.08 matched sense substitutions in the top 30 predictions. While using BERT reconstruction for producing the candidate set, the matched sense substitution slightly increases, compared with maskinfilling, it still only accounts for less than 1 word in the top-30 reconstruction predictions of BERT.
Within the substitution set, there is on average about 1 word which is the morphological substitution of the original word. Surprisingly, using MLM
mask-infilling or reconstruction as transformation, there is a slight chance that the candidate set consists of antonyms of the original word. It is highly doubtful whether the semantics is preserved when swapping the original sentence with antonyms.
The vast majority of the substitution set composes of words that do not fall into the previous four categories. We provide examples of how the substitution sets proposed by different transformations are like in Table 6 in the Appendix, showing that the candidate words in the *others* substitution types are mostly unrelated words that should not be used for word replacement. It is understandable that words falling to the *other* substitution types are invalid candidates; this is because the core of SSAs is to replace words with their semantically close counterparts to preserve the semantics of the original sentence. If a substitution word does not belong to the synonym set proposed by WordNet, it is unlikely that swapping the original word with this word can preserve the semantics of xori.
We also show some randomly selected adversarial samples generated by different SSAs that use different transformations in Table 5 in the Appendix, which also show that when a word substitution is not a synonym nor a morphological swap, there is a high chance that it is semantically invalid. Hauser et al. (2021) uses human evaluation to show that the adversarial samples generated from TextFooler, BERT-Attack, and BAE do not preserve the meaning of xori, which also backs up our statement.
When decreasing the number of k, the number of invalid substitution words may possibly be reduced. However, a smaller k often leads to lower attack success rates, as shown in Li et al. (2020), so it is not very common to use a smaller k to ensure the validity of the words in the candidate sets. In practical attacks, whether these words in the candidate sets can be considered valid depends on the constraints. But can those constraints really filter invalid substitutions? We show in the next section that, sadly, the answer is no.
## 4 Problems With The Constraints In Ssas
In this section, we show that the constraints commonly used in SSAs cannot fully filter invalid word substitutions proposed by the transformations.
## 4.1 Word Embedding Similarity Cannot Distinguish Valid/Invalid Swaps Well
Setting a threshold on word embedding cosine similarity to filter invalid word substitutions relies on the hypothesis that valid word swaps indeed have higher cosine similarity with the word to be substituted, compared with invalid word replacements. We investigate whether the hypothesis holds with the following experiment. We reuse the 7.6K AG-News testing samples attacked by TextFooler used in Section 3.2, and we gather all pairs of (xori, xadv). For each word xiin xori that is perturbed in xadv, we follow the same procedure in Section 3.2 to obtain the morphological substitution set, matched sense substitution set, mismatched sense substitution set, and the antonym set.
We then query the counter-fitted GloVe embedding space to obtain the word embeddings of all those words and calculate their cosine similarity with the word embedding of xi. As a random baseline, we also randomly sample high-frequency words and low-frequency words in the training dataset of AGNews, and compute the cosine similarity between those words and xi. How these high-frequency and low-frequency words are sampled is detailed in Appendix D.2.
To quantify how hard it is to use the word em-
| Substitution Type | AUPR |
|-----------------------|--------|
| Synonyms (mismatched) | 0.627 |
| Antonym | 0.980 |
| Morpheme | 0.433 |
| Random high-freq | 0.900 |
| Random low-freq | 0.919 |
![5_image_0.png](5_image_0.png)
bedding cosine similarity to distinguish a valid substitution (the matched sense substitution) from another type of invalid substitution, we calculate the area under the precision-recall curve (AUPR) of the threshold-based detector that predicts whether a perturbed x
′
i is a valid substitution based on its cosine similarity with xi. Given an xi and a perturbed x
′
i
, a threshold-based detector measures the word embedding cosine similarity between xi and x
′
i
, and assigns it as positive (valid substitution) if the cosine similarity is higher than the threshold.
A perfect detector should have an AUPR of 1.0, while a random detector will have an AUPR of 0.5.
Note that the detector we discuss here will only be presented with two types of substitution, one is the matched sense substitution and the other is a substitution type other than the matched sense substitution.
We show the AUPR in Table 2. First, we notice that when using the word embedding cosine similarity to distinguish matched sense substitutions from mismatched ones, the AUPR is as low as 0.627. While this is better than random, this is far from a useful detector, showing that word embedding cosine similarity constraints are not useful to remove invalid substitutions like unmatched sense words. The AUPR for morpheme substitutions is even lower than 0.5, implying that the word embedding cosine similarity between xi and its morphological similar words is higher than the similarity score between matched sense synonyms. This means that when we set a higher cosine similarity threshold, we are keeping more morphological swaps instead of valid matched sense substitutions.
While morphological substitutions have meanings similar to or related to the original word, as we previously argued, they are mostly ungrammatical.
The AUPR when using a threshold-based detector to separate matched sense substitutions from antonym substitutions is almost perfect, which is 0.980. This should not be surprising since the counter-fitted word embedding is designed to make synonyms and antonyms have dissimilar word embeddings. Last, the AUPR of separating random substitutions from matched sense substitutions is also high, meaning that it is possible to use a detector to remove random and unrelated substitutions based on word embedding cosine similarity. Based on the result in Table 2, setting a threshold on wordembedding cosine similarity may only filter out the antonyms and random substitutions but still fails to remove the other types of invalid substitutions.
## 4.2 **Sentence Encoder Is Insensitive To Invalid** Word Substitutions
To test if sentence encoders really can filter invalid word substitutions in SSA, we conduct the following experiment. We use the same attacked AG-News samples that were used in Section 3.2.1.
For each pair of (xori, xadv) in that dataset, we first collect the swapped indices set I = {i|xi ̸= x
′
i}
that represents the positions of the swapped words in xadv. We shuffle the elements in I to form an ordered list O. Using xori and O, we construct a sentence x n swap by swapping n words in xori. The n positions where the substitutions are made in x n swap are the first n elements in the ordered list O;
at each substitution position, the word is replaced by a word randomly selected from a type of candidate word set. All the n replaced words in x n swap are the same type of word substitution. We conduct experiments with six types of candidate word substitution sets: matched sense, mismatched sense, morphological, antonym, random high-frequency, and random low-frequency word substitutions. After obtaining x n swap, we compute the cosine similarity between the sentence embedding between x n swap with xori using USE and set the window size w to 7, following Jin et al. (2020) and Garg and Ramakrishnan (2020). We vary the number of replaced words from 1 to 10.6 This experiment helps us know how the cosine similarity changes when the words are swapped using different types of candidate word sets. More details on this experiment are in Appendix D.3 and Figure 2 in the Appendix.
The results are shown in Figure 1. While replacing more words in xori does decrease its cosine similarity with xori, the cosine similarity when substituting random high-frequency words is still 6Attacking AG-News using TextFooler perturbs about 9 out of 38.6 words in a benign sample on average.
![6_image_0.png](6_image_0.png)
roughly higher than 0.80. Considering that practical SSAs often set the cosine similarity threshold to around 0.85 or even lower7, depending on the SSAs and datasets, it is suspicious whether the constraint and threshold can really filter invalid word substitution. We can also observe that when substituting words with antonyms, the sentence embedding cosine similarity with the original sentence closely follows the trend of substituting words using a synonym, regardless of whether the synonym substitution matches the word sense or not. Recalling that we have revealed that the candidate set proposed by BERT can contain antonyms in Table 1, the results here indicates that sentence embedding similarity constraint cannot filter this type of faulty word substitution. For the two different types of synonym substitutions, only matched sense substitutions are valid replacement that follows the semantics of the original sentence. However, the sentence embedding of xori and the sentence embedding of the two types of different synonym substitutions are equally similar. The highest cosine similarity is obtained when the words in xori are swapped using their morphological substitutions, and this is expected since morphological substitutions merely change the semantics.
In Figure 1, we only show the average cosine similarity and do not show the variance of the cosine similarity of each substitution type. In Figure 3 in the Appendix, we show the distribution of the cosine similarity of different substitution types.
The main observation from Figure 3 is that the cosine similarity distributions of different substitution types (for the same n) are highly overlapped, and it is impossible to distinguish valid word swaps from 7We include the sentence embedding cosine similarity threshold of prior works in Table 4 in Appendix C.
the invalid ones simply by using a threshold on the sentence embedding cosine similarity.
Overall, the results in Figure 1 demonstrate that USE tends to generate similar sentence embeddings when two sentences only differ in a few tokens, no matter whether the replacements change the sentence meaning or not. While we only show the result of USE, we show in Appendix E that different sentence encoders have similar behavior.
Moreover, when we use the whole sentence instead of a windowed subsentence to calculate the sentence embedding, the cosine similarity is even higher than that shown in Figure 1, as shown in Appendix E. Again, these sentence encoders fail to separate invalid word substitutions from valid ones.
While frustrating, this result should not be surprising, since most sentence encoders are not trained to distinguish sentences with high word overlapping.
## 4.3 Languagetool Cannot Detect False Verb Inflectional Form
LanguageTool is used in TextFooler-Adj (TFAdj) (Morris et al., 2020a) to prevent the attack to induce grammar errors. TF-Adj also uses stricter word embedding and sentence embedding cosine similarity constraints to ensure the semantics in xori are preserved in xadv. However, when browsing through the adversarial samples generated by TF-Adj, we observe that the word substitutions made by TF-Adj are often ungrammatical morphological swaps that convert a verb's inflectional form.
This indicates that LanguageTool may not be capable of detecting a verb's inflectional form error.
To verify this hypothesis, we conduct the following experiment. For each sample in the test set of AG-News that LanguageTool reports no grammatical errors, we convert the inflectional form of the verbs in the sample by a hand-craft rule that will always make a grammatical sentence ungrammatical; this rule is listed in Appendix D.4. We then use LanguageTool to detect how many grammar errors are there in the verb-converted sentences.
We summarize the experiment results as follows.
For the 1039 grammatical sentences in AG-News, the previous procedure perturbed **4.37** verbs on average. However, the average number of grammar errors identified by LanguageTool is **0.97**, meaning that LanguageTool cannot detect all incorrect verb forms. By this simple experiment and the results from Table 2 and Figure 1, we can understand why the attack results of TF-Adj are often ungrammatical morphological substitutions: higher cosine similarity constraints prefer morphological substitutions, but those often ungrammatical substitutions cannot be detected by LanguageTool. Thus, aside from showing that the text classifier trained on AG-News is susceptible to inflectional perturbations, TF-Adj actually exposes that LanguageTool itself is vulnerable to inflectional perturbations.
## 5 Related Works
Some prior works also discuss a similar question that we study in this paper. Morris et al.
(2020a) uses human evaluation to reveal that SSAs sometimes produce low-quality adversarial samples. They attribute this to the insufficiency of the constraints and use stricter constraints and LanguageTool to generate better adversarial samples.
Our work further points out that the problem is not only in the constraints; we show that the transformations are the fundamental problems in SSAs.
We further show that LanguageTool used by Morris et al. (2020a) cannot detect ungrammatical verb inflectional forms, and reveal that the adversarial samples generated by TF-Adj exploit the weakness of LanguageTool and are often made up of ungrammatical morphological substitutions. Hauser et al.
(2021) uses human evaluations and probabilistic statements to show that SSAs are low quality and do not preserve original semantics. Our work can be seen as an attempt to understand the cause of the observations in Hauser et al. (2021).
Morris (2020) also questions the validity of using sentence encoders as semantic constraints.
They attack sentence encoders by swapping words in a sentence with their antonyms and the attack goal is to maximally preserve the swapped sentence's sentence embedding cosine similarity with the original sentence. This is related to our experiments in Section 4.2. The main differences between our experiments and theirs are: (1) When swapping words, we only swap the words that are really swapped by TextFooler; on the contrary, the words swapped in Morris (2020) are not necessarily words that are actually substituted in an SSA. The words swapped when attacking a sentence encoder and attacking a text classifier can be significantly different. Since our goal is to verify how sentence encoders behave when used *in SSAs*, it makes more sense to only swap the words that are really replaced by an SSA. (2) Morris (2020) only uses antonyms for word substitution.
## 6 Discussion And Conclusion
This paper discusses how the elements in SSAs lead to invalid adversarial samples. We highlight that the candidate word sets generated by all four different word substitution methods contain only a small fraction of semantically matched and grammatically correct word replacements. While these transformations produce inappropriate candidate words, this alone will not contribute to the invalid adversarial samples. The inferiority of those adversarial samples should be largely attributed to the deficiency of the constraints that ought to guarantee the quality of the perturbed sentences: word embedding cosine similarity is not always larger for valid word substitutions, sentence encoder is insensitive to invalid word swaps, and LanguageTool fails to detect grammar mistakes. These altogether bring about the adversarial samples that are human distinguishable, unreasonable, and mostly inexplicable.
These adversarial samples are not suitable for evaluating the vulnerability of NLP models because they are not reasonable inputs.
The results and observations shown in the main content of our paper are not unique for BERT finetuned on AG-News, which is the only attacked model shown in Section 3 and Section 4. We include supplementary analyses in Appendix F for different model types and datasets, which supports all the claims and observations in the main contents. In this paper, we follow previous papers on SSAs to only show the result of attacking the victim model once and not reporting the performance variance due to random seed and hyperparameters used during the fine-tuning of victim model (Ren et al., 2019; Li et al., 2020; Jin et al., 2020). This is because conducting SSA is very time-consuming.
In our preliminary experiments, we used TextAttack to attack three BERT models fine-tuned on AG-News and we crafted the adversarial samples for 100 samples in the testing data for each model The three models were fine-tuned with three different sets of hyperparameters. We find that our observation in Section 3.2 and Section 4 do not change for the three victim models. Overall, the observation shown in the paper is not an exception but rather a general phenomenon in SSAs.
By the analyses in the paper, we show that we may still be far away from *real* SSAs, and how to construct valid synonym substitution adversarial samples remains an unresolved problem in NLP.
While there is still a long way to go, it is essential to recognize that the prior works have contributed significantly to constructing valid SSAs. Although prior SSAs may not always produce reasonable adversarial samples, they are still valuable since they pave the way for designing better SSAs and help us uncover the inadequacy of the transformations and constraints for constructing *real* synonym substitution adversarial samples. As an initiative to stimulate future research, we provide some possible directions and guidelines for constructing better SSAs, based on the observation in our paper.
1. Simply consider the word senses when making a replacement with WordNet.
2. Use better sentence encoders that are sensitive to token replacements that change the semantics of the original sentence. For example, DiffCSE (Chuang et al., 2022) is shown to be able to distinguish the tiny differences between sentences.
3. When designing transformations, one should always verify the validity of the proposed method through well-controlled experiments.
These experiments include recruiting human evaluators to check the quality of the transformations or using experiments as in Section 3 to check what the candidate sets proposed by the transformations are like. It is perilous to solely rely on heuristics or black-box models such as sentence encoders to guarantee the quality of the transformation.
4. Since the sentences crafted by SSAs may largely deviate from normal sentences, one should test if constraint models, e.g., grammar checkers or sentence encoders, work as expected when faced with those abnormal sentences. For example, one can perform stress tests (Ribeiro et al., 2020) to test the behavior of the constraint models. This prevents us from exploiting the vulnerability of the constraints when attacking the text classifier.
The problems outlined in this paper may be familiar to those with experience in lexical substitution (Melamud et al., 2015; Zhou et al., 2019),
but they have not yet been widely recognized in the field of SSAs. Our findings on why SSAs fail can serve as a reality check for the field, which has been hindered by overestimating prior SSAs.
We hope our work will guide future researchers in cautiously building more effective SSAs.
## Limitations
In this paper, we only discuss the SSAs in English, as this has been the most predominantly studied in adversarial attacks in NLP. The authors are not sure whether SSAs in a different language will suffer from the shortcomings discussed in this paper.
However, if an SSA in a non-English language uses the transformations or constraints discussed in this paper, there is a high chance that this attack will produce low-quality results for the same reason shown in this paper. Still, the above claim needs to be verified by extensive human evaluation and further language-specific analyses.
In our paper, we use WordNet as the gold standard of the word senses since WordNet is a widely adopted and accepted tool in the NLP community.
Chances are that some annotations in WordNet, while very scarce, are not perfect, and this may be a possible limitation of our work. It is also possible that the matched sense synonyms found by WordNet may not always be a valid substitution even if the annotation of WordNet is perfect. For example, the collocating words of the substituted word may not match that of the original word, and the substitution word may not fit in the original context. However, if a word is not even a synonym, it is more unlikely that it is a valid substitution.
Thus, being a synonym in WordNet is a minimum requirement and we use WordNet synonym sets to evaluate the validity of a word substitution.
Last, we do not conduct human evaluations on what the *other substitution types* in Table 1 are. As stated in Section 3.2.1, while we do not perform human evaluations on this, the readers can browse through Table 6 in the Appendix to see what the others substitutions are. It will be interesting to see what human evaluators think about the *other* substitutions in the future.
## Ethics Statement And Broader Impacts
The goal of our paper is to highlight the overlooked details in SSAs that cause their failures. By mitigating the problems pointed out in our paper, there are two possible consequences:
1. One may find that there exist no *real* synonym substitution adversarial samples, and the NLP
models currently used are robust. This will cause no ethical concerns since this indicates that no harm will be caused by our work. Previous observations on the vulnerability are just
the product of low-quality adversarial samples.
2. There exists *real* synonym substitution adversarial samples, and excluding the issues mentioned in this paper will help malicious users easier to find those adversarial samples. This will become a potential risk in the future. The best way to mitigate the above issue is to construct new defenses for *real* SSAs.
While our goal is to raise attention to whether SSAs are really SSAs, we are not advocating malicious users to attack text classifiers using better SSAs.
Instead, we would like to highlight that there is still an unknown risk, the *real* SSAs, against text classifiers, and we researchers should devote more to studying this topic and developing defenses against such attacks before they are adopted by adversarial users.
Another major ethical consideration in our paper is that we challenge prior works on the quality of the SSAs. While we reveal the shortcomings of previously proposed methods, we still highly acknowledge their contributions. As emphasized in Section 6, we do not and try not to devalue those works in the past. We scientifically and objectively discuss the possible risks of those transformations and constraints, and our ultimate goal is to push the research in adversarial attacks in NLP a step forward; from this perspective, we believe that we are in common with prior works.
## Acknowledgements
We thank the reviewers for their valuable feedback and actionable suggestions. We've made major revisions based on the reviews and we list the main modification in Appendix A. Cheng-Han Chiang is supported by a Ph.D. scholarship program by Delta Electronics.
## References
Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang.
2018. Generating natural language adversarial examples. In *Proceedings of the 2018 Conference on* Empirical Methods in Natural Language Processing, pages 2890–2896, Brussels, Belgium. Association for Computational Linguistics.
Steven Bird, Ewan Klein, and Edward Loper. 2009. *Natural language processing with Python: analyzing text* with the natural language toolkit. " O'Reilly Media, Inc.".
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo LopezGazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In *Proceedings of* the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1–14.
Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder. *arXiv* preprint arXiv:1803.11175.
Yung-Sung Chuang, Rumen Dangovski, Hongyin Luo, Yang Zhang, Shiyu Chang, Marin Soljaciˇ c, Shang- ´
Wen Li, Wen-tau Yih, Yoon Kim, and James Glass. 2022. Diffcse: Difference-based contrastive learning for sentence embeddings. arXiv preprint arXiv:2204.10298.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of the* North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–
4186.
Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yanjun Qi. 2018. Black-box generation of adversarial text sequences to evade deep learning classifiers. In 2018 IEEE Security and Privacy Workshops (SPW), pages 50–56. IEEE.
Siddhant Garg and Goutham Ramakrishnan. 2020.
BAE: BERT-based adversarial examples for text classification. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing
(EMNLP), pages 6174–6181, Online. Association for Computational Linguistics.
Jens Hauser, Zhao Meng, Damián Pascual, and Roger Wattenhofer. 2021. Bert is robust! a case against synonym-based adversarial examples in text classification. *arXiv preprint arXiv:2109.07403*.
Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is bert really robust? a strong baseline for natural language attack on text classification and entailment. In *Proceedings of the AAAI conference on artificial intelligence*, volume 34, pages 8018–8025.
language-tool python. 2022. language_tool_python: a grammar checker for python.
Dianqi Li, Yizhe Zhang, Hao Peng, Liqun Chen, Chris Brockett, Ming-Ting Sun, and Bill Dolan. 2021. Contextualized perturbation for textual adversarial attack.
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5053–5069, Online. Association for Computational Linguistics.
Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, and Xipeng Qiu. 2020. Bert-attack: Adversarial attack against bert using bert. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6193–6202.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts.
2011. Learning word vectors for sentiment analysis.
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics.
Rishabh Maheshwary, Saket Maheshwary, and Vikram Pudi. 2021. A strong baseline for query efficient attacks in a black box setting. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 8396–8409.
Oren Melamud, Omer Levy, and Ido Dagan. 2015. A
simple word embedding model for lexical substitution. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 1–7, Denver, Colorado. Association for Computational Linguistics.
George A Miller. 1995. Wordnet: a lexical database for english. *Communications of the ACM*, 38(11):39–41.
John Morris. 2020. Second-order nlp adversarial examples. In *Proceedings of the Third BlackboxNLP*
Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 228–237.
John Morris, Eli Lifland, Jack Lanchantin, Yangfeng Ji, and Yanjun Qi. 2020a. Reevaluating adversarial examples in natural language. In *Findings of the Association for Computational Linguistics: EMNLP 2020*,
pages 3829–3839, Online. Association for Computational Linguistics.
John Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. 2020b. Textattack: A framework for adversarial attacks, data augmentation, and adversarial training in nlp. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 119–126.
Nikola Mrkšic, Diarmuid Ó Séaghdha, Blaise Thomson, ´
Milica Gasic, Lina M Rojas Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young.
2016. Counter-fitting word vectors to linguistic constraints. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 142–148.
Jeffrey Pennington, Richard Socher, and Christopher D
Manning. 2014. Glove: Global vectors for word representation. In *Proceedings of the 2014 conference* on empirical methods in natural language processing
(EMNLP), pages 1532–1543.
Nils Reimers and Iryna Gurevych. 2019. Sentence-bert:
Sentence embeddings using siamese bert-networks.
In *Proceedings of the 2019 Conference on Empirical* Methods in Natural Language Processing. Association for Computational Linguistics.
Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che.
2019. Generating natural language adversarial examples through probability weighted word saliency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1085–
1097, Florence, Italy. Association for Computational Linguistics.
Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Behavioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4902–
4912, Online. Association for Computational Linguistics.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. *arXiv* preprint arXiv:1910.01108.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank.
In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics.
Samson Tan, Shafiq Joty, Min-Yen Kan, and Richard Socher. 2020. It's morphin'time! combating linguistic discrimination with inflectional perturbations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2920–
2935.
Princeton University. 2010. About wordnet. *WordNet*.
Jingjing Xu, Liang Zhao, Hanqi Yan, Qi Zeng, Yun Liang, and Xu Sun. 2019. LexicalAT: Lexical-based adversarial reinforcement training for robust sentiment classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 5518–5527, Hong Kong, China. Association for Computational Linguistics.
Jin Yong Yoo and Yanjun Qi. 2021. Towards improving adversarial training of nlp models. In Findings of the Association for Computational Linguistics: EMNLP
2021, pages 945–956.
KiYoon Yoo, Jangho Kim, Jiho Jang, and Nojun Kwak.
2022. Detection of adversarial examples in text classification: Benchmark and baseline via robust density estimation. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 3656–3672, Dublin, Ireland. Association for Computational Linguistics.
Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015.
Character-level convolutional networks for text classification. In *NIPS*.
Wangchunshu Zhou, Tao Ge, Ke Xu, Furu Wei, and Ming Zhou. 2019. BERT-based lexical substitution.
In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 3368–
3373, Florence, Italy. Association for Computational Linguistics.
## A Different From The Pre-Review Version
We list the main difference between this version and the pre-review version of our paper (the prereview version is similar to the previous arXiv version). Most modifications are made based on the reviewers' suggestions. We thank the reviewers for their valuable feedback that help us polish and strengthen this paper.
- We change how we present our result in Section 3.2 from a bar chart to a table for easier interpretation.
- We largely reformulate Section 4.1. We change how we present the experiment results: in the previous version, we only qualitatively plot the distribution of the word embedding cosine similarity of different substitution types. In this version, we adopt the reviewers' suggestion to quantitatively show that some types of invalid substitutions cannot be easily detected by the word embedding cosine similarity. We also correct the result of antonym substitutions.
- We add Section 5 to discuss relevant works.
- We discuss the performance variance due to different fine-tuning hyperparameters and random seeds in Section 6.
- We add the links to the victim text classifiers in Appendix B.
- We remove the FAQ section in the Appendix, which is mainly used for rebuttal.
- In this revision, we incorporate some of the answers to the reviewers' questions in the rebuttal.
## B Dataset
In our paper, we use benchmark adversarial datasets generated by Yoo et al. (2022). Yoo et al. (2022) generates adversarial samples using the TextAttack (Morris et al., 2020b) module. Yoo and Qi (2021) release the dataset with a view to facilitating the detection of adversarial samples in NLP and reducing the redundant computation resources to re-generate adversarial samples. They thus generate adversarial samples using PWWS (Ren et al., 2019), TextAttack (Jin et al.,
2020), BAE (Garg and Ramakrishnan, 2020) and TextFooler-Adj (Morris et al., 2020a) on LSTM,
CNN, BERT, and RoBERTa trained/fine-tuned on SST-2 (Socher et al., 2013), IMDB (Maas et al.,
2011), and AG-News (Zhang et al., 2015).
In the main content of our paper, we only use two datasets: the adversarial samples obtained using PWWS to attack BERT fine-tuned on AG-News, and the adversarial samples obtained by attacking TextFooler on BERT fine-tuned on AG-News. The testing set of AG-News contains 7.6K samples; the adversarial samples obtained by attacking these datasets will be less than 7.6K since the attack success rates of the two SSAs are not 100%. We summarize the detail of these two datasets in Table 3.
The models they used as victim model to generate classifiers are the fine-tuned by the TextAttack (Morris et al., 2020b) toolkit and are publicly available at https://textattack.readthedocs. io/en/latest/3recipes/models.html and Huggingface models. For example, the BERT finetuned on AG-News is at https://huggingface.
co/textattack/bert-base-uncased-ag-news.
The hyperparameters used to fine-tune those models can be found from the model cards and config.json and we do not list them here to save the space.
## C Synonym Substitution Attacks
We list the transformations and constraints of the SSAs that are discussed or mentioned in our paper in Table 4. We only include the semantic and grammaticality constraints in Table 4 and omit other constraints such as the word-level overlap constraints.
The "window" in the sentence encoder cosine similarity constraint indicates whether use a window around the current substitution word or use the whole sentence. The "compare with xori" indicates that x n swap will be compared against the sentence embedding of xori, and "compared with x n−1 swap" means that x n swap will be compared against the sentence embedding of x n−1 swap, that is, the sentence before the current substitution step.
## C.1 Random Adversarial Samples
To illustrate that the adversarial samples generated by SSAs are largely made up of invalid word replacements, we randomly sample two adversarial samples generated by PWWS (Ren et al., 2019),
TextFooler (Jin et al., 2020), BAE (Garg and Ramakrishnan, 2020), and TextFooler-Adj (Morris et al., 2020a). To avoid the suspicion of cherrypicking the adversarial samples to support our claims, we simply select the first and the last successfully attacked samples in AG-News using the four SSAs in the dataset generated by Yoo et al.
(2022). Since the dataset is not generated by us, we cannot control which sample is the first one and which sample is the last one in the dataset, meaning that we will not be able to cherry-pick the adversarial samples that support our claims.
The adversarial samples are listed in Table 5.
The blue words in xori are the words that will be perturbed in xadv. The red words are the swapped words. The readers can verify the claims in our paper using those adversarial samples. We recap some of our claims as follows:
- PWWS uses mismatched sense substitution:
This can be observed in all the word substitutions of PWWS in Table 5. For example, the word "world" in the second example of PWWS have the word sense "the 3rd planet from the sun; the planet we live on". But it is swapped with the word "cosmos", which is the synonym of the word sense "everything that exists anywhere".
- Counter-fitted embedding substitution set contains a large proportion of *others* substitution types, which are mostly invalid: This can be observed in literally all word substitutions in TextFooler.
- BERT reconstruction substitution set contains a large proportion of *others* substitution types, which are mostly invalid: This can be observed in literally all word substitutions in BAE.
- Morphological substitutions are mostly ungrammatical: This can be observed in the first adversarial sample of TextFooler-Adj.
| PWWS | TextFooler | |
|------------------------------------|--------------|--------|
| Success attacks | 4140 | 5885 |
| Attack success rate | 57.25% | 81.39% |
| Average words per sample | 38.57 | 38.57 |
| Average perturbed words percentage | 17.63% | 23.38% |
- TextFooler-Adj prefers morphological swap due to its strict constraints: This can be observe in almost all substitutions in TextFoolerAdj, excluding goods→wares.
## C.1.1 Example Of The Word Substitution Sets Of Different Transformations
In this section, we show the substitution sets using different transformations. We only show one example here, and this example is the second successful attack example in the adversarial sample datasets (Yoo et al., 2022) that attacks a BERT
fine-tuned classifier trained on AG-News using TextFooler. We do not use the first sample in Table 5 because we would like to show the readers a different adversarial sample in the datasets.
xori: The Race is On: Second Private Team Sets Launch Date for Human Spaceflight (SPACE.com)
SPACE.com - TORONTO, Canada - A second team of rocketeers competing for the \#36;10 million Ansari X Prize, a contest for privately funded suborbital space flight, has officially announced the first launch date for its manned rocket.
xadv: The Race is Around: Second Privy Remit Set Lanza Timeline for Hummanitarian Spaceflight (SEPARATION.com) SEPARATION.com -
CANADIENS, Countries - para second squad of rocketeers suitors for the \#36;10 billion Ansari X
Nobel, a contestant for convertly championed suborbital spaceship plane, had solemnly proclaim the first began timeline for its desolate bomb.
We show the substitution set for the first four words that are substituted by TextFooler in Table 6. We do not show that substitution set for all the attacked words simply because it will occupy too much space, and our claim in the main content that "*others* substitution sets of counterfitted embedding substitution and BERT maskinfilling/reconstruction mostly consist of invalid swaps" can already be observed in Table 6.
## D Implementation Details D.1 Experiment Details Of Section 3
In this section, we give details on how we obtain different word substitution types for a xori. The whole process is summarized in Algorithm 1. In Algorithm 1, the reader can also find how the perturbed indices list I used in Section 4.2 is obtained.
An important detail that is not mentioned in the main content is that when computing how many synonyms are in the substitution set of BERT MLM substitution, we actually perform lemmatization on the top-30 predictions of BERT. This is because, for example, if BERT proposes to use the word
"defines" to replace the original word "sets" (the third person present tense of the verb "set") in the original sentence, and the word "define" happens to a synonym according to WordNet; in this case, the word "defines" will not be considered as a synonym substitution. But "defines" should be considered as a synonym substitution since it is the third person present tense of "define". Lemmatizing the prediction of BERT can partially solve the problem. However, if the lemmatized word is already in the top-30 prediction of BERT, we do not perform lemmatization. This process is detailed on Line 6 on Algorithm 2. This can ensure that words can be considered as synonyms while words that should be considered as morphological swaps are mostly not affected.
## D.2 Experiment Details Of Section 4.1
Here, we explain how the random high/lowfrequency words are sampled in Section 4.1. First, we use the tokenizer of BERT-base-uncased to tokenize all the samples in the training dataset of AG-News. Next, we count the occurrence of each token in the vocabulary of the BERT-base-uncased, and sort the tokens based on their occurrence in the training set in descending order. The vocabulary size of BERT-base-uncased is 30522, including five special tokens, some subword tokens, and some unused tokens. We define the high-frequency
| Attack | Transformation | Constraints | | |
|---------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------|-----------------------------------------------|------|----------------------|
| Genenetic Algorithm Attack (Alzantot et al., 2018) | Word embedding mean square error distance with threshold 0.5; language model perplexity (as a grammaticality constraint) | | | |
| PWWS (Ren et al., | WordNet | synonym | None | |
| 2019) | set substitution Counter-fitted GloVe embedding kNN substitution with k = 8 | | | |
| TextFooler (Jin et al., | Counter-fitted GloVe | | | |
| 2020) | embedding kNN substitution with k = 50 | USE sentence embedding cosine similarity with threshold 0.878, window size w = 7, compare with xori; word embedding cosine similarity with threshold 0.5; disallow swapping words with different POS but allow swapping verbs with nouns or the reverse | | |
| BERT-Attack | (Li | BERT mask-infilling | | |
| et al., 2020) | substitution with k = 48 | Sentence embedding cosine similarity with different thresholds for different dataset, and the highest threshold is 0.7, no window, compare with xori | | |
| BAE (Garg and Ramakrishnan, 2020) | BERT reconstruction | USE sentence embedding cosine similarity with | | |
| substitution | threshold 0.936, window size w = 7, compare with x n−1 swap | | | |
| TextFoolerAdj (Morris | et | al., | | |
| 2020a) | USE sentence embedding cosine similarity with threshold 0.98, window size w = 7, compare with xori; word embedding cosine similarity with threshold 0.9; disallow swapping words with different POS but allow swapping verbs with nouns or the reverse; adversarial sample should not introduce new grammar errors, checked by LanguageTool | | | |
| A2T | (Yoo | and | Qi, | Counter-fitted GloVe |
| 2021) | embedding kNN substitution with k = 20 or BERT reconstruction with k = 20 Counter-fitted GloVe embedding kNN substitution with k = 50 | Word embedding cosine similarity with threshold 0.8; DistilBERT fine-tuned on STS-B sentence embedding cosine similarity with threshold 0.9, window size w = 7, compare with xori; disallow swapping words with different POS | | |
| CLARE | (Li | et | al., | DistilRoBERTa |
| 2021) | mask-infilling | sub | | |
| stitution, instead of using top-k, they select the predictions whose probability is larger than 5 × 10−3 ; this set contains 42 tokens on average | USE sentence embedding cosine similarity with threshold 0.7, window size w = 7, compare with xori | | | |
| Table 4: Detailed transformations and constraints of different SSAs mentioned in our paper. | | | | |
| Attack | xori | xadv | | | | | |
|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------|----------|----------|----|----------|
| PWWS | Ky. Company Wins Grant to Study Peptides (AP) AP - A company founded by a chemistry researcher at the University of Louisville won a grant to develop a method of producing better peptides, which are short chains of amino acids, the building blocks of proteins. | Ky. | Company profits yield to bailiwick | | | | |
| Peptides (AP) AP - amp company founded by a chemistry researcher at the University of Louisville won a grant to develop a method of producing better peptides, which are short chains of amino acids, the building blocks of proteins. | | | | | | | |
| PWWS | Around | the world | Ukrainian presiden | | | | |
| tial | candidate | Viktor | Yushchenko | was | | | |
| poisoned with the most harmful known dioxin, which is contained in Agent Orange, a scientist who analyzed his blood said Friday. | Around the cosmos Ukrainian presidential candidate Viktor Yushchenko was poisoned with the most harmful known dioxin, which is contained in Agent Orange, a scientist who analyzed his lineage said Friday. | | | | | | |
| TextFooler | Fears for T N pension after talks Unions representing workers at Turner Newall say they are 'disappointed' after talks with stricken parent firm Federal Mogul. | Fears for T percent pension after debate Syndicates portrayal worker at Turner Newall say they are 'disappointed' after chatter with bereaved parenting corporations Canada Mogul. | | | | | |
| TextFooler | 5 of arthritis patients in Singapore take Bextra or Celebrex < b>...</b> SINGAPORE : Doctors in the United States have warned that painkillers Bextra and Celebrex may be linked to major cardiovascular problems and should not be prescribed. | 5 | of | bursitis | patients | in | Malaysia |
| taken | Bextra | or | Celebrex | | | | |
| <seconds>...&lieutenants;/iii> SINGAPORE : Medecine in the United Nations get reminding that sedatives Bextra and Celebrex may pose link to enormous cardiovascular woes and planned not be planned. | | | | | | | |
| BAE | Fears for T N pension after talks Unions representing workers at Turner Newall say they are 'disappointed' after talks with stricken parent firm Federal Mogul. | Fears for T pl pension after talks Unions representing workers at Turner network say they are 'disappointed' after talks with stricken parent firm Federal Mogul. | | | | | |
| BAE | 5 of arthritis patients in Singapore take Bextra or Celebrex <b>...</b> SINGAPORE : Doctors in the United States have warned that painkillers Bextra and Celebrex may be linked to major cardiovascular problems and should not be prescribed. | 5 of arthritis patients in Singapore take cd or i &m;x>...</b> SINGAPORE : doctors in the United state have warned that painkillers used and Celebrex may be linked to major cardiovascular harm and should not be prescribed. | | | | | |
| TextFooler -Adj | Venezuela Prepares for Chavez Recall Voted Supporters and rivals warn of possible fraud; government says Chavez's defeat could produce turmoil in world oil marketed. | | | | | | |
| TextFooler -Adj | Venezuela Prepares for Chavez Recall Vote Supporters and rivals warn of possible fraud; government says Chavez's defeat could produce turmoil in world oil market. EU to Lift U.S. Sanctions Jan. 1 BRUSSELS (Reuters) - The European Commission is sticking with its plan to lift sanctions on $4 billion worth of U.S. goods on Jan. 1 following Washington's repeal of export tax subsidies in October, a spokeswoman said on Thursday. | EU to Lift U.S. Sanctions Jan. 1 BRUSSELS (Reuters) - The European Commission is sticking with its plan to lift sanctions on $4 billion worth of U.S. wares on Jan. 1 following Washington's repeal of export taxation subsidies in October, a spokeswoman said on Thursday. | | | | | |
| Table 5: Adversarial samples from the benchmark dataset generated by Yoo and Qi (2021). | | | | | | | |
| xi | Counter-fitter GloVe embedding | BERT MLM | BERT reconstruction | | | |
|--------------------------------------------------------------------------------------------------------------|---------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------|--------------|-------|-----|
| On | Orn, Pertaining, Per, Toward, Dated, Towards, Circa, Dates, Relating, Pour, Relative, Sur, Into, Date, Concerning, Onto, Around, About, In, To, Sobre, Relate, During, Respecting, For, Regarding, At, Days, Throughout, Relation | following, | completed, | | | |
| ongoing, over, in, included, contested, followed, this, now, below, announced, after, split, for, therefore, concluded, titled, currently, follows, planned, listed, thus, held, on, to, that, scheduled, called, where | around, round, a, here, ongoing, over, in, the, involved, pending, at, next, now, under, for, ahead, set, off, currently, onto, given, considered, about, held, on, of, to, by, time, with | | | | | |
| Private | Confidentiality, Camera, Personal, Clandestine, Privately, Hoc, Undercover, Confidential, Secretive, Secrets, Dedicated, Secret, Surreptitiously, Confidentially, Belonged, Peculiar, Personally, Specially, Fenced, Owned, Covert, Particular, Especial, Covertly, Own, Deprived, Secretly, Privy, Soldier, Special | google, my, o, a, from, hs, the, 1, chapter, 1st, in, this, mv, md, ukrainian, le, facebook, baltimore, hr, of, th, to, that, donald, and, by, gma, where, with | personal, | vr, | 2012, | my, |
| a, from, own, official, local, the, vc, small, for, national, billionaire, social, private, 2014, 2010, pv, facebook, public, independent, of, privately, to, new, family, and, by | | | | | | |
| Team | Panels, Grouping, Machine, Equipments, Tasks, Task, Devices, Pc, Group, Appliance, Cluster, Computers, Groups, Teams, Tooling, Accoutrements, Remit, Pcs, Appliances, Grupo, Teamwork, Chore, Apparatus, Squad, Computer, Device, Machines, Panel, Squads, Equipment | fund, | label, | launch, | | |
| google, | team, | spon | | | | |
| sor, | investor, | project, | | | | |
| citizen, | investigator, | | | | | |
| sector, | plane, | foun | | | | |
| dation, | company, | | | | | |
| helicopter, | website, | | | | | |
| line, platform, rocket, and, group, blog, planet, computer, charity, to, jet, pilot, party, fan | firm, one, weekend, partnership, round, team, committee, teams, number, couple, country, site, button, company, line, side, crew, ballot, group, nation, winner, division, club, boat, of, to, family, party, time | | | | | |
| Sets | Defines, Stake, Matches, Provides, Prescribes, Determine, Set, Betting, Establishes, Stipulates, Jeu, Gambling, Staking, Stipulated, Toys, Determines, Defined, Game, Defining, Playing, Gaming, Games, Determining, Define, Jeux, Gamble, Identifies, Stipulate, Plays, Play | google, a, from, estimated, first, larsen, the, 1, 1st, 3, at, next, announced, top, named, def, or, possible, predicted, 3rd, facebook, 000, online, about, on, of, to, and, no, with | reaches, | established, | | |
| announce, places, records, official, announcing, begins, forms, indicates, announced, declares, sets, starts, estimates, determines, set, details, draws, lays, lists, specifies, calls, setting, stages, of, gives, establishes, announces, names | | | | | | |
| Table 6: Candidate substitutions proposed by different transformations. We use green to denote matched sense | | | | | | |
Table 6: Candidate substitutions proposed by different transformations. We use green to denote matched sense substitution, orange to denote mismatched sense substitution, brown to denote morpheme substitution, and purple to denote antonyms. The *other* type substitution is denoted using the default black.
Algorithm 1 Process of obtaining the substitution set Require: xori, xadv 1: I ← [] ▷ Initialize the perturbed indices list 2: for xi ∈ xori do 3: if xi = x
′
i then 4: **continue** 5: **end if**
6: xi ← xi.lower() ▷ Get the lower case of xi 7: x
′
i ← x
′
i
.lower() ▷ Get the lower case of x
′
i 8: Sml ← **GetMorph**(xi, xori) ▷ Get morphological substitutions 9: Sms ← **GetMatchedSense**(xi, xori) ▷ Get matched sense synonym by first using word sense disambiguation then WordNet synonym sets 10: Smms ← **GetMismatchedSense**(xi, xori) ▷ Get mismatched sense synonym by first using word sense disambiguation then WordNet synonym sets 11: A ← **GetAntonym**(xi) ▷ Get antonyms by WordNet 12: Sml ← Sml \ {xi}
13: Sms ← Sms \ Sml \ {xi}
14: Smms ← Smms \ Sms \ Sml \ {xi} ▷ Remove overlapping elements to make Sml, Sms, Smms disjoint 15: S*embed* ← **GetEmbeddingSwaps**(xi)
16: SMLM ← **GetMLMSwaps**(xi, xori)
17: S*recons* ← **GetReconsSwaps**(xi, xori)
18: if x
′
i ∈ Sml **then**
19: The substitution is a morphological substitution 20: **else if** x
′
i ∈ Sms **then**
21: The substitution is a matched sense substitution 22: **else if** x
′
i ∈ Smms **then**
23: The substitution is a mismatched sense substitution 24: **else if** x
′
i ∈ A **then**
25: The substitution is an antonym substitution 26: **else**
27: This substitution is a *other* type 28: **end if**
29: Check the substitution types of each word in S*embed* by comparing with Sml, Sms, Smms, A 30: Check the substitution types of each word in SMLM by comparing with Sml, Sms, Smms, A 31: Check the substitution types of each word in S*recons* by comparing with Sml, Sms, Smms, A
32: if Sml, Sms, Smms, A ̸= ∅ **then**
33: I.append(i) ▷ We only include the words whose have morphological substitutions, matched sense substitutions, mismatched sense substitutions 34: **end if** 35: **end for**
36: O ←shuffle.(I)
Algorithm 2 GetMLMSwapsxi, xori Require: xi, xori, BERT, Lemmatizer 1: xmask ← {x1, · · · , xi−1, [MASK], xi+1, · · · , xn} ▷ Get masked input 2: Candidates← Top-k prediction of x*mask* using BERT
3: New_Candidates ← []
4: for w ∈Candidates do 5: w*lemmatized* ← Lemmatizer(w) 6: if w*lemmatized* ∈/Candidates and w*lemmatized* ∈/New_Candidates **then**
7: New_Candidates.append(w*lemmatized*)
8: **else**
9: New_Candidates.append(w)
10: **end if** 11: **end for**
12: **return** New_Candidates
words as the top-50 to top-550 words in the training
![18_image_0.png](18_image_0.png)
dataset. The reason that we omit the top 50 words as the high-frequency token is that these words are often stop words, and they are seldom used as word substitutions in SSAs. The low-frequency words are the top-10K to top-10.5K occurring words in AG-News' training set.
## D.3 Experiment Details Of Section 4.2
Here, we give more details on the sentence embedding similarity experiment in Section 4.2. The readers can refer to Algorithm 1 to see how we obtain the different types of word substitution sets, the substituted indices set I and the ordered list O
from a pair of (xori, xadv).
We also use a figurative illustration to show how we obtain x n swap in Figure 2. In Figure 2, we show how to use the *same sense substitution set* to replace the words in xori based on the ordered list O.
As can be seen in the figure, we swap the words in xori according to the order determined by O; since the first element in O is 5, we will first replace x5 in xori with one of the same sense synonyms of x5. We thus obtain the x 1 swap. In order to compute the sentence embedding similarity between x 1 swap and xori, we extract a context around the word just replaced; in this case, we will extract the context around the fifth word in x 1 swap and xori. Different from what we really use in our experiment, we set the window size w to 1 in Figure 2; this is because using w = 7 is too large for this example. Thus, we should extract x 1 swap[4 : 7] and xori[4 : 7]; however, since the sentences only have 5 words, the context to be extracted will exceed the length of the sentences. In this case, we simply extract the context until the end of both sentences.8 The parts that will be used for computing the sentence embeddings in each sentence are outlined with a dark blue box in Figure 2. Next, we follow a similar process to obtain x 2 swap and x 3 swap and compare their sentence embedding cosine similarity with xori.
## D.4 Experiment Details Of Section 4.3
In this experiment, we usethe POS tagger in NLTK
to identify the verb form of the verbs. The inflectional form of the verbs are obtained using LemmInflect. Here, we list the verb inflectional form conversion rules:
- For each third-person singular present verb, it is converted to the verb's base form.
- For each third past tense verb, it is converted to the verb's gerund or present participle form
(V+ing).
- For all verbs whose form is not third-person singular present and is not past tense verb, we convert them into the third-person singular present. We provide three random examples from the test set in AG-News that are perturbed using the above rules in Table 7. It can be easily seen that all the perturbed sentences are ungrammatical. Interestingly, LanguageTool detects no grammar errors in all the six samples in Table 7.
8Similarly, if the context to be extracted starts from a position that is on the left-hand side of the sentence, we simply extract the context starting from the first word in the sentence.
![19_image_0.png](19_image_0.png)
## E Supplementary Materials For Experiments Of Sentence Encoders E.1 Distribution Of The Sentence Embedding Cosine Similarity Of Different Substitution Types
In Figure 3, we show the distribution of the USE sentence embedding cosine similarity of different word replacement types using different numbers of word replacements n. The left subfigure shows the distribution of the cosine similarity between xori and x 1 swap and the right subfigure is the similarity distribution between xori and x 8 swap. While in Figure 1, we can see that the sentence embedding cosine similarity of different word substitution types is sometimes separable on average, we still cannot separate valid and invalid word substitution simply using one threshold. This is because the word embedding cosine similarity scores of different word substitution types are highly overlapped, which is evident from Figure 3. This is true for different n of x n swap, and we only show n = 1 and n = 8 for simplicity.
## E.2 Different Methods For Computing Sentence Embedding Similarity
In this section, we show some supplementary figures of the experiments in Section 4.2. Recall that in the main content, we only show the sentence embedding cosine similarity results when we compare x n swap with xori around a 15-word window around the n-th substituted word. But we have mentioned in Section 2.3 that this is not what is always done.
In Figure 4, we show the result when we compare x n swap with xori using **the whole sentence**. It can be easily observed that it is still difficult to separate valid swaps from the invalid ones using a threshold on the cosine similarity. One can also observe that the similarity in Figure 4 is a lot higher than that in Figure 1.
Another important implementation detail about sentence encoder similarity constraint is that some previous work does not calculate the similarity of the current x*swap* with xori. Instead, they calculate the similarity between the current x*swap* and the x*swap* in the previous substitution step (Garg and Ramakrishnan, 2020). That is, if in the previous substitution step, 6 words in xori are swapped, and in this substitution step, we are going to make the 7th substitution. Then the sentence embedding similarity is computed between the 6-word substituted
| Original sentence | Verb-perturbed sentence |
|---------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Storage, | servers bruise HP earnings update |
| Earnings per share rise compared with a year ago, but company misses analysts' expectations by a long shot. | Storage, servers bruises HP earnings update Earnings per share rise compares with a year ago, but company miss analysts' expectations by a long shot. |
| IBM to hire even more new workers By the end of the year, the computing giant plans to have its biggest headcount since 1991. | IBM to hires even more new workers By the end of the year, the computes giant plans to has its biggest headcount since 1991. |
| Giddy Phelps Touches Gold for First Time Michael Phelps won the gold medal in the 400 individual medley and set a world record in a time of 4 minutes 8.26 seconds. | Giddy Phelps Touches Gold for First Time Michael Phelps winning the gold medal in the 400 individual medley and sets a world record in a time of 4 minutes 8.26 seconds. |
Table 7: Examples of the verb-perturbed sentences. The perturbed verbs are highlighted in red, and their unperturbed counterparts are highlighted in blue.
sentence and the 7-word substituted sentence.
In Figure 5, we show the result when we we compare x n swap with x n−1 swap around a 15-word window around the n − th substituted word. This is adopted in Garg and Ramakrishnan (2020), according to TextAttack (Morris et al., 2020b). Last, we show the result when we compare x n swap with x n−1 swap with the whole sentence; this is not used in any previous works, and we include this for completeness of the results. All the sentence encoders used in Figure 1, 4, 5, 6 are USE.
## E.3 Different Sentence Encoders
We show in Figure 7 the result when we compare x n swap with xori around a 15-word window around the n-th substituted word using a DistilBERT finetuned on STS-B, which is the sentence encoder used in Yoo and Qi (2021). Figure 7 shows that DistilBERT fine-tuned model better distinguishes between antonyms and synonym swaps, compared with the USE in Figure 1. However, it still cannot distinguish between the matched and mismatched synonym substitutions very well. Interestingly, this model is flagged as deprecated on huggingface for it produces sentence embeddings of low quality. We also show the result when we use a DistilRoBERTa fine-tuned on STS-B in Figure 8.
Interestingly, this sentence encoder can also better distinguish antonym substitutions and synonym substitutions on average. This might indicate that the models only fine-tuned on STS-B can have the ability to distinguish valid and invalid swaps.
In Figure 9, we show the result when we compare x n swap with xori around a 15-word window around the n − th substituted word using sentencetransformers/all-MiniLM-L12-v2. This model has 110M parameters and is the 4th best sentence encoder in the pre-trained models on sentencetransformer package (Reimers and Gurevych, 2019). It is trained on 1 billion text pairs. We report the result when using this sentence encoder because it is the best model that is smaller than USE, which has 260M parameters. We can see that the trend in Figure 9 highly resembles that in Figure 1, indicating that even a very strong sentence encoder is not suitable to be used as a constraint in SSAs.
We also include the result when we use the best sentence encoder on sentence-transformer package, the all-mpnet-base-v2. It has 420M parameters.
The result is in Figure 10, and it is obvious that it is still quite impossible to use this sentence encoder to filter invalid swaps.
## F Statistics Of Other Victim Models And Other Datasets
In this section, we show some statistics on adversarial samples in the datasets generated by Yoo et al. (2022). The main takeaway in this is part is:
Our observation in Section 3 holds across different types of victim models (LSTM, CNN, BERT,
RoBERTa), different SSAs, and different datasets.
## F.1 Proportion Of Different Types Of Word Replacement
First, we show how different word substitution types consist of the adversarial samples of AGNews. We show the result of four models and four SSAs in Table 8, 9, 10, 11. This is done by a similar procedure as in Section 3.1.1.
![21_image_0.png](21_image_0.png)
| Model | Matched sense | Mismatched sense | Morphological | Antonym | Others |
|---------|-----------------|--------------------|-----------------|-----------|--------------|
| CNN | 5449 (16.8%) | 23727 (73.2%) | 788 (2.43%) | 0 (0.0%) | 2434 (7.51%) |
| LSTM | 5185 (15.7%) | 24621 (74.5%) | 788 (2.38%) | 0 (0.0%) | 2467 (7.46%) |
| BERT | 4319 (16.2%) | 19467 (73.2%) | 1026 (3.86%) | 0 (0.0%) | 1788 (6.72%) |
| RoBERTa | 5057 (16.3%) | 21741 (70.2%) | 1253 (4.05%) | 0 (0.0%) | 2905 (9.38%) |
## F.2 Statistics Of Different Datasets
In this section, we show the statistics of types of word substitution of another two datasets in Yoo et al. (2022). The result is in Table 12. Clearly, our observation that valid word substitutions are scarce can also be observed in both SST-2 and IMDB.
![21_image_1.png](21_image_1.png)
Model Matched sense Mismatched sense Morphological Antonym Others
CNN 319 (0.891%) 897 (2.5%) 1464 (4.09%) 0 (0.0%) 33138 (92.5%)
LSTM 304 (0.752%) 1125 (2.78%) 1662 (4.11%) 0 (0.0%) 37350 (92.4%)
BERT 399 (0.806%) 1632 (3.3%) 2471 (4.99%) 0 (0.0%) 45008 (90.9%)
RoBERTa 391 (0.783%) 1613 (3.23%) 2276 (4.56%) 2 (0.004%) 45656 (91.4%)
Table 9: Attack statistics of other models on AG-News. The SSA use to attack the models is TextFooler.
Model Matched sense Mismatched sense Morphological Antonym Others
CNN 34 (1.21%) 73 (2.6%) 232 (8.25%) 5 (0.178%) 2468 (87.8%)
LSTM 30 (0.998%) 62 (2.06%) 234 (7.78%) 7 (0.233%) 2674 (88.9%)
BERT 21 (0.88%) 39 (1.6%) 184 (7.7%) 8 (0.34%) 2128 (89.4%)
RoBERTa 25 (0.755%) 61 (1.84%) 304 (9.18%) 6 (0.181%) 2914 (88.0%)
Table 10: Attack statistics of other models on AG-News. The SSA use to attack the models is BAE.
![22_image_0.png](22_image_0.png)
![22_image_2.png](22_image_2.png)
![22_image_1.png](22_image_1.png)
![22_image_3.png](22_image_3.png)
| Model | Matched sense | Mismatched sense | Morphological | Antonym | Others |
|---------|-----------------|--------------------|-----------------|-----------|-------------|
| CNN | 65 (3.86%) | 176 (10.5%) | 706 (42.0%) | 0 (0.0%) | 735 (43.7%) |
| LSTM | 70 (3.9%) | 208 (11.6%) | 698 (38.9%) | 0 (0.0%) | 820 (45.7%) |
| BERT | 53 (4.32%) | 118 (9.62%) | 530 (43.2%) | 0 (0.0%) | 526 (42.9%) |
| RoBERTa | 59 (4.21%) | 137 (9.79%) | 581 (41.5%) | 0 (0.0%) | 623 (44.5%) |
Table 11: Attack statistics of other models on AG-News. The SSA use to attack the models is TextFooler-Adj.
| Model | Matched sense | Mismatched sense | Morphological | Antonym | Others |
|---------|-----------------|--------------------|-----------------|--------------|----------------|
| SST-2 | 34 (0.945%) | 118 (3.28%) | 206 (5.72%) | 0 (0.0%) | 3241 (90.1%) |
| IMDB | 1881 (1.43%) | 4825 (3.66%) | 8708 (6.6%) | 21 (0.0159%) | 116479 (88.3%) |
Table 12: Attack statistics of other BERT fine-tuned on other datasets. The SSA use to attack the models is TextFooler.
![23_image_0.png](23_image_0.png)
Cosine Similarity
![23_image_2.png](23_image_2.png)
![23_image_1.png](23_image_1.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations
✓ A2. Did you discuss any potential risks of your work?
Limitations, Ethical Statement and Broader Impacts
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Sec 3.1.1, App D
✓ B1. Did you cite the creators of artifacts you used?
Sec 3.1.1, App B, App D
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
They do not provide licenses
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Sec 3.1.1, App B
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Removing name entities in AG-News causes the news to be unreadable.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Sec 3.1.1, App B
## C ✓ **Did You Run Computational Experiments?** Sec 3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
App F.3.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Not applicable. Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Not applicable. Left blank.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
App E
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
e-etal-2023-divhsk | {D}iv{HSK}: Diverse Headline Generation using Self-Attention based Keyword Selection | https://aclanthology.org/2023.findings-acl.118 | Diverse headline generation is an NLP task where given a news article, the goal is to generate multiple headlines that are true to the content of the article but are different among themselves. This task aims to exhibit and exploit semantically similar one-to-many relationships between a source news article and multiple target headlines. Toward this, we propose a novel model called DIVHSK. It has two components:KEYSELECT for selecting the important keywords, and SEQGEN, for finally generating the multiple diverse headlines. In KEYSELECT, we cluster the self-attention heads of the last layer of the pre-trained encoder and select the most-attentive theme and general keywords from the source article. Then, cluster-specific keyword sets guide the SEQGEN, a pre-trained encoder-decoder model, to generate diverse yet semantically similar headlines. The proposed model consistently outperformed existing literature and our strong baselines and emerged as a state-of-the-art model. We have also created a high-quality multi-reference headline dataset from news articles. | # Div**Hsk: Diverse Headline Generation Using Self-Attention Based** Keyword Selection
Venkatesh E, Kaushal Kumar Maurya, Deepak Kumar and **Maunendra Sankar Desarkar**
Indian Institute of Technology Hyderabad, India
{venkateshelangovan.tce, deepak.soe.cusat}@gmail.com, [email protected], [email protected]
## Abstract
Diverse headline generation is an NLP task where given a news article, the goal is to generate multiple headlines that are true to the content of the article, but are different among themselves. This task aims to exhibit and exploit semantically similar one-to-many relationships between a source news article and multiple target headlines. Towards this, we propose a novel model called DIVHSK. It has two components:
KEYSELECT for selecting the important keywords, and SEQGEN, for finally generating the multiple diverse headlines. In KEYSELECT, we cluster the self-attention heads of the last layer of the pre-trained encoder and select the mostattentive *theme* and *general* keywords from the source article. Then, cluster-specific keyword sets guide the SEQGEN, a pre-trained encoderdecoder model, to generate diverse yet semantically similar headlines. The proposed model consistently outperformed existing literature and our strong baselines and emerged as a stateof-the-art model. Additionally, We have also created a high-quality multi-reference headline dataset from news articles1.
## 1 Introduction
Generating diverse and semantically similar multiple outputs in natural language generation (NLG)
is an important and challenging task (Tevet and Berant, 2021). The traditional single headline generation task is formulated as a sequence-to-sequence learning problem and has been extensively studied for more than a decade now (Banko et al., 2000; Zajic et al., 2002; Dorr et al., 2003; Lopyrev, 2015; Takase et al., 2016; Gavrilov et al., 2019). Recently, researchers are also interested towards diverse output sequence generation tasks. This falls into the one-to-many generation category and is being studied for multiple tasks such as paraphrase generation (Yu et al., 2021; Gupta et al., 2018), machine 1Our code and dataset are available at https://github.
com/kaushal0494/DivHSK
translation (Shen et al., 2019), question generation
(Shen et al., 2022) and summarization (Cho et al.,
2019). In this work, we consider the problem of generating diverse headlines given a single news article. Diverse headlines present the theme of the article in semantically related yet lexically different short sentences, which may attract different sets of audiences and increase the consumption of the news.
The existing approaches for diverse sequence generation mostly diversify the decoding steps through alternative search algorithms (Vijayakumar et al., 2018; Fan et al., 2018) or mixture decoder approaches (Shen et al., 2019; Maurya and Desarkar, 2020) where different decoders generate difference output sequences. Recently, Cho et al. (2019) proposed a two-stage modeling involving a *diversification stage* to extract diversifying attributes and a *generation stage* to guide the encoder-decoder model for diverse generations. The diversifying attributes are keywords extracted from the input text with the expectation-maximization algorithm.
They consider text summarization and questiongeneration tasks. In similar lines, Yu et al. (2022)
leverage external knowledge graph, i.e., ConceptNet (Speer et al., 2017) to extract diverse yet relevant keywords at *diversification stage* and generate diverse common sense reasoning texts. These models are not directly applicable for diverse headline generation tasks because the headlines are mostly oriented toward a single common theme (event, person, etc.) in a short sentence, and these models distract the semantics of generated headlines.
Our empirical experiments (Section-5) validate this point. Liu et al. (2020) used manually extracted keywords with a multi-source transformer for diverse headline generation. The model is not scalable to other datasets/tasks because keyword extraction requires a human annotator. Unlike these, we used an automated self-attention-based approach to obtain the most attentive keywords from the article automatically.
To overcome the limitations of the existing models, we propose DIVHSK, a simple yet effective model for diverse headline generation using a selfattention-based keyword selection. The model has two modules/components: (a) KEYSELECT - a pretrained encoder model to extract diversifying attributes i.e. *theme* and *general* keywords from input news article and (b) SEQGEN - a regular pre-trained encoder-decoder architecture guided by diversifying attributes for generating multiple diverse yet semantically similar headlines.
Overall, our main contributions are as follows:
(1) We propose a novel model DIVHSK- Diverse Headline Generation using Self Attention based Keyword Selection to generate diverse yet semantically similar headlines. (2) We release a high quality MRHEAD: Multi-Reference *Headline Dataset* for diverse headline generation task. (3) The performance of the proposed model is compared with several strong baselines using both automated and human evaluation metrics.
## 2 Problem Formulation
Given a news article, the goal is to generate semantically similar, grammatically coherent, *fluent* and diverse headlines. Formally, given a news article x, the goal is to model the conditional distribution for k target outputs p(yk|x) with valid mappings x → y1*, . . . , x* → yk where {y1, y2*, . . . , y*k}
should be diverse. Here we consider k = 3, i.e.,
the task is to generate three diverse headlines.
## 3 Methodology
The proposed DIVHSK model has two components
(1) pre-trained encoder, i.e., KEYSELECT and (2)
regular pre-trained encoder-decoder, i.e., SEQGEN.
As per Liu et al. (2020), multiple headlines should convey the common theme, differing on a lexical level and the headline tokens should be uniformly distributed across the source article. Towards these goals, in KEYSELECT, we first cluster the encoders' last-layer self-attention heads to find the most attentive keywords for each cluster from the input news article. We observe that: (a) all the clusters have a few most-attentive common keywords called as theme and (b) cluster-specific most attentive keywords called as *general* (i.e., non-theme) keywords.
We combine *theme* with cluster-specific *general* keywords to create diversifying attributes. For each of the k clusters, there is a corresponding diversifying attribute. Table-4, in Appendix, presents a few sample themes and general keywords.
The input news article, theme, and general keywords (from diversifying attributes) are concatenated with [SEP] tokens to create modified input for the SEQGEN module. In this way, different cluster leads to generate diverse headlines. The theme and general keywords in the cluster lead to semantically similar and theme-oriented headlines.
For pre-trained encoder and pre-trained encoderdecoder models, we use the 'encoder of T5-base'
(Raffel et al., 2020) and T5-base checkpoints, respectively. See Figure 1 for an overview of the proposed model. More details about each component are given below:
## 3.1 Keyselect**: Keyword Selection Module** 3.1.1 Self-Attention Heads Clustering
We take a pre-trained encoder model with l selfattention heads h1, h2*, . . . , h*l from the last layer.
Each self-attention head hi usually focuses on different parts of the inputs text (Peng et al.,
2020). We group these heads into k clusters C = {c1, c2*, . . . , c*k}; so each cluster has g =
l k heads. Here we cluster the heads in a sequential manner. Next, we identify the m most-attentive keywords (not BPE) from each head. As one keyword may get high attention values from multiple heads, it may result in overlap among the keyword sets obtained from each head. Consequently, we get a maximum of g ∗ m keywords from each cluster. Stop-words/function-words are not considered in keyword sets.
We have clustered the multiple heads of multihead attention of the last-hidden layer in a sequential manner. The adoption of this approach can be justified from two perspectives. Firstly, during the pre-training phase of a language model, the weights of each head within the multi-head attention mechanism are initialized with random values. Over the course of pre-training, these weights undergo the process of learning to acquire diverse values. The different heads aim to focus on different parts of the input and provide a diverse view, which is suitable for diverse keyword selection. Secondly, the proposed model is trained end-to-end, and the weights of the KEYSELECT module are consistently updated rather than being fixed. Moreover, the target headlines associated with different heads (clusters)
are different. Therefore, during back-propagation, the different heads learn to focus on the keywords
![2_image_0.png](2_image_0.png)
Figure 1: Overview of proposed *DivHSK* model. Where time-steps t1 > t2 > t3.
relevant to their respective target reference headlines. Based on these points, we conclude that clustering heads in any order does not have a significant impact, and we choose a simple sequential manner for the clustering of the attention heads.
## 3.1.2 Creating Diversifying Attributes
Suppose the total number of keywords to guide the SEQGEN module is n. We keep r keywords as theme keywords and the remaining n − r as general keywords. The r keywords are the mostattentive common keywords across all c clusters.
The rest of the n − r keywords are the mostattentive non-overlapping keywords specific to individual clusters ci. These n keywords form the diversifying attributes K
guide cifor cluster ci. r is a hyper-parameter and its value can be determined empirically. In case r common keywords can not be found2, then we can take the available r′common keywords that can be found, and the remaining n − r′ keywords can be taken from the individual clusters. See Algorithm-B in Appendix for more details.
## 3.2 Seqgen**: Pre-Trained Seq2Seq Module**
The diversifying attributes K
guide ciare concatenated with the source article x as:
theme-keywords [SEP] *general*-keywords [SEP] *article* to form the extended article x e ci
. Each cluster corresponds to specific attributes, resulting in different extended articles. We fine-tune a pre-trained encoder-decoder model with an extended article and a corresponding headline.
Additionally, we employed word-mover distance
(WMD; Kusner et al. (2015)) between predicted
(hp) and reference (hr) headlines token ids, as 2We have not encountered any scenario where the theme keywords are not present in one or more clusters.
an additional component in the loss function to control the diversity with λ. Finally, the KEYSELECT and SEQGEN modules are trained in end-to-end manner to minimize the loss L as:
L =
Xc
i=1
$$=\sum_{i=1}(1-\lambda)(-logP_{\theta}(y_{i}|x_{i}^{e}))+\lambda(\texttt{wMD}(h_{pi},h_{ri}))\tag{1}$$
## 4 Experimental Setup 4.1 Dataset
One of the essential elements of the proposed work is the inclusion of multiple reference headlines for each news article. Specifically, each example in the dataset will consist of a quadruple in the following format: <article, headline-1, headline-2, headline-3>. However, the proposed approach can be easily extended to a single reference setup.
Towards this, we have created a dataset that we refer to as MRHEAD: Multi-Reference Headline.
•DataSet Collection: To create the dataset, first, we scrape news articles and their headlines from Inshorts (https://www.inshorts.com/) news website and add them to a seed set. Articles under
'All News' category, i.e., politics, sports, technology, etc. were considered. Next, we identify news articles from other public news websites that are semantically similar to the articles in the seed set, and also note their headlines against the corresponding article in the seed set. To find semantically similar news articles we use sentence-BERT (Reimers and Gurevych, 2019) and cosine-similarity scores.
Then, human annotators verify the dataset content and remove the poor-quality headlines. Following this process, we obtained 3012 articles each with at least three parallel headlines. We split the data into training, validation, and test splits of sizes 2330, 100, and 582 respectively. Dataset creation, human verification, and other statistics are reported in Appendix-A.
## 4.2 Baselines
We have meticulously chosen six baseline models for our experimentation and analysis.
Our extensive observations have revealed that single-output generation models, such as textsummarization/headline generation models, do not perform well in multi-output generation settings.
The primary issue with such multiple generated outputs is their lack of lexical diversity. Therefore, we have selected three literature baselines:
Mixture-Decoder (MixD; Shen et al. (2019)), Mixture Content Selector (MixCS; Cho et al. (2019)),
and Knowledge Graph Experts (MoKGE; Yu et al.
(2022)). Additionally, we have designed three robust baselines based on diverse search algorithms and with modified loss functions: T5+DSA (diverse search algorithm), T5+WMD (Kusner et al.,
2015), and T5+Avg-Loss. More details about these baselines are provided in Appendix-C.
## 4.3 Evaluation Metrics
We use four automated evaluation metrics that rely on a lexical and semantic match in a one-to-many evaluation setup, as, for a given generation there are three reference headlines. We consider BLEU4 (BLEU; Papineni et al. (2002)) and ROUGE-L
(Lin, 2004) metrics as lexical-match metrics, and BERTScore (Zhang et al., 2020) and BARTScore
(Yuan et al., 2021) as semantic match based metrics.
To measure the diversity among the generated headlines, we use Pairwise-BLEU (self/P-BLEU; Ott et al. (2018)) metric similar to Shen et al. (2019).
As stated by Shen et al. (2019), there is always a trade-off between performance and diversity, i.e., if the generated headlines are correct but similar, then the performance (BLEU and ROUGE-L scores)
will be high due to large lexical overlap but the diversity will be low (high P-BLEU) and vice-versa.
Towards this concern, we consider the harmonic mean (HMean) between (1 − PBLEU) and BLEU
as a *combined* evaluation metric. For more certainty about model performance, we also conducted the human evaluation with four metrics, i.e., Fluency (Flu), Relatedness (Rel), *Correctness (Corr)*
and *Diversity* similar to (Cho et al., 2019). To manage the load on evaluators, we selected three baseline models for human evaluation. Two of the models were the best-performing (according to HMean) competitor models from literature (MixCS and MoKGE), and the other one was T5-Avg-Loss, the best-performing baseline model designed by us.
We randomly selected 50 generated headlines from the baselines and the proposed DIVHSK model as a human evaluation sample. Further, we employ two sets of annotators for human evaluation to avoid any biased evaluation. For *diversity* we asked an absolute evaluation score on a scale of 1
(lowest) to 5 (highest) and for other metrics a comparative evaluation. See more details about human evaluation guidelines in Appendix-D.
## 5 Results And Discussions 5.1 Diversity Vs. Accuracy Trade-Off
Table-1 displays the automated evaluation scores obtained for various baselines and the proposed DIVHSK models. The mixture decoder model, which employs multiple decoders, achieves the highest BLEU and ROUGE-L scores. However, the high P-BLEU score for this model indicates low diversity in the generated headlines, defeating the purpose of having multiple decoders. Similar observations are noted for the T5+DSA model. Additionally, the high scores obtained for BERTScore and BARTScore metrics suggest that the DIVHSK
model exhibits superior semantic similarity with the reference headlines. This is one of the key constraints that ensure the generated outputs are semantically coherent. The ideal model should obtain reasonable BLEU and ROUGE-L scores, high BERTScore and BARTScore (high semantic similarity), low P-BLEU (high diversity), and high HMean scores. The proposed DIVHSK model satisfies these ideal conditions and emerges as a state-of-the-art model. The necessary ablation experimental results are added in Table-5.
## 5.2 Comparison With State-Of-The-Art
We have compared the performances of DIVHSK
with MixD, MixCS, and MoKGE, which are stateof-the-art literature models. Although these models perform well for other tasks, they exhibit poor performance for the diverse headline generation task. As discussed in Section 1, recent models like MoKGE perform poorly for diverse headline generation tasks due to the inclusion of tokens/keywords from the knowledge graph that may not align with the headline's theme and distract the learning process. Overall, it is evident from the performances of MixCS and MoKGE that existing text summarization models do not perform well for headline generation tasks. This could be due to the fact that summaries are generally long, while headlines are
![4_image_0.png](4_image_0.png)
Table 1: Automated evaluation results of the models. Where R-L, BES and BAS indicate ROUGE-L, BERTScore and BARTScore metrics, respectively. Additionally, HMean indicates the harmonic mean between p-BLEU and BLEU metrics. High HMean and low P-BLEU desirable.
![4_image_1.png](4_image_1.png)
![4_image_3.png](4_image_3.png)
## 5.3 Human Evaluation Results
For more reliable evaluation, we also conducted human evaluation and results are reported in Tables 2 and 3. For Fluency, *Relatedness* and *Correctness* metrics, the DIVHSK model most of the time either wins or ends up with tie versus all considered baselines. Similar trends are observed across both the annotator sets. The human evaluation scores correlate well with automated evaluation scores.
The average absolute diversity scores are reported in Table-3 and it is found that generated text are more diverse for proposed DIVHSK model. Considering decent automated and human evaluation scores, we conclude that our model performs reasonably well and outperforms the other methods consistently.
## 5.4 Effect Of N And R **Parameters**
In Figure 2, we investigate the effect of varying the values of n (the total number of selected keywords) and r (the number of theme keywords) on the performance of the DIVHSK model. As n and r increase, we observe a decrease in the P-BLEU
scores, indicating an increase in diversity (headlines are lexically diverse). However, the BLEU
![4_image_2.png](4_image_2.png)
and ROUGE-L scores also decrease due to high diversity as these metrics are based on lexical matching. Therefore, the optimal values of n and r are important to maintain the diversity and performance trade-off.
## 6 Conclusion
In this work, We present a novel task and dataset for diverse headline generation. We also propose a strong neural architecture for the task. The model, referred to as DIVHSK, uses self-attentionbased clustering to create diversifying attributes that guide the pre-trained encoder-decoder model to generate diverse headlines. We empirically demonstrate that the DIVHSK consistently outperforms all baseline models on both automated and human evaluation metrics, while maintaining diversity as a key criterion.
## Limitations
- We are unable to test the proposed model's performance on other datasets due to the unavailability of public multi-reference headline generation datasets.
- Our dataset is created over a period of 6 months and contains around 3000 examples.
Although there are several commonly used benchmark datasets with a similar number of examples: e.g., R4C reading comprehension dataset (6.4K examples) (Inoue et al.,
2020), FIRE-LID (3357 examples), IIITHNER (3084 examples) datasets in GLUECoS
benchmark (Khanuja et al., 2020), WNLI
(634 examples), RTE (2500 examples) and MRPC (3700 examples) datasets in GLUE
benchmark (Wang et al., 2018), NOPE Corpus (around 2.7K examples) (Parrish et al.,
2021), we believe that it will be better to have a larger dataset for this challenging task. We plan to create a larger version of the dataset in future work.
## References
Michele Banko, Vibhu O. Mittal, and Michael J. Witbrock. 2000. Headline generation based on statistical translation. In *Association for Computational Linguistics*, ACL '00, page 318–325, USA.
Jaemin Cho, Minjoon Seo, and Hannaneh Hajishirzi.
2019. Mixture content selection for diverse sequence generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),
pages 3121–3131, Hong Kong, China. Association for Computational Linguistics.
Bonnie Dorr, David Zajic, and Richard Schwartz. 2003.
Hedge trimmer: A parse-and-trim approach to headline generation. In Association for Computational Linguistics, HLT-NAACL-DUC '03, page 1–8, USA.
Angela Fan, Mike Lewis, and Yann Dauphin. 2018.
Hierarchical neural story generation. In *Proceedings* of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 889–898, Melbourne, Australia. Association for Computational Linguistics.
Christiane Fellbaum. 1998. *WordNet: An Electronic* Lexical Database. Bradford Books.
Daniil Gavrilov, Pavel Kalaidin, and Valentin Malykh.
2019. Self-attentive model for headline generation.
In *ECIR*.
Ankush Gupta, Arvind Agarwal, Prawaan Singh, and Piyush Rai. 2018. A deep generative framework for paraphrase generation. In *AAAI Press*.
Benjamin D Horne, Sara Khedr, and Sibel Adali. 2018.
Sampling the news producers: A large news and feature data set for the study of the complex media landscape. In *Twelfth International AAAI Conference* on Web and Social Media.
Naoya Inoue, Pontus Stenetorp, and Kentaro Inui. 2020.
R4C: A benchmark for evaluating RC systems to get the right answer for the right reason. In *Proceedings* of the 58th Annual Meeting of the Association for
Computational Linguistics, pages 6740–6750, Online. Association for Computational Linguistics.
Simran Khanuja, Sandipan Dandapat, Anirudh Srinivasan, Sunayana Sitaram, and Monojit Choudhury.
2020. GLUECoS: An evaluation benchmark for code-switched NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3575–3585, Online. Association for Computational Linguistics.
Matt J. Kusner, Yu Sun, Nicholas I. Kolkin, and Kilian Q. Weinberger. 2015. From word embeddings to document distances. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, ICML'15, page 957–966. JMLR.org.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain.
Association for Computational Linguistics.
Dayiheng Liu, Yeyun Gong, Yu Yan, Jie Fu, Bo Shao, Daxin Jiang, Jiancheng Lv, and Nan Duan. 2020. Diverse, controllable, and keyphrase-aware: A corpus and method for news multi-headline generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 6241–6250, Online. Association for Computational Linguistics.
Konstantin Lopyrev. 2015. Generating news headlines with recurrent neural networks. *ArXiv*,
abs/1512.01712.
Kaushal Kumar Maurya and Maunendra Sankar Desarkar. 2020. Learning to distract: A hierarchical multi-decoder network for automated generation of long distractors for multiple-choice questions for reading comprehension. In Proceedings of the 29th ACM International Conference on Information &
Knowledge Management, pages 1115–1124.
Myle Ott, Michael Auli, David Grangier, and Marc'Aurelio Ranzato. 2018. Analyzing uncertainty in neural machine translation. *ArXiv*,
abs/1803.00047.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Alicia Parrish, Sebastian Schuster, Alex Warstadt, Omar Agha, Soo-Hwan Lee, Zhuoye Zhao, Samuel R Bowman, and Tal Linzen. 2021. Nope: A corpus of naturally-occurring presuppositions in english. *Proceedings of the 25th Conference on Computational* Natural Language Learning (CoNLL).
Hao Peng, Roy Schwartz, Dianqi Li, and Noah A. Smith.
2020. A mixture of h - 1 heads is better than h heads. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 6566–6577, Online. Association for Computational Linguistics.
Lianhui Qin, Lemao Liu, Wei Bi, Yan Wang, Xiaojiang Liu, Zhiting Hu, Hai Zhao, and Shuming Shi. 2018.
Automatic article commenting: the task and dataset.
In *Proceedings of the 56th Annual Meeting of the* Association for Computational Linguistics (Volume 2: Short Papers), pages 151–156.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67.
Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.
Tianxiao Shen, Myle Ott, Michael Auli, and Marc'Aurelio Ranzato. 2019. Mixture models for diverse machine translation: Tricks of the trade. *ArXiv*,
abs/1902.07816.
Xinyao Shen, Jiangjie Chen, Jiaze Chen, Chun Zeng, and Yanghua Xiao. 2022. Diversified query generation guided by knowledge graph. In *Proceedings of* the Fifteenth ACM International Conference on Web Search and Data Mining, page 897–907, New York, NY, USA. Association for Computing Machinery.
Robyn Speer, Joshua Chin, and Catherine Havasi. 2017.
Conceptnet 5.5: An open multilingual graph of general knowledge. In *Proceedings of the AAAI conference on artificial intelligence*, volume 31.
Sho Takase, Jun Suzuki, Naoaki Okazaki, Tsutomu Hirao, and Masaaki Nagata. 2016. Neural headline generation on Abstract Meaning Representation. In *Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing*, pages 1054–
1059, Austin, Texas. Association for Computational Linguistics.
Guy Tevet and Jonathan Berant. 2021. Evaluating the evaluation of diversity in natural language generation.
In *Proceedings of the 16th Conference of the European Chapter of the Association for Computational* Linguistics: Main Volume, pages 326–346, Online.
Association for Computational Linguistics.
Ashwin Vijayakumar, Michael Cogswell, Ramprasaath Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2018. Diverse beam search for
improved description of complex scenes. In *Proceedings of the AAAI Conference on Artificial Intelligence*,
volume 32.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE:
A multi-task benchmark and analysis platform for natural language understanding. In *Proceedings of the* 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics.
Wenhao Yu, Chenguang Zhu, Lianhui Qin, Zhihan Zhang, Tong Zhao, and Meng Jiang. 2022. Diversifying content generation for commonsense reasoning with mixture of knowledge graph experts. In Findings of the Association for Computational Linguistics:
ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 1896–1906. Association for Computational Linguistics.
Wenhao Yu, Chenguang Zhu, Tong Zhao, Zhichun Guo, and Meng Jiang. 2021. Sentence-permuted paragraph generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 5051–5062. Association for Computational Linguistics.
Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021.
Bartscore: Evaluating generated text as text generation. In Advances in Neural Information Processing Systems, volume 34, pages 27263–27277. Curran Associates, Inc.
David Zajic, Bonnie Dorr, and Richard Schwartz. 2002.
Automatic headline generation for newspaper stories.
In *Workshop on automatic summarization*, pages 78–
85.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q.
Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with BERT. In *8th International* Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
## A Mrhead **Dataset Creation Strategy**3
One of the key requirements of our work is to have multiple reference headlines for a news article i.e., <article, headline-1, headline-2, headline-3>4. Towards this requirement, we have created a dataset MRHEAD: Multi-Reference Headline Dataset. First, we scrape news articles and their headlines from Inshorts (https://www.
3We will publicly release dataset, code, model checkpoints and generated text 4Nevertheless, the proposed approach can be easily extended to single reference setup with modification in the loss function.
![7_image_0.png](7_image_0.png)
inshorts.com/) news website. We keep all news categories in consideration. Once the headline from Inshorts is collected, we try to collect multiple similar headlines from other news websites with following steps:
- Make a google search with news headline text as the search query.
- Parse the google search response and retrieve the list of URLs from the search result.
- From the URL list obtained, remove URLs that belong to Wikipedia, Facebook, Twitter, etc.
- Remove URLs that correspond to docx, pdf or ppt files.
- Make a HTTP call to the remaining URLs.
Retrieve similar headlines by parsing the response.
Next, we use Sentence-BERT (Reimers and Gurevych, 2019) to get the similarity scores and pick two headlines from the list of similar headlines based on similarity scores. Therefore, each entry in our dataset consists of 4 features: <article, headline-1, headline-2, headline-3>. Further, we ask human annotators to verify the quality
![7_image_1.png](7_image_1.png)
of the dataset and filter/modify the records accordingly. This exercise carried out over a period of 6 months resulted in around 3000 records in total.
The available data was split into 2330, 100, and 582 samples of training, validation and test splits respectively. The dataset statistics are shown in figure 3. Table 4 displays a few samples from our dataset.
As part of the dataset, we have released the URLs to news articles (these articles are already in the public domain) and the reference headlines.
Sharing of the urls/news articles is done in several existing datasets, e.g. NELA2017 dataset (Horne et al., 2018), Article Commenting Dataset (Qin et al., 2018).
## B Keyselect **Module**
Algorithm 1 Keyword Selection Algorithm
Require: l self-attention heads h1, h2*, . . . , h*l Require: c clusters c1, c2*, . . . , c*c Require: m, n, r: Keyword-Selection hyper-parameters 1: Initialize g =
l c 2: for i ∈ {0, c − 1} do 3: Assign g heads (hig+1-h(i+1)g)) to the cluster ci 4: Initialize set wi← ∅ to store the keywords of ci 5: for each hj in ci do 6: Select top m attentive words from hj and update the set wi 7: **end for**
8: \# ci will contain at most g ∗ m keywords 9: **end for**
10: for i ∈ {0, c − 1} do 11: Select r (or r
′) theme keywords from overlapping keywords across c clusters based on attention scores 12: Select n − r (or n − r
′) general keywords from nonoverlapping keywords specific to the cluster ci based on attention scores 13: Cluster ci have corresponding diverse keywords set K
guide iof size n 14: **end for**
15: Use K*guide* consists list of selected keywords for c clusters in *SeqGen* module
## C Baselines
We compare the proposed model performance with three literature and three other strong baselines.
Details of the baselines are mentioned below:
1. **Mixture-decoder:** In the mixture decoder
(Shen et al., 2019) approach, three different decoders are used to generate the diverse headlines. Each decoder is trained with a different headline and we take the average crossentropy loss for the particular news article.
![8_image_2.png](8_image_2.png)
S.No. Example-1 **Example-2**
![8_image_0.png](8_image_0.png)
![8_image_1.png](8_image_1.png)
tied staged 'kidnap': PoliceChandigarh traffic constable reports for duty with
'kidnap' say policeVideo of Chandigarh cop holding baby while on duty Headline 3 Kidnapping victim found tied up in backseat after police stop wrong way driver in Olympia Table 4: Sample examples from MR-Head dataset
![8_image_3.png](8_image_3.png)
Figure 4: Examples to illustrate theme and general keywords selected with KEYSELECT module. Here, the general keywords set is a subset of the keyword set.
2. **Mixture Content Selector:** In MixCS (Cho et al., 2019), the authors introduced a selection module SELECTOR to perform the diversification process. The SELECTOR module generates three different sets of keywords which were concatenated by input news and fed into the standard encoder-decoder model for headline generation.
3. **Experts(MoKGE):** In MoKGE (Yu et al.,
2022) approach, apart from keyword extraction from the input news, the authors leverage the use of knowledge graph, i.e., ConceptNet
(Speer et al., 2017) to extract the diverse set of keywords to guide an encoder-decoder model to generate diverse headlines.
4. **T5+ DSA (Diverse Search Algorithm):** We fine-tune the T5-base checkpoint to return the three sequences with a combination of top-k and top-p sampling.
5. **T5+WMD (Word Mover Distance):** Similar to T5+ DSA but additionally we added WMD
along with standard cross-entropy loss. The loss function is given as follows.
L = (1 − λ) × LCE + λ × LWMD (2) LWMD = WMD(hp, hr) (3)
Here, LCE indicates the standard crossentropy loss, and LWMD indicates word mover distance as a loss, where hp and hr are predicted and reference headlines. λ is a hyperparameter. For the best-performing model, λ is 0.5.
6. **T5+Avg Loss:** Similar to T5+DSA, but additionally the final loss is an average crossentropy loss for the same news article. The loss function is given as follows.
$$L={\frac{L_{1C E}+L_{2C E}+L_{3C E}}{3}}$$
$\left|\right\rangle$.
The losses L1CE, L2CE and L3CE are calculated with respect to each headline-1, headline2 and headline-3 respectively.
## D Human Evaluation Setup
We conducted a human evaluation with four metrics i.e., Fluency (Flu), Relatedness (Rel), *Correctness*
(Corr), and Diversity. *Fluency* measures how fluent and grammatical the generated text is. *Relatedness* indicates how much the generated outputs are in the context with input(s), *Correctness* measures semantics and meaningfulness. Finally, *Diversity* measures how diverse the generated headlines are.
A human evaluation task was conducted to compare the results of our proposed model with baselines. The evaluations were carried out by 20 human evaluators, each of whom held at least a Masters's degree and possessed a good knowledge of the English language. We selected 50 input news articles randomly from the dataset and generated three headlines for each article using the selected models. For each input, we randomly selected the k th generated headline (k ∼ 1, 2, 3) from the models (both baselines and proposed). For example, if k = 2, we selected the second generated headline from the proposed model as well as from all the other baselines. This process was repeated for all 50 input news articles. For the first task, the dataset consists of 3-tuples containing the news article, headline from the proposed model, and headline from the baseline model. The annotators were asked to provide relative scores based on fluency, relatedness, and correctness between the two headlines. They were given three options
(0, 1, 2), where 1 indicated that headline-1 was better, 2 indicated that headline-2 was better, and 0 indicated a tie. The annotators were not informed about the baseline and proposed model results.
The second task aims to ensure the diversity of generated headlines. Similar to the first task, we selected 50 samples from the proposed model and other baselines for the same news articles. The dataset consists of a news article and three headlines. The annotators were asked to provide diversity scores ranging from 1 to 5, where 1 indicated headlines with the least diversity or unacceptable quality and 5 indicated diverse headlines along with good quality.
## E Implementation Detail
In our proposed model, we utilized pre-trained weights of the T5-base encoder for the pre-trained encoder used in the KEYSELECT module during training. The model was trained for 20 epochs, and the best checkpoint was selected based on the validation loss. We used l = 12 self-attention heads from the pre-trained encoder of the *KeySelect* module. As we aimed to generate three diverse headlines, we set c = 3, which implies g = 4. The optimal values for our best-performing model were m = 10, n = 3, r = 2, and λ = 0.5. The total number of parameters was 3 × 108. We utilized the Adam optimizer technique with a learning rate of 1e − 4. During the test phase, we used the combination of Top-K and Top-p sampling decoding strategies, where K = 50 and p = 0.95. The batch size was 32. We implemented all the models using PyTorch (Hugging-face). Model training was performed on a V100, 32GB single GPU.
## F Ablation Study
We conducted an ablation study to analyze the effect of different model components on the performance of our proposed model. The experimental results are presented in Table-5. First, we added a plug-and-play module called WordNet (Fellbaum, 1998) to our model, which is used to obtain related keywords from the input text. Specifically, if n keywords are extracted from the input text in a cluster ci, then the final set of keywords after using the WordNet module would be at least 2n keywords for that particular cluster ci. However, in this experiment, we observed a significant drop in quality across all generated headlines. Next, we experimented with removing the Word Mover Distance component from the loss function and observed a drop in performance in terms of BLEU and PBLEU scores compared to our proposed DIVHSK
model. We also experimented with different values of the hyperparameter λ used in the loss function and found that our proposed model outperforms all other variations of the model. Overall, the ablation study demonstrates the importance of the different model components in achieving the best performance for headline generation.
## G Model Generated Headlines
In this section, we present the results generated by our proposed model, along with the results of baseline models. The generated headlines, along with input news and reference headline, are tabulated in Tables 6 and 7.
| Experiments | Headline-1(⇑) | Headline-2 (⇑) | Headline-3 (⇑) | P-BLEU | | | |
|----------------------------------------------------------------------------------------------|-----------------|------------------|------------------|----------|---------|--------|--------|
| BLEU | ROUGE-L | BLEU | ROUGE-L | BLEU | ROUGE-L | (⇓) | |
| DivHSK without WMD | 15.10 | 0.2552 | 14.55 | 0.2419 | 15.88 | 0.2541 | 0.6488 |
| DIVHSK with WordNet | 15.05 | 0.2671 | 14.71 | 0.2673 | 14.62 | 0.2699 | 0.6087 |
| DIVHSK Model (λ= 0.1) | 14.39 | 0.2763 | 13.97 | 0.2795 | 13.45 | 0.2722 | 0.5897 |
| DIVHSK Model (λ = 0.2) | 15.31 | 0.2864 | 15.12 | 0.2824 | 16.31 | 0.2882 | 0.6211 |
| DIVHSK Model(Ours) (λ = 0.5) | 16.83 | 0.2896 | 17.95 | 0.2954 | 17.72 | 0.2955 | 0.6477 |
| Table 5: Different ablation experiments that provide clarification for model design choices. | | | | | | | |
| Actress Raveena Tandon who will be making her digital debut with the crime thriller series Aranyak said that her kids are excited to see her on OTT. She added My kids...tell me | | | |
|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------|--------------------------------|
| News | Mom you re going to be on Netflix It s a cool thing for them. Speaking about her character as a cop in the series Raveena said She has incredible strength. My kids feel it's a cool thing | | |
| Reference | to be on OTT: Raveena on | | |
| Headlines | her digital debut | My kids feel being on Netflix is a cool thing Raveena Tandon on digital debut with Aranyak | |
| Model | Generated Headline 1 | Generated Headline 2 | Generated Headline 3 |
| My kids are excited to see | My kids are excited to see me on Netflix: Raveena | My kids are excited to see me | |
| Mixture | me on Netflix: Raveena | | |
| Selector | on OTT: Raveena | | |
| Tandon | Raveena Tandon on her digital debut with Aranyak My kids feel it's a cool thing to be on Netflix | | |
| Raveena Tandon Says Her | My kids are excited to see | My kids are excited to see | |
| MoKGE | Kids Are Excited To See | me on OTT: Raveena | her on Netflix |
| Her On Netflix Tell me mom you're going | Tell me mom you're going | Tell me mom you're going | |
| T5-Avg | to be on Netflix it's a cool | to be on Netflix it's cool | to be on Netflix it's a cool |
| thing for kids: Raveena | for kids: Raveena | thing for kids, Raveena | |
| My kids are excited: Raveena | Kids excited to see me on | Kids excited to see me on | |
| Mixture | on making digital debut in | Netflix: Raveena on | Netflix: Raveena on making |
| Decoder | 'Aranyak' | 'Aranyak': Tandon | digital debut with 'Aranyak' |
| I am super excited for my | Mom is to be on Netflix. It's | | |
| Ours | Actress Raveena to play as | kids to see me on Netflix: | a cool thing for kids: Raveena |
| cop in a thriller on Netflix | Raveena | on her OTT debut | |
| Table 6: Sample generated headlines with different baselines and proposed model | | | |
| China filed the highest number of patent applications globally in 2020 retaining its top position for the second consecutive year the UN s World Intellectual Property Organization | | | |
|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------|-------------------------------------------------------------|
| News | WIPO said. China filed 68,720 applications last year while the US filed 59,230. In 2019 China had replaced the US as the top patent application filer for the first time in over four decades. China files highest patents China becomes world's top China extends lead over U.S. | | |
| Reference | globally for 2nd year in a | patent filer after four decades | in global patents filings |
| Headlines | row: UN | with US on top | U.N. says |
| Model | Generated Headline 1 | Generated Headline 2 | Generated Headline 3 |
| China tops the list of top | China files highest number | China files highest number | |
| Mixture | patent filers for 2nd | of patent applications globally | of patent applications globally |
| Selector | consecutive year | for 2nd consecutive year | for 2nd consecutive year |
| MoKGE | China tops the list of top patent filers globally in 2020 | China retains top spot for 2nd consecutive year: UN | China tops the list of world's top patent exporters in 2020 |
| China files highest number of | China files highest number of | china files highest number of | |
| T5-Avg | patent applications globally | patent applications globally in | patent applications in 2020 |
| in 2020 retains top position | 2020 retains top position: UN | retains top position: UN says | |
| China retains top ranking in | China retains top position in | China retains top position in | |
| Mixture | 2020, file the highest patent | 2020, filed highest number of | 2020, filed highest number of |
| Decoder | applications globally | patent applications | patent applications |
| China retains top position in | China files highest number | China replaces US as top | |
| Ours | global patent filings for | of patents globally in 2020, | |
| second consecutive year. | retains top spot: UN | patent applicant: UN | |
| Table 7: Sample generated headlines with different baselines and proposed model | | | |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
After the conclusion as stated in the call for the main conference paper.
✗ A2. Did you discuss any potential risks of your work?
We use clean dataset after human verification and validation.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and section 1.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3
✓ B1. Did you cite the creators of artifacts you used?
section 1 and 4
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
I have used all the publicly available artifacts which don't have any research restrictions.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? section 1 and 4
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? section 4 and appendix A
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? section 1
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. section 4 and appendix A
## C ✓ **Did You Run Computational Experiments?** Section 4 And Appendix E
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix E
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? section 4 and appendix E
✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Due to the large set of experiments and computationally constrained
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix E
D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Section 4 and A
✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
The dataset is created by two of the co-authors and they were well aware of the risk and other details. They have considered the expected policies.
✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
The dataset is created by two of the co-authors.
✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
The dataset comprises of data points (news articles) available in the public domain. The urls are part of the dataset.
✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
The created data source is reviewed by the multiple stockholders
✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
The dataset is created by two of the co-authors. |
bexte-etal-2023-similarity | Similarity-Based Content Scoring - A more Classroom-Suitable Alternative to Instance-Based Scoring? | https://aclanthology.org/2023.findings-acl.119 | Automatically scoring student answers is an important task that is usually solved using instance-based supervised learning. Recently, similarity-based scoring has been proposed as an alternative approach yielding similar perfor- mance. It has hypothetical advantages such as a lower need for annotated training data and better zero-shot performance, both of which are properties that would be highly beneficial when applying content scoring in a realistic classroom setting. In this paper we take a closer look at these alleged advantages by comparing different instance-based and similarity-based methods on multiple data sets in a number of learning curve experiments. We find that both the demand on data and cross-prompt performance is similar, thus not confirming the former two suggested advantages. The by default more straightforward possibility to give feedback based on a similarity-based approach may thus tip the scales in favor of it, although future work is needed to explore this advantage in practice. | # Similarity-Based Content Scoring - A More Classroom-Suitable Alternative To Instance-Based Scoring?
Marie Bexte1and Andrea Horbach**1, 2** and **Torsten Zesch**1 1CATALPA, FernUniversität in Hagen, Germany 2Hildesheim University, Germany
## Abstract
Automatically scoring student answers is an important task that is usually solved using instance-based supervised learning. Recently, similarity-based scoring has been proposed as an alternative approach yielding similar performance. It has hypothetical advantages such as a lower need for annotated training data and better zero-shot performance, both of which are properties that would be highly beneficial when applying content scoring in a realistic classroom setting. In this paper we take a closer look at these alleged advantages by comparing different instance-based and similarity-based methods on multiple data sets in a number of learning curve experiments. We find that both the demand on data and cross-prompt performance is similar, thus not confirming the former two suggested advantages. The by default more straightforward possibility to give feedback based on a similarity-based approach may thus tip the scales in favor of it, although future work is needed to explore this advantage in practice.
## 1 Introduction
Approaches in automatic content scoring can be classified into two paradigms: *instance-based scoring* and *similarity-based scoring* (Horbach and Zesch, 2019). Figure 1 gives a schematic overview of the two, with most work in the area of content scoring falling into the instance-based paradigm, where an algorithm is trained on learner answers as the only information source and learns about properties of correct and incorrect answers directly from these answers. In similarity-based scoring, in contrast, learner answers are compared to one or more target answers and correctness judgments are based on either the similarity to a correct answer
(such as a sample solution) or on the label of the closest answer(s) to a given learner answer.
In comparison to the instance-based paradigm, similarity-based scoring is substantially less well researched (see e.g. Sakaguchi et al. (2015)). Recent work by Bexte et al. (2022) shows that similarity-based content scoring methods can yield comparable results to instance-based scoring if a similarity metric is substantially fine-tuned. However, it also showed that more research is needed to understand when it can be successful and how it compares to instance-based scoring. To do this, we first identify three possible advantages of similaritybased scoring: reduced **data hunger**, better **crossprompt performance** and **explainability**. These aspects would be highly beneficial when it comes to the application of automatic scoring in a realistic classroom setting: A typical classroom (ideally)
does not consist of hundreds of students, meaning that collecting large amounts of answers to a question from students is unrealistic. Since stateof-the-art content scoring builds on prompt-specific models, it would be highly desirable for a model to either be able to work well on this smaller amount directly or at least by making use of larger already existing cross-prompt data in training a promptspecific model. Finally, feedback has been identified as one of the major influence factors for learning success (Hattie and Timperley, 2007), but oneon-one student-teacher time is limited, so a model that can justify why it awarded a certain number of points would be preferred over a performance-wise comparable one that simply returns a score.
We perform a comparison of the two paradigms on different data sets typically used in one but not the other, focusing on a setup with limited data and also assessing to what extent using cross-prompt data can help overcome these limitations. We find that while overall highly-dependent on the choice of cross-prompt data, instance-based scoring benefits more. For a more encompassing comparison of the two paradigms, we also compute learning curves extending over a wider range of training data sizes and while we find that there is no one best method for smaller amounts of data, there is a
![1_image_0.png](1_image_0.png)
point where similarity-based deep learning starts to consistently outperform all other methods, closely followed by instance-based deep learning. In comparing how much predictions vary based on the choice of training data, we find an overall smaller standard deviation for similarity-based predictions.
We make all our code publicly available.1
## 2 Instance-Based Vs. Similarity-Based Scoring
Instance-based scoring has become the de facto state of the art in automated scoring. Recent experiments however showed that, with the emergence of deep learning, similarity-based models can keep up with instance-based ones:
For essay scoring, Xie et al. (2022) use a BERT
model in a pairwise contrastive regression setup to score an essay in comparison to a reference, thereby outperforming the instance-based state of the art.
For content scoring, Bexte et al. (2022) reach comparable performance to an instance-based BERT
model by using fine-tuned SBERT embeddings in a knn-like search for the most similar answer(s).
Tunstall et al. (2022) introduce Sentence Transformer Finetuning (SETFIT), which successfully uses SBERT in a few-shot setting by using the finetuned embeddings to train a classification head.
In line with this low-resource setup, similaritybased scoring is often applied to data sets containing only few answers per prompt. This includes work on computer science questions (Mohler and Mihalcea, 2009; Mohler et al., 2011), English and German reading comprehension data (Bailey and Meurers, 2008; Meurers et al., 2011) and several 1https://github.com/mariebexte/
sbert-learning-curves approaches on the Student Response Analysis data set (Dzikovska et al., 2013), such as Levy et al.
(2013) or more recently Willms and Padó (2022).
Even though in contrast, research on data with hundreds of answers per prompt or more is often associated with instance-based methods, such as most work on the ASAP data set (e.g., Higgins et al.
(2014); Heilman and Madnani (2015); Kumar et al.
(2019)), this does not necessarily mean that the data hunger is smaller for similarity-based models than for instance-based models as the former are often used to train a classifier across prompts.
Still, also considering the recent success of SETFIT in a few-shot setting , we address the perceived dichotomy in data sets by contrasting the performance of both paradigms on both kinds of data sets. This gives insight into the difference regarding their **data hunger**. To investigate the supposed advantage of similarity-based scoring on limited data, we focus on learning curve experiments on smaller amounts of training data.
Previous work comparing instance-based to similarity-based scoring however showed similarity-based performance to be close to the respective best-performing instance-based model on both small (Logistic Regression) or larger amounts of training data (BERT). (Bexte et al.,
2022), whereas Logistic Regression and BERT
have their strengths towards the lower and higher end of the training size spectrum, respectively. To further investigate this, we extend our learning curves beyond the low-resource spectrum and include a wider range of training sizes.
Another aspect Tunstall et al. (2022) already touched on the influence of the reference answer choice on scoring performance, thus asking how
![2_image_0.png](2_image_0.png)
(un)lucky one can be when selecting these and whether it is worth investing time to carefully pick them. To investigate this, we compare the standard deviation of different training data samples for instance-based and similarity-based scoring.
As mentioned above, the dichotomy of similarity-based and instance-based data sets is accompanied by instance-based scoring typically training one model per prompt, while similaritybased approaches often make use of data across different prompts, suggesting a possible superiority of similarity-based methods regarding **crossprompt transfer**. Further supporting this notion is the fact that a similiarity-based model won the cross-prompt track of the 2021 NAEP Automatic Scoring challenge2, although the overall performance level of submissions lagged behind stateof-the-art instance-based models in within-prompt settings. It is however unclear how well a stateof-the-art instance-based model would fare on the same cross-prompt data, as such comparisons are lacking. Condor et al. (2021) use different ways of encoding answers to train a cross-prompt model in an instance-based fashion. They find SBERT
embeddings to be superior over Word2Vec embeddings or a bag of words approach, leaving open the question of whether using the SBERT embeddings in a similarity-based fashion would have yielded even better performance. Since the similarity-based zero-shot cross-prompt experiments by Bexte et al.
(2022) showed mixed results, we undertake a comparison of the non-zero-shot cross-prompt performance of instance-based and similarity-based methods.
A third possible advantage of similarity-based scoring that requires user studies to investigate and is thus beyond the scope of this paper is that one can show which reference answer(s) led to a certain classification decision, by default lending it a certain degree of **explainability** that could serve as pedagogical **feedback** to students. This feedback is mainly aimed at students or teachers as opposed to AI experts, since we do not directly disclose the inner workings of the algorithm, but rather provide some rationale about why a score has been assigned. A similar direction is addressed by clustering approaches for automatic scoring (such as Basu et al. (2013); Wolska et al. (2014); Zehner et al. (2016)) with clustering essential also being a similarity-based method bearing the advantage of structured output that can be used to provide human feedback to learners efficiently.
To summarize, we identified three potential benefits of similarity-based models: a reduced training data hunger, the ability to abstract across prompts and the possibility of giving feedback based on reference answers, the latter of which we leave for future work.
## 3 Experimental Setup 3.1 Scoring Approaches
Similarity-based approach We use the similarity-based approach described in Bexte et al. (2022), where a pre-trained Sentence-BERT
(SBERT) model (All-miniLM-L6-v2) is fine-tuned on sentence pairs formed from the training data.
These sentence pairs are labeled with a similarity score of 1 (0), if both answers in the pair have the same (a different) label. In this manner, we create as many pairs as possible. Figure 2 gives an overview of this fine-tuning setup, and also shows how the fine-tuned model is then used to obtain predictions on the test data: With the training data serving as a set of reference answers, each answer from the test set is compared against every answer from the training set, and the label of the most similar training answer is then used as prediction.
We train for 5 epochs with batch size 8 and without warmup, using an OnlineContrastiveLoss and an EmbeddingSimilarityEvaluator, otherwise keeping all values at their defaults. Validation is done after each epoch and we use the model with minimal validation loss for evaluation on the test data. Similarity-based baselines Since similaritybased scoring also works without any finetuning, we include similarity-based baselines that essentially perform only the inference step described in the above SBERT setup. An answer from the test set is thus compared to all answers from the reference (i.e. training) set, predicting the scoring label of the most similar reference answer.
While we also ran experiments using overlap and cosine similarity of word count vectors3, we for the sake of brevity only report results for **edit** distance, as an example of surface similarity, and the **pretrained** SBERT model without any adaptation to the respective prompt, as an example of working on vectorized representations.
Instance-based approaches Experimenting with a number of shallow algorithms4showed **Logistic**
Regression (LR) to perform best, which is why we only report results for this method. We used the scikit-learn implementation in standard configuration (apart from setting max_iter to 1000) with token uni- to trigram features. As a representation of instance-based deep learning, we also fine-tune a **BERT** model (bert_base_uncased) from huggingface5. We train this model for 20 epochs with a batch size of 8, running evaluation after each epoch and keeping the model with the lowest validation loss for evaluation on testing data. Other than that, parameters are kept at their default values.
3Results using these methods were in the same ballpark as edit distance and pre-trained SBERT model.
4We used SVMs, random forests and logistic regression.
5https://huggingface.co/
bert-base-uncased
| ASAP | SRA-Beetle | SRA-SEB | |
|--------------------------|--------------|--------------|-------------|
| Domains | Science, | Electricity, | Science |
| Bio, ELA∗ | Electronics | | |
| # Prompts | 10 | 47 | 135 |
| # Answers/prompt - Train | 1704 | 84 | 37 |
| - Test | 522 | 9 | 4 |
| Label set - # Labels | 2-3 | 2 or 5 | 2 or 5 |
| - Scale | numerical | categorical | categorical |
Table 1: data sets used in our experiments. ∗English Language Arts We trained on NVIDIA Quadro RTX 6000 and A100 GPUs for a total of close to 4000 GPU hours.
## 3.2 Data
We perform experiments on two widely used English content scoring data sets that are freely available for research purposes: **ASAP**6, which is typically used for instance-based scoring, and the **Student Response Analysis (SRA)** corpus
(Dzikovska et al., 2013), which has often been used for similarity-based experiments and consists of the two subsets **Beetle** and SciEntsBank (SEB). Since these data sets consist of answers to factual questions, they do not contain identifying information of students or offensive content.
While labels in ASAP are numerical (0 to either 2 or 3 points), answers in SRA are labeled nominally following a textual entailment view on automatic scoring with 5 possible outcomes: correct, contradictory, *partially_correct_incomplete*,
irrelevant or *non_domain*. We refer to this data set as **5-way**. In addition, we also use the **2-way**
version, where labels other than *correct* are merged into an *incorrect* class.
We use the default split into training and test data as provided in the respective data set. In all deep learning setups (i.e. fine-tuning BERT & SBERT),
we use parts of the training data for each prompt as a separate validation data set, whereas in shallow learning all training instances are used in the actual learning process. The rationale behind this is that we want to compare the overall amount of human annotation effort required to train a model, regardless how exactly this annotated data is used.
We randomly chose 4 answers per prompt for validation. Picking just 4 answers might seem a low number, but is reasonable since our experiments specifically target the use of limited training data.7
## 3.3 Evaluation
We compare the instance-based and similaritybased methods in a learning curve setup to examine the influence of different training set sizes. For ASAP with numeric labels, we use quadratically weighted kappa (QWK) (Cohen, 1968) as evaluation metric, whereas we use weighted F1 measure for the categorical labels in SRA.
Depending on the number of labels present in a data set, we consider different training sizes for the learning curve. For ASAP and SRA with 5-way labels, we start with five instances and go up to 50 in steps of five. For SRA with 2-way labels, we start with two instances, and also go up to 50, but first in steps of 2 (until 14 instances) and then in steps of four. For each training size, we train with 20 different randomly taken training data samples to mitigate sampling effects.
Due to the low number of on average 37 answers per prompt in SEB, we for this data set cut off results at a maximum training size of 30, as results for larger training sizes would only rely on the few prompts with enough answers to compute these results. Also note that the limited number of training answers to sample from allows for little variance between the 20 randomly sampled subsets.
## 4 Data Hunger
In comparing instance-based and similarity-based scoring methods, we focus on the amount of training data needed (i.e. how data hungry the approaches are). We focus on the low-resource setting, as (i) it is more realistic in a classroom setting, and (ii) the fact that similarity-based and instancebased perform on par has already been established when training data is abundant (Bexte et al., 2022).
Results in Figure 3 show that SBERT has the upper hand on SEB and ASAP, while it is outperformed by LR on Beetle. Other than on ASAP,
baseline similarity-based methods are often surprisingly strong on both Beetle and SEB. We speculate that this might be due to shorter and simpler answers, which is also indicated by a higher overall performance. As expected, performance is overall higher on the 2-way-labeled data, but apart from this, relative results of the different methods are 7We also validated on a few random prompts that this split is a good trade-off to save as many instances as possible for the actual training process.
similar on the five-way-labeled data. Note that results are averaged across all prompts of the respective data set and that individual performances per prompt again vary tremendously.
One application that would benefit from models that are doing well on small amounts of data is the use of automated scoring in a realistic classroom setting, since the average number of students in a class does not allow collecting larger amounts of answers to any given question. If a teacher were however to make up exemplary answers for the different possible outcomes, they might produce a more balanced sample of reference answers than what we use in our random sampling of training data. In Figure 4(a), we therefore also show learning curves using balanced sampling of ASAP data, which means that samples will contain the same amount of answers for each label.8 Averaged for LR, BERT and SBERT over all training sizes, this yields a .09 increase in QWK
compared to random sampling. The order of performance for individual methods does however vary substantially between the two settings and across different training sizes, with a tendency in most cases of SBERT outperforming other methods and the baseline methods (pre-trained and edit) being inferior. A curious exception to this observation is the curve for BERT on randomly drawn data.
Previous work on ASAP had found that both BERT and SBERT outperform LR on larger amounts of training data, while LR was superior on smaller data sizes (Bexte et al., 2022). Although our results do not find a general superiority of LR,
we take a closer look at how the different methods compare for larger training sizes. We therefore extend the ASAP learning curve (with random sampling) to include up to 1000 training instances (Figure 4(b)).9 We observe that soon after 100 training instances, there is a clear advantage of neural over shallow methods, with SBERT outperforming LR
much earlier. Overall, SBERT consistently outperforms or is at least on par with all other methods.
## 4.1 Potential For Combining Approaches
As the different methods sometimes show widely differing performance, one idea towards improving overall performance is to combine their predictions.
SEB (5-way) Beetle (5-way) **ASAP**
![5_image_0.png](5_image_0.png)
SEB (2-way) **Beetle (2-way)** Oracle
![5_image_1.png](5_image_1.png)
## We Do This In Two Different Ways:
In the **voting** condition, we employ a majority voting strategy over the predictions of all methods, i.e. take the most frequently predicted label. In case of ties, we randomly decide on one of them.
In the **oracle** condition, we predict the correct label whenever at least one of the methods is able to do so. If none of them is, we use the prediction that is closest to the ground truth. This is of course a hypothetical, idealized setting, as we in practice do not know beforehand which method gives the correct prediction, and can therefore be seen as the ceiling performance on combining all methods.
Results for both settings are included in Figures 3 and 4. The only setting where the voting condition tops all individual methods is ASAP
with balanced sampling. In all other cases there is enough disagreement between the individual predictions that there is always one method that is on par with and in many cases even outperforming the voting condition. Combining predictions of all methods into an oracle condition, however, yields a pronounced performance increase of around .2 in weighted F1 for SRA and an even more pronounced one of around .4 in QWK for ASAP, suggesting that future experiments might build a stacked classifier to test how much of this potential can be realized.
To dissect the cause for these performance in-
![5_image_2.png](5_image_2.png)
creases, we perform two further analyses: In the unique condition, we for each of the methods evaluate which proportion of the answers in a data set was scored correctly by the respective method alone, i.e. misclassified by all other methods. In the all condition, we evaluate which proportion of answers was scored correctly by all methods, i.e.
misclassified by none of them.
Table 2 shows the results, with the percentage of answers falling into the all condition indicating how many are easy to predict correctly, which is of course varying in line with the overall performance level on the different data sets. We observe the highest proportion of 'easy' answers .44 for SRA
with 2-way labeling and the lowest of .23 for ASAP.
![6_image_0.png](6_image_0.png)
Oracle Voting BERT Logistic Regression S-BERT Pretrained Edit While this proportion tells us how many answers are reasonably easy to score correctly, it also tells us that the remainder of the answers it mislabeled by at least one of the methods. Taking this to the extreme and looking at the fraction of answers that is scored correctly by only one of them, i.e. looking towards unique condition, the per-method percentages are highest for ASAP and lowest for the SBERT methods (both pre-trained and fine-tuned).
Even though the individual numbers may overall not seem that high, note that in the oracle condition it is actually the sum of all these proportions that contributes to the observed high performance.
## 4.2 Influence Of Reference Answer Selection
The choice of the specific training answers (which are the reference answers in similarity-based scoring) influences performance beyond the balanced/random dichotomy. To highlight this variability, Figure 5 plots the distribution of performances across the 20 runs for ASAP for both balanced and randomly sampled data.10 In general, we see that standard deviation is lower and varies less for SBERT than for BERT.
Notably, for SBERT it shows a further decline for larger training sizes when using balanced sampling, which we do not see for BERT. A similarly pronounced decline in standard deviation was observed for the similarity-based baselines. Overall, this indicates that the choice of reference answers for the similarity-based approach introduces less 10We limit this analysis to ASAP, as its larger pool of training instances allows for more sampling variance. For the sake of brevity we only report results for BERT and SBERT.
![6_image_1.png](6_image_1.png)
## 5 Cross-Prompt Scoring
Another claim often implicitly attached to similarity-based methods is that they might have greater capabilities of learning a cross-prompt model. This intuitively makes sense as instancebased approaches rely on the presence or absence of certain lexical material while similarity-based approaches can exploit the closeness to a model answer. Bexte et al. (2022) did however find that in some cases fine-tuning an SBERT model to one prompt before adapting it to another was actually detrimental to performance, with an off-the-shelf pre-trained SBERT model sometimes even outperforming the fine-tuned one. Since they did a zeroshot application to the new prompt, no data from the target prompt was used adapt the model to it.
We therefore first fine-tune a model on 1000 answers from a base prompt, and then use a smaller amount (again building the learning curves from Figure 3) to adapt this model to the target prompt.11 Figure 6 shows the change in performance for each combination of prompts in ASAP12 compared to a prompt-specific setup without pre-training (i.e. the results from Experimental Study 1). To gain a better overview, results are not only averaged over all prompts but also all training sizes.
Like Bexte et al. (2022), we group prompts according to the underlying topics Science, Biology ELA, as a transfer within the same topic group might be more successful than one across topic 11We again only report results for the SOTA models BERT
and SBERT for the sake of brevity.
12As only this data set provides a large enough amount of answers, we only perform this experiment on ASAP.
groups. We see that - contrary to the implied superiority of similarity-based scoring - , the largest
![7_image_0.png](7_image_0.png)
![7_image_1.png](7_image_1.png)
performance increases of up to .3 in QWK happen for the instance-based BERT model. These relatively pronounced increases mostly occur for transfers within topic groups, but there are also instances of (albeit less) successful cross-prompt transfer, thus partially confirming the hypothesis - at least for BERT. There seems to be a systematic detrimental effect of using a Biology base prompt for a target ELA prompt, which does however not occur when prompts are used the other way round.
Apart from this, there is quite some symmetry to the results, meaning that if using prompt A as base for target prompt B helps (harms), the same is true for using B as a base for A.
## 6 Summary & Future Work
We compared instance-based and similarity-based methods for content scoring, examining whether properties that are often implicitly attributed to the latter are in fact empirically observable. In a set of learning curve experiments directed at the claim of similarity-based methods being less data hungry, we find that a fine-tuned SBERT model does often yield the best results, but not for Beetle, where this method was outperformed by the instance-based logistic regression. The suggested superiority of similarity-based scoring when it comes to smaller training sizes could thus not be confirmed.
When running experiments with larger training sizes on ASAP, SBERT remains the bestperforming method up until using 750 training instances, from when on it is joined by Bert. In a comparison of how much performance varies depending on the choice of training data, SBERT had the upper hand, especially when a relatively large amount of balanced training data that is sampled.
Another proposed property of similarity-based scoring is the ability to transfer across prompts.
This could however not be confirmed by our experiments, where the largest performance increases were observed for the instance-based BERT model.
Examining performance of a hypothetical oracle condition showed that it might be worthwhile to learn a stacked classifier, thus combining the strengths of the different (both similarity- and instance-based) methods. Other possible avenues of future work are topics that have been researched in the context of instance-based scoring but not, or at least not to the same extent, for similarity-based scoring. These include the importance of spelling errors or the vulnerability to adversarials.
## 7 Limitations
Since our results regarding a fine-tuned similarity method are limited to the SBERT fine-tuning introduced by Bexte et al. (2022), our findings are limited to this specific similarity-based setup and cannot exclude that other similarity-based methods might behave differently. We also did not consider training sizes larger than 1000 instances of ASAP,
and can therefore not speak for how the relative performance of the different methods would be affected by using even more training data. Regarding the experiment on larger training data sizes, we also limited our analysis to ASAP, so it is necessary to compare the observed effects to those that occur on other data sets. The same goes for our cross-prompt experiments, which were also limited to ASAP . Other data sets cover other content domains and can thus produce different effects. Finally, while we do discuss the advantage of a more straightforward explainability of similarity-based models regarding feedback, this is an entirely theoretical argument that goes beyond the scope of this paper and would therefore have to be investigated further in future work.
## 8 Ethical Considerations
Automatic scoring can foster great efficiency over manual scoring, and can thus, especially considering limitations regarding human scoring resources, be a highly useful addition to the educational world.
It enables instantaneous teacher-indepedent feedback and frees up teacher resources.
Nonetheless, automatically scoring student answers brings about a number of concerns regarding when it may be more or less appropriate.
While automated scoring in general can, depending on model implementation and quality, both contribute to and reduce fairness, similarity-based scoring at least provides model introspection at the level of being able to return the answers that lead to a certain classification outcome as feedback. In general, automatic scoring puts a certain pressure of conformity on answers: An answer that differs in style from what was observed during training, irrespective of whether it is in fact correct, is at risk of being misclassified.
Regarding such biases, it should be noted that humans are not perfect either - but an English teacher is biased against a particular student, they still have the option of switching classes. The same may not be possible if a widely used scoring model is negatively biased against the kinds of answers they give.
Finally, whether to use automatic or manual scoring does not have to be a question of one or the other - it may be worthwhile to have a model only perform a first grouping, in hopes that this would speed up the human grading process (Pado and Kiefer, 2015) or return answers it is unsure about for manual reassessment. Another option that is already employed in practice (for example by the Educational Testing Service) is to have the same set of answers graded by both a human and a scoring model, only requiring a second humand annotator when there is too much disagreement between the two. This ensures that the high-stakes TOEFL test can benefit from more efficient, machine-supported scoring while also putting a layer of quality control on its predictions. In a lower-stakes scoring setup, for example in an optional training exercise for students, one may want to be more lenient towards the model predictions, employing a scoring approach without human involvement at the risk of getting a certain percentage of erroneous predictions.
## Acknowledgements
This work was partially conducted at "CATALPA
- Center of Advanced Technology for Assisted Learning and Predictive Analytics" of the FernUniversität in Hagen, Germany, and partially within the KI-Starter project "Explaining AI Predictions of Semantic Relationships" funded by the Ministry of Culture and Science, Nordrhein-Westfalen, Germany.
## References
Stacey Bailey and Detmar Meurers. 2008. Diagnosing meaning errors in short answers to reading comprehension questions. In Proceedings of the third workshop on innovative use of NLP for building educational applications, pages 107–115.
Sumit Basu, Chuck Jacobs, and Lucy Vanderwende.
2013. Powergrading: A clustering approach to amplify human effort for short answer grading. *Transactions of the Association for Computational Linguistics*, 1:391–402.
Marie Bexte, Andrea Horbach, and Torsten Zesch. 2022.
Similarity-based content scoring - How to make SBERT keep up with BERT. In *Proceedings of the*
17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022), pages 118–
123.
J. Cohen. 1968. Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. *Psychol Bull.*, 70(4):213–220.
Aubrey Condor, Max Litster, and Zachary Pardos. 2021.
Automatic short answer grading with SBERT on outof-sample questions. International Educational Data Mining Society.
Myroslava O Dzikovska, Rodney D Nielsen, Chris Brew, Claudia Leacock, Danilo Giampiccolo, Luisa Bentivogli, Peter Clark, Ido Dagan, and Hoa T Dang.
2013. Semeval-2013 task 7: The joint student response analysis and 8th recognizing textual entailment challenge. Technical report, North Texas State Univ Denton.
John Hattie and Helen Timperley. 2007. The power of feedback. *Review of educational research*, 77(1):81–
112.
Michael Heilman and Nitin Madnani. 2015. The impact of training data on automated short answer scoring performance. In *Proceedings of the Tenth Workshop* on Innovative Use of NLP for Building Educational Applications, pages 81–85.
Derrick Higgins, Chris Brew, Michael Heilman, Ramon Ziai, Lei Chen, Aoife Cahill, Michael Flor, Nitin Madnani, Joel Tetreault, Daniel Blanchard, et al. 2014. Is getting the right answer just about choosing the right words? The role of syntacticallyinformed features in short answer scoring. arXiv preprint arXiv:1403.0801.
Andrea Horbach and Torsten Zesch. 2019. The influence of variance in learner answers on automatic content scoring. In *Frontiers in Education*, volume 4, page 28. Frontiers.
Yaman Kumar, Swati Aggarwal, Debanjan Mahata, Rajiv Ratn Shah, Ponnurangam Kumaraguru, and Roger Zimmermann. 2019. Get it scored using autosas —
An automated system for scoring short answers. In Proceedings of the AAAI conference on artificial intelligence, pages 9662–9669.
Omer Levy, Torsten Zesch, Ido Dagan, and Iryna Gurevych. 2013. Recognizing partial textual entailment. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 451–455.
Detmar Meurers, Ramon Ziai, Niels Ott, and Janina Kopp. 2011. Evaluating answers to reading comprehension questions in context: Results for German and the role of information structure. In Proceedings of the TextInfer 2011 Workshop on Textual Entailment, pages 1–9.
Michael Mohler, Razvan Bunescu, and Rada Mihalcea.
2011. Learning to grade short answer questions using semantic similarity measures and dependency graph alignments. In Proceedings of the 49th annual meeting of the association for computational linguistics:
Human language technologies, pages 752–762.
Michael Mohler and Rada Mihalcea. 2009. Text-totext semantic similarity for automatic short answer grading. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 567–575.
Ulrike Pado and Cornelia Kiefer. 2015. Short answer grading: When sorting helps and when it doesn't.
In Proceedings of the fourth workshop on NLP for computer-assisted language learning, pages 42–50.
Keisuke Sakaguchi, Michael Heilman, and Nitin Madnani. 2015. Effective feature integration for automated short answer scoring. In *Proceedings of the* 2015 conference of the North American Chapter of the association for computational linguistics: Human language technologies, pages 1049–1054.
Lewis Tunstall, Nils Reimers, Unso Eun Seo Jo, Luke Bates, Daniel Korat, Moshe Wasserblat, and Oren Pereg. 2022. Efficient few-shot learning without prompts. In *Advances in Neural Information Processing Systems*.
Nico Willms and Ulrike Padó. 2022. A transformer for sag: What does it grade? In Proceedings of the 11th Workshop on NLP for Computer Assisted Language Learning, pages 114–122.
Magdalena Wolska, Andrea Horbach, and Alexis Palmer. 2014. Computer-assisted scoring of short responses: The efficiency of a clustering-based approach in a real-life task. In *International Conference* on Natural Language Processing, pages 298–310.
Springer.
Jiayi Xie, Kaiwei Cai, Li Kong, Junsheng Zhou, and Weiguang Qu. 2022. Automated essay scoring via pairwise contrastive regression. In Proceedings of the 29th International Conference on Computational Linguistics, pages 2724–2733.
Fabian Zehner, Christine Sälzer, and Frank Goldhammer. 2016. Automatic coding of short text responses via clustering in educational assessment. *Educational and psychological measurement*, 76(2):280–
303.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
7
✓ A2. Did you discuss any potential risks of your work?
8
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 3.2
✓ B1. Did you cite the creators of artifacts you used?
3.2
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
3.2
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
3.2
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
3.2
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
3.2
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
3.2
## C ✓ **Did You Run Computational Experiments?** 4, 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
3.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
3.1
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
4, 5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
3.1 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
ou-etal-2023-pragmatic | Pragmatic Inference with a {CLIP} Listener for Contrastive Captioning | https://aclanthology.org/2023.findings-acl.120 | We propose a simple yet effective and robust method for contrastive captioning: generating discriminative captions that distinguish target images from very similar alternative distractor images. Our approach is built on a pragmatic inference procedure that formulates captioning as a reference game between a speaker, which produces possible captions describing the target, and a listener, which selects the target given the caption. Unlike previous methods that derive both speaker and listener distributions from a single captioning model, we leverage an off-the-shelf CLIP model to parameterize the listener. Compared with captioner-only pragmatic models, our method benefits from rich vision-language alignment representations from CLIP when reasoning over distractors. Like previous methods for discriminative captioning, our method uses a hyperparameter to control the tradeoff between the informativity (how likely captions are to allow a human listener to discriminate the target image) and the fluency of the captions. However, we find that our method is substantially more robust to the value of this hyperparameter than past methods, which allows us to automatically optimize the captions for informativity {---} outperforming past methods for discriminative captioning by 11{\%} to 15{\%} accuracy in human evaluations. | # Pragmatic Inference With A Clip Listener For Contrastive Captioning
Jiefu Ou1 Benno Krojer2 **Daniel Fried**1 Carnegie Mellon University1 Mila/McGill University2 [email protected]
## Abstract
![0_Image_0.Png](0_Image_0.Png)
We propose a simple yet effective and robust method for contrastive captioning: generating discriminative captions that distinguish target images from very similar alternative distractor images. Our approach is built on a pragmatic inference procedure that formulates captioning as a reference game between a *speaker*, which produces possible captions describing the target, and a *listener*, which selects the target given the caption. Unlike previous methods that derive both speaker and listener distributions from a single captioning model, we leverage an offthe-shelf CLIP model to parameterize the listener. Compared with captioner-only pragmatic models, our method benefits from rich visionlanguage alignment representations from CLIP
when reasoning over distractors. Like previous methods for discriminative captioning, our method uses a hyperparameter to control the tradeoff between the informativity (how likely captions are to allow a human listener to discriminate the target image) and the fluency of the captions. However, we find that our method is substantially more robust to the value of this hyperparameter than past methods, which allows us to automatically optimize the captions for informativity - outperforming past methods for discriminative captioning by 11% to 15% accuracy in human evaluations.1
## 1 Introduction
Discriminative captioning provides a challenging testbed for generating context-sensitive grounded language. In this task, a model must produce a description of a *target image* (e.g., the green highlighted image in Figure 1) that allows a person to correctly identify the target image from among a set of similar *distractor images* (e.g., the red highlighted images). Good captions must strike a balance between two criteria: (1) being fluent 1The code is available at https://github.com/
JefferyO/prag_clip_contra_caption Figure 1: Illustration of the contrastive captioning task
![0_image_1.png](0_image_1.png)
with a random example from the ImageCoDe dataset.
Models are tasked with generating captions that distinguish the target image (a) from other very similar distractors images (b) to (d). (There are a total of 9 distractors in each set of images, we omit the rest of them for simplicity of illustration.) Compared with baselines from previous work, our proposed approach, PICL, generates informative captions that help clearly identify the target out of the distractors, while remaining natural and fluent.
descriptions of the target image and (2) being discriminative in context: allowing a person to pick out the target image from the set.
Past work on discriminative captioning has successfully applied techniques from computational pragmatics to trade off between the two criteria above (Andreas and Klein, 2016; Vedantam et al.,
2017; Cohn-Gordon et al., 2018). Possible captions are selected using a combination of two scoring functions: (1) the caption's probability under a standard image captioning model, or *base speaker* score, which measures the caption's fluency and faithfulness to the image, and (2) a base listener score, which predicts how likely a human listener would be to correctly identify the target image given the caption, i.e. measuring discriminativeness. These past works typically obtain the listener scores from the image captioning (speaker) model itself, for example using Bayesian inference over the set of possible images (Cohn-Gordon et al.,
2018). The relative weight of these two scores is controlled using a *informativity hyperparameter*,
2 whose value affects the tradeoff between producing captions that are predicted to be fluent and faithful, versus captions that are predicted to be discriminative. It is challenging to automatically choose a value for this hyperparameter, as captions that appear to be discriminative under a captioning model are frequently uninformative for people (Dessì et al., 2022).
Our approach, **PICL** (Pragmatic Inference with a CLIP Listener) follows this same pragmatic framework, but scores discriminativeness using a listener model separate from the speaker. We implement the listener model using CLIP (Radford et al., 2021). As shown in previous work, the rich vision-language representation learned in CLIP (1) provides robust assessments of modelgenerated captions that highly correlate with human judgments (Hessel et al., 2021), and (2) effectively quantifies the degree of discriminativeness/informativeness of visual referring expressions (Takmaz et al., 2022).
To evaluate PICL, we conduct experiments with sets of images from ImageCoDe (Krojer et al.,
2022), a challenging dataset originally designed for contrastive retrieval: retrieving target images among a set of distractors given contextual descriptions. We perform contrastive captioning on this dataset for the first time. We compare PICL to past work on two criteria: (1) informativeness and (2)
fluency, evaluating both metrics using automatic as well as human evaluations.
Results show that our approach typically outperforms past methods on both criteria, and is substantially more robust to the value of the informativity hyperparameter. In particular, we are able to choose this hyperparameter automatically by maximizing how informative the captions are predicted to be to human evaluators. In contrast, we find that maximizing predicted informativity leads past methods to produce captions that are so disfluent that they are misleading for people. In this automatic hyperparameter selection setting, our method produces captions that are 11% to 15% easier for human annotators to interpret correctly than past work.
## 2 Related Work
Contrastive Captioning A variety of methods for contrastive captioning generate captions that optimize for discriminative objectives, e.g., minimizing the textual similarity between captions for the target and distractor images (Wang et al., 2020), using generated captions as input to image retrieval models (Luo et al., 2018; Liu et al., 2018), and computing CLIP similarity scores between captions and target images (Cho et al., 2022). Other methods involve leveraging fine-grained image regional features to generate distinctive captions based on similar and/or unique objects among target and distractors (Wang et al., 2021; Mao et al., 2022), paraphrasing generic captions to enhance both diversity and informativeness (Liu et al., 2019), and finetuning RL-optimized caption models to encourage low-frequency words (Honda et al., 2022). Most of the methods above require training a discriminative captioning model - either by designing an discriminative captioning architecture that takes multiple images as input, or fine-tuning a model using discriminative rewards. In contrast, our proposed approach is *fully inference-time* - it requires no training, and is applicable to any off-the-shelf generic captioning model.
Our approach builds on a family of inferencetime pragmatic-based contrastive captioning methods which have taken one of two approaches: (1)
incrementally generating captions but using only a captioning model (our *speaker* model), where tokens are chosen that have high probability for the target image and low probability for the distractor (Vedantam et al., 2017; Cohn-Gordon et al.,
2018; Nie et al., 2020) or (2) using a separate discriminative model but selecting a discriminative caption from among a set of entire captions generated by the speaker model for the target image (Andreas and Klein, 2016; Luo and Shakhnarovich, 2017). Our work shows that these approaches can be productively combined, using a strong off-theshelf discriminative model (CLIP) to guide the incremental generation of captions. This allows us to tackle a more challenging dataset and task than previous discriminative captioning work, containing a large number (10) of highly-similar distractor images.
Pragmatics Our approach to contrastive generation follows a long line of work on computational pragmatics, particularly in the Rational Speech Acts framework (Frank and Goodman, 2012; Goodman and Frank, 2016) which models language generation as an interaction between speakers and listeners. Prior work has found that pragmatic generation can improve performance on a variety of NLP
tasks, including reference games (Monroe et al.,
2017), instruction generation (Fried et al., 2018),
summarization (Shen et al., 2019), machine translation (Cohn-Gordon and Goodman, 2019), and dialogue (Kim et al., 2020; Fried et al., 2021).
Tradeoff between discriminativeness and accuracy/fluency Assessing the quality of image captions requires multifaceted evaluation. Prior work on contrastive/discriminative captioning investigates the tradeoff of model performance between discriminativeness and accuracy/fluency (Wang et al., 2021; Liu et al., 2019; Honda et al., 2022; Cho et al., 2022; Vedantam et al., 2017; Andreas and Klein, 2016). In this paper, we also perform an extensive study on the tradeoff between informativeness and fluency. Specifically, we focus on analyzing the robustness of the proposed and baseline methods in the tradeoff according to the selection of hyperparameters.
## 3 Method
Our PICL approach conducts incremental pragmatic inference at the token level by combining a base speaker and a CLIP listener to derive a pragmatic speaker. At each step of decoding, the base speaker selects a set of candidate tokens and adds them to partial captions. Given candidate partial captions, the listener updates its beliefs on which is the target among the set of images based on CLIP
similarity measurement. In particular, it contrasts each partial caption to all the images by calculating the CLIP similarity scores of partial caption-image pairs and normalizes over all images to derive the listener likelihood. Finally, a pragmatic speaker reasons over both the base speaker and listener by combining their distribution to rerank partial captions, select a highly-scored subset and proceed to the next decoding step.
## 3.1 Incremental Pragmatic Inference Framework
Similar to Cohn-Gordon et al. (2018), we formulate the process of generating contrastive captions as a series of reference games between two agents, a *speaker* and a *listener*. Given a shared visual context I = i
+ ∪ I− consisting of a target image i
+ and a set of m similar distractors I− = {i
−
1
, . . . , i−m}, the speaker aims to produce a sequence of T tokens o1:T = (o1*, . . . o*T ) that could let the listener identify i from I. Such pragmatic inference is conducted *incrementally*: at each step t of the caption generation, the speaker selects the next token ot by playing the reference game with the listener based on the context I and the partial caption o<t obtained from the last step. In the following subsections, we will introduce the speaker and listener models as well as the incremental inference strategy in detail.
## 3.2 Speaker And Listener Models
Base Speaker At each step of generation, the *base speaker* S0 yields a distribution PS0
(ot|o<t, i+) over the token vocabulary for the next possible token ot, conditioning on the previous partial caption and the target image.
We parameterize PS0 with a context-agnostic captioning model. In particular, we use OFA3
(Wang et al., 2022), a unified sequence-to-sequence multimodal pretraining model and finetune it on MSCOCO Image Captioning dataset (Chen et al., 2015). Finetuned OFA is a strong base captioner; at the time of this work, it achieves state-of-the-art performance on MSCOCO Image Captioning.
Base Listener Given a candidate partial caption o1:t = (o<t, ot) generated by S0, the base listener L0 yields a distribution PL0
(i|o1:t, I) over all candidate images i ∈ I, modeling the likelihood of choosing each candidate given the partial caption at step t and the shared context I. We derive PL0 from a zero-shot CLIP model by normalizing its similarities between images and partial captions over all image candidates:
$$P_{L_{0}}(i|o_{1:t},{\mathcal{I}})={\frac{\exp(c(i,o_{1:t}))}{\Sigma_{i^{\prime}\in{\mathcal{I}}}\exp(c(i^{\prime},o_{1:t}))}}\quad(1)$$
where c(*i, o*1:t) denotes the cosine similarity between the CLIP visual encoding of i and textual 3We use the OFA-base configuration from https://
github.com/OFA-Sys/OFA
Pragmatic Speaker From the base speaker and listener, we derive a distribution for the pragmatic speaker S1 as
$$\begin{array}{c}{{P_{S_{1}}(o_{t}|o_{<t},i^{+},{\mathcal{I}})=}P_{L_{0}}(i^{+}|o_{1:t},{\mathcal{I}})^{\lambda}}\\ {{\qquad\qquad\cdot P_{S_{0}}(o_{t}|o_{<t},i^{+})^{1-\lambda}}}\end{array}\quad(2)$$
where λ ∈ [0, 1] is a "informativity" hyperparameter that trades off between producing fluent
(from S0) and informative (from L0) captions.
## 3.3 Decoding With Approximation
To iteratively generate captions with the pragmatic speaker S1, we perform beam search with beam width B, which involves solving
$$\arg\operatorname*{max}_{o_{t}}P_{S_{1}}(o_{t}|o_{<t},i^{+},{\mathcal{I}})$$
(ot|o<t, i+, I) (3)
for each beam item. However, it is computationally infeasible to obtain the exact solution to Equation 3 since it requires encoding all \#(vocabulary size) possible next partial captions with CLIP to calculate PL0 at each step. Thus, we adopt a subsampling approach similar to Andreas and Klein
(2016); Fried et al. (2018). At each step of decoding, a subset of N(*N > B*) candidate next partial captions o1:T are obtained via beam search from the base speaker distribution PS0
, and these N candidates are rescored with Equation 2 to approximate Equation 3. Finally, only the top B candidates after rescoring are retained to continue with.
## 4 Experimental Setup
We evaluate PICL on ImageCoDe (Krojer et al.,
2022), a dataset originally designed for image retrieval with contextual descriptions. Given the high visual similarity of the images in each problem in the dataset, we adopt it as a challenging testbed for discriminative captioning. We evaluate PICL and competitive baseline methods on two criteria, informativeness and fluency, using both automatic and human evaluation. For informativeness, we follow previous work (Cohn-Gordon et al., 2018; Newman et al., 2020) to automatically evaluate the performance of pragmatic models with an evaluating listener L*eval*. The discriminativeness of the method being evaluated is quantified by the retrieval accuracy of L*eval* with method-generated captions as input. For fluency, we score the well-formedness of generated captions with the perplexity (PPL) under GPT-2 (Radford et al., 2019).
In addition to the automatic evaluation, we conduct human evaluation where annotators are tasked to a) retrieve the target image given the caption and b) score the fluency of the caption.
## 4.1 Dataset
$$({\mathfrak{I}})$$
We use sets of images collected in ImageCoDe
(Krojer et al., 2022) to evaluate the proposed approach. Each image set in ImageCoDe consists of 10 visually similar images. The image sets are collected in two categories: *static pictures* and video frames. A random subset of images per set is selected as targets, for which human annotators write discriminative captions that are retained if other humans can successfully use it to retrieve the target.
In our experiments, we use the validation split of ImageCoDe for hyper-parameter selection and evaluate model performance on the test split. The valid and test sets contain 1,039 and 1,046 sets of images and 2,302 and 2,306 human written captions, respectively.
Table 1 shows the retrieval performance of several models on ImageCoDe test split, where **CLIPzero-shot** is the base listener used in PICL and ALBEF-finetuned is the evaluating listener used for automatic evaluation (see Section 4.2). Given the large performance gap of all models between static and video subsets, we believe the video frames are too challenging for current neural models to make pragmatic and contextual inferences for both captioning and retrieving. Therefore, we use only static images in our experiments.
## 4.2 Automatic Evaluation
Informativeness Following Cohn-Gordon et al.
(2018) and Newman et al. (2020), we evaluate the informativeness of captions generated by our method and baselines using a *listener test*: whether an *evaluative listener* model could identify the target image out of the distractors, given generated captions. However, an evaluative listener can only be an imperfect proxy for human listeners, and past work has found that utterances that are informative to an evaluative listener model can be uninterpretable to people, a phenomenon known as codebooking (Kim et al., 2019) or language drift (Lazaridou et al., 2020). This issue is particularly likely to complicate evaluation in a pragmatic framework like ours, where an explicit listener model (a frozen CLIP model, in our PICL
| All | Video | Static | |
|---------------------|---------|----------|------|
| CLIP-zero-shot | 22.4 | 15.6 | 47.8 |
| CLIP-finetuned-best | 29.9 | 22.0 | 59.8 |
| ALBEF-finetuned | 33.6 | 22.7 | 74.2 |
approach) is used to guide utterance generation.
To mitigate this codebooking issue in evaluation, past work has made the evaluative listener dissimilar by training it on separate data (Cohn-Gordon et al., 2018; Kim et al., 2019; Fried et al., 2021);
we additionally use a separate architecture for the evaluative listener, dissimilar from our CLIP listener: the ALBEF vision-language model (Li et al.,
2020). We finetune ALBEF on the human-written contextual captions for the retrieval task in ImageCode.4 As shown in Table 1, finetuned ALBEF
outperforms the best-performing retrieval model from previous work (Krojer et al., 2022) on ImageCoDe with human-written captions, so we use ALBEF-finetuned as our evaluating listener in automatic evaluations of informativeness.
Fluency While being informative, discriminative captions should also be natural and fluent.
Therefore, we additionally perform automatic evaluations of the fluency of generated captions by computing their perplexity using a GPT-2 language model (Radford et al., 2019).
## 4.3 Human Evaluation
Recent analysis on ImageCode (Dessì et al., 2022)
and in other reference game settings (Lazaridou et al., 2020) reveals that utterances generated by neural models can be discriminative enough for other neural models to retrieve the target image while being misleading to humans. This implies that the performance of a neural retriever evaluative listener (e.g., ALBEF) on model-generated captions might not correctly reflect the degree of informativeness of the captions from a human's perspective. Therefore, we further conduct a human evaluation for PICL and baseline methods on Amazon MTurk, where we present human workers with the same image retrieval task as for ALBEF,
and use the success rate of workers in identifying the correct target images (**retrieval accuracy**) to measure the informativeness of the given captions.
To obtain human judgments of caption fluency, we additionally ask workers to score the captions on a Likert scale ranging from 1 (nonsense) to 5 (completely natural). We randomly sampled 100 sets of static images from the ImageCoDe test split and select one image with the human-written caption as the target. For each target, we produce a caption with each model and, together with the original human caption, present each caption-set pair to 3 workers. More details about the human evaluation setup could be found in Section A.3.
## 4.4 Baselines
We compare PICL to three baselines:
Base Speaker We use the base speaker S0 introduced in Section 3. The base speaker takes only the target image as input and generates contextagnostic captions regardless of the distractors.
Incre-RSA We further implement the incremental RSA model (Incre-RSA) from Cohn-Gordon et al. (2018) as a competitive baseline. Specifically, we derive the Bayesian RSA model introduced in Cohn-Gordon et al. (2018) from our base speaker S0, which enables direct comparison with our proposed approach. Unlike PICL, Incre-RSA does not have a separate model as the listener. The listener probabilities are derived with Bayesian inference at each decoding step based on the speaker distribution and an image prior.
E-S Also based on S0, we implement the *emittersuppressor* (E-S) beam search introduced in Vedantam et al. (2017) for discriminative image captioning. Similar to Incre-RSA, the E-S approach differs from PICL mainly in that it does not contain a separate model to rescore partial captions from a listener's perspective. Instead, it incorporates contextual reasoning by selecting tokens that, under the base speaker, have high probability for the target image but low probability for the distractor images, using a weighted difference of scores. Since their task and model formulation considers only a single distractor image, we extend it to include all distractors in the set by calculating the suppressor distribution as the mean of the distribution of the
![5_image_0.png](5_image_0.png)
next token conditioned on each of the distractors.
For all three baselines, we use beam search at inference with the same beam width B as PICL.
## 4.5 Informativity Hyperparameter Selection
Both our PICL method and the Incre-RSA and ES baselines use an informativity hyperparameter5 to trade off between predicted informativity and fluency in generated captions. We describe two methods for choosing a value for this hyperparameter for each method.
Informativity Maximization In our primary set of experiments, we set the informativity hyperparameter for each method automatically to maximize the performance of our evaluating listener, ALBEF,
on the captions in the validation set. We refer to the models obtained under this scheme as **PICL**,
Incre-RSA, and E-S, respectively.
When maximizing predicted evaluative listener accuracy, we observe qualitatively that PICL typically generates captions which are fluent and human-understandable. In contrast, E-S and IncreRSA are less robust, and under this informativity maximization objective typically produce highly disfluent captions - identifying captions that are interpretable under our evaluating listener model, ALBEF, but potentially confusing to a human, consistent with past work identifying language drift in reference game setups (Lazaridou et al., 2020; Dessì et al., 2022). This trend is depicted in Figure 2, where optimizing for high ALBEF accuracy in E-S and Incre-RSA pushes the average GPT-2 perplexity of captions to extremely high values.
We will see in human evaluations in Section 5 that the disfluent captions obtained by maximizing predicted informativity in the Incre-RSA and E-S baselines, though "understandable" to the ALBEF
model, are often uninterpretable for humans.
Fluency Control Given the qualitative failures of E-S and Incre-RSA when maximizing automated proxies for informativity, we propose to improve these baselines using a fluency-controlled optimization scheme that pivots around PICL. In particular, we search for the informativity parameters for E-S
and Incre-RSA so that the average GPT-2 perplexity of the generated captions are as close as possible to that of PICL. We refer to the models obtained under this scheme as **ES (PPL)** and **Incre-RSA**
(PPL).
## 5 Results 5.1 Automatic Evaluation
We use automatic evaluations (Section 4.2) to evaluate the tradeoff between the predicted informativity
(using ALBEF) and predicted fluency (using GPT2) of captions over a wide range of values for the informativity hyper-parameter of each method.
Hyper-parameter Sensitivity Figure 2 depicts how each method trades off between discriminativeness and fluency by varying the informativity hyper-parameter. PICL demonstrates higher robustness to hyper-parameter selection than IncreRSA and ES in the trade-off: while optimizing for ALBEF-predicted informativity-maximization, Incre-RSA and ES produce more corrupted and disfluent captions with high perplexity whereas PICL's perplexity degrades less.
Informativeness As shown in Table 2, PICL
substantially outperforms the base speaker and the incremental RSA (Incre-RSA, Cohn-Gordon et al. 2018) methods on ALBEF retrieval accuracy, and achieves comparable results to emitter-suppressor
(E-S, Vedantam et al. 2017). The results demonstrate that our method could leverage CLIP as a
| ALBEF | GPT-2 | |
|-----------------------------|------------|--------|
| Accuracy | Perplexity | |
| Human | 74.2 | 138.4 |
| Base Speaker | 54.2 | 99.4 |
| Optimized for Informativity | | |
| Incre-RSA | 64.3 | 2703.0 |
| E-S | 77.5 | 4093.6 |
| PICL | 77.3 | 380.2 |
| Perplexity-Matched to PICL | | |
| Incre-RSA (PPL) | 62.9 | 446.5 |
| E-S (PPL) | 73.2 | 366.6 |
listener model in incremental pragmatic caption generation. For both E-S and Incre-RSA, controlling for fluency negatively affects ALBEF accuracy, which conforms with the trend in Figure 2.
Fluency Table 2 also shows the perplexity that GPT-2 assigns to the output of each model on the ImageCoDe test set. As discussed in Section 4.5, Incre-RSA and E-S are less robust when being optimized for informativity, which is reflected by their extremely high perplexity. In contrast, when controlling for the fluency to match PICL's validation perplexity, both Incre-RSA and E-S generate substantially more fluent captions with test perplexity similar to PICL, at the cost of predicted informativeness, as shown by a drop in ALBEF accuracy.
## 5.2 Human Assessment Performance
We perform human evaluations (Section 4.3) to validate these findings about the informativeness and fluency of the discriminative captioning methods.
Informativeness Human retrieval accuracies on model- and human-generated captions are depicted in Table 3. In the setting where models are automatically optimized for predicted informativity
(Section 4.5), PICL substantially outperforms the Incre-RSA and E-S methods, with gains in human
| Human | Fluency | |
|-----------------------------|-----------|--------|
| Method | Accuracy | Rating |
| Human | 81.7 | 4.76 |
| Base Speaker | 48.7 | 4.80 |
| Optimized for Informativity | | |
| Incre-RSA | 50.7 | 2.87 |
| E-S | 54.0 | 3.59 |
| PICL | 65.7 | 4.07 |
| Perplexity-Matched to PICL | | |
| Incre-RSA (PPL) | 53.3 | 4.23 |
| E-S (PPL) | 63.7 | 4.54 |
accuracy of 11% and 15% respectively. The results indicate that captions generated by PICL are more informative than by other approaches, judged by human annotators. When we control the disfluency of the other methods to be similar to PICL (as measured by GPT-2 perplexity in automatic evaluations), PICL still substantially outperforms IncreRSA (PPL) and slightly outperforms ES (PPL).
Moreover, for both E-S and RSA, controlling for PPL results in more informative captions, which is not reflected in the automatic evaluations using ALBEF (Table 2), implying that disfluency has a more significant negative effect on informativity for humans. While past work has often relied only on automated evaluations, our results indicate that human evaluations are important to accurately compare the performance of discriminative captioning systems.
Fluency Table 3 also shows the average fluency scored by human workers for model- and humangenerated captions. Similarly to Table 2 captions generated by E-S and Incre-RSA without controlling for perplexity are much more disfluent as scored by humans.
Informativity-Fluency Trade-off We further combine the human accuracy and fluency in Table 3 for each model and plot them in Figure 3.
![7_image_0.png](7_image_0.png)
To depict the informativity-fluency trade-off under human assessments, we also include a setting of informativity hyperparameters for each method with an intermediate level of automatically predicted fluency. Specifically, for each model, we search for its informativity parameter so that the average GPT-2 perplexity of generated captions are as close as possible to the average perplexity of the base speaker + PICL. We refer to the models obtained under this scheme as ES (mid PPL), **Incre-RSA**
(mid PPL) and **PICL (mid PPL)**.
With the resulting plot shown in Figure 3, PICL
outperforms Incre-RSA along both dimensions. In comparison with E-S, PICL achieves better discriminativeness with a loss in fluency. For E-S and Incre-RSA, the trade-off patterns are different from that under ALBEF (Figure 2). While optimizing for ALBEF accuracy consistently induces more disfluent generation, the optimal informativeness under human judgment is likely to be achieved with a moderate level of disfluency.
## 5.3 Automatic Vs. Human Evaluation
The analysis above reflects both agreement and mismatch between automatic evaluation and human judgments on different aspects. To further reveal the correlation between them, and lay a foundation for future work on discriminative captioning to make automatic evaluations more predictive of human performance, we conduct analysis along both axes of informativity and fluency.
ALBEF vs. Human Retrieval Accuracy Figure 4 plots ALBEF against human retrieval accuracy on the same 100 sets of images. ALBEF accuracy has a strong positive correlation with human
![7_image_1.png](7_image_1.png)
judgments except for having human, E-S, and IncreRSA as outliers. We posit that the performance mismatch on human written captions is because it is challenging for neural retrieval models like ALBEF to interpret human-written descriptions, which are highly nuanced and grammatically complex (Krojer et al., 2022). The high disfluency of the captions of E-S and and Incre-RSA hinders evaluators in interpreting them accurately, despite being discriminative to models.
GPT-2 Perplexity vs. Human Fluency Score As illustrated in Figure 5, on the 100 evaluation image sets, there is a strong correlation between the mean GPT-2 perplexity of captions and human fluency scores, implying that GPT-2 perplexity is a good proxy for human fluency judgments.
| Accuracy | |
|---------------|------|
| PICL | 77.3 |
| - incremental | 65.4 |
| - distractors | 57.5 |
## 5.4 Ablation Results
To further understand the performance of PICL, we conduct ablation studies to investigate the role of 1)
incremental pragmatic inference and 2) grounding language to distinguish from distractors.
For 1), we experiment with **PICL - incremental** that removes incremental inference by first using only the base speaker S0 to generate a set of complete and context-agnostic captions, and using CLIP to score these entire captions. For 2),
we evaluate **PICL - distractors**, excluding all distractors and providing only the target image during inference. At each decoding step, the listener distribution is derived by normalizing the CLIP
similarities between partial captions and the target image over all candidates. As shown in Table 4, the retrieval accuracy drops substantially on both variations, suggesting that both the incremental inference and grounding to distractors are vital components for pragmatic reasoning in PICL.
## 6 Conclusion
We propose an incremental pragmatic inference approach with a CLIP listener, which combines the strengths of previous approaches that conduct incremental pragmatic reasoning with a separately modeled listener. We identify strengths and weaknesses of automatic model-based evaluation of discriminative captioning systems, and suggest that future work 1) control for the disfluency of generated captions and not solely optimize for predicted informativity and 2) use human evaluations. In human evaluations, our approach outperforms previous discriminative captioning methods, and is substantially more robust than previous approaches in trading off between the fluency and informativity of the captions to human listeners.
## Acknowledgments
We would like to thank Google for providing funding for this work through a gift on Action, Task, and User Journey modeling, and Samsung Electronics Co., Ltd. for providing funding for BK.
## Limitations
We evaluate only on the "static" image partition of the ImageCoDe dataset. ImageCoDe contains another more challenging partition, containing frames from short temporal intervals in videos, which remains extremely difficult for all current discriminative captioning methods, including our PICL approach. (This partition, along with the static image partition that we use, has previously only been used in contrastive retrieval tasks, not in discriminative captioning.)
While we made a substantial effort to explore the tradeoff between informativity and fluency, we were limited in the number of human evaluations that we were able to do and could only evaluate a few settings of the informativity parameter for each method. We complement these human evaluations with automated evaluations on a much wider range of parameters, and analyze the correlations between human performance and judgements and the automated metrics.
## References
Jacob Andreas and Dan Klein. 2016. Reasoning about pragmatics with neural listeners and speakers. In *Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing*, pages 1173–
1182, Austin, Texas. Association for Computational Linguistics.
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C. Lawrence Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server. *arXiv preprint*, arXiv:1504.00325.
Jaemin Cho, Seunghyun Yoon, Ajinkya Kale, Franck Dernoncourt, Trung Bui, and Mohit Bansal. 2022.
Fine-grained image captioning with CLIP reward.
In Findings of the Association for Computational Linguistics: NAACL 2022, pages 517–527, Seattle, United States. Association for Computational Linguistics.
Reuben Cohn-Gordon and Noah Goodman. 2019. Lost in machine translation: A method to reduce meaning loss.
Reuben Cohn-Gordon, Noah Goodman, and Christopher Potts. 2018. Pragmatically informative image
captioning with character-level inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 439–443, New Orleans, Louisiana. Association for Computational Linguistics.
Roberto Dessì, Eleonora Gualdoni, Francesca Franzon, Gemma Boleda, and Marco Baroni. 2022. Communication breakdown: On the low mutual intelligibility between human and neural captioning. In *Proceedings of EMNLP*.
Michael C Frank and Noah D Goodman. 2012. Predicting Pragmatic Reasoning in Language Games.
Science, 336(6084):998–998.
Daniel Fried, Jacob Andreas, and Dan Klein. 2018. Unified pragmatic models for generating and following instructions. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1951–1963, New Orleans, Louisiana. Association for Computational Linguistics.
Daniel Fried, Justin Chiu, and Dan Klein. 2021.
Reference-centric models for grounded collaborative dialogue. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2130–2147, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Noah D Goodman and Michael C Frank. 2016. Pragmatic Language Interpretation as Probabilistic Inference. *Trends in Cognitive Sciences*, 20(11):818–829.
Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. 2021. CLIPScore: A
reference-free evaluation metric for image captioning.
In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 7514–7528, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Ukyo Honda, Taro Watanabe, and Yuji Matsumoto.
2022. Switching to discriminative image captioning by relieving a bottleneck of reinforcement learning.
pages 1124–1134.
Hyunwoo Kim, Byeongchang Kim, and Gunhee Kim.
2020. Will I sound like me? improving persona consistency in dialogues through pragmatic SelfConsciousness.
Jin-Hwa Kim, Nikita Kitaev, Xinlei Chen, Marcus Rohrbach, Byoung-Tak Zhang, Yuandong Tian, Dhruv Batra, and Devi Parikh. 2019. CoDraw: Collaborative drawing as a testbed for grounded goaldriven communication. In *Proceedings of the 57th* Annual Meeting of the Association for Computational Linguistics, pages 6495–6513, Florence, Italy. Association for Computational Linguistics.
Benno Krojer, Vaibhav Adlakha, Vibhav Vineet, Yash Goyal, Edoardo Ponti, and Siva Reddy. 2022. Image retrieval from contextual descriptions. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3426–3440, Dublin, Ireland. Association for Computational Linguistics.
Angeliki Lazaridou, Anna Potapenko, and Olivier Tieleman. 2020. Multi-agent communication meets natural language: Synergies between functional and structural language learning. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7663–7674, Online. Association for Computational Linguistics.
Junnan Li, Yongkang Wong, Qi Zhao, and M. Kankanhalli. 2020. Video storytelling: Textual summaries for events. *IEEE Transactions on Multimedia*,
22:554–565.
Lixin Liu, Jiajun Tang, Xiaojun Wan, and Zongming Guo. 2019. Generating diverse and descriptive image captions using visual paraphrases. pages 4239–4248.
Xihui Liu, Hongsheng Li, Jing Shao, Dapeng Chen, and Xiaogang Wang. 2018. Show, tell and discriminate:
Image captioning by self-retrieval with partially labeled data. In *Computer Vision - ECCV 2018 - 15th* European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part XV, pages 353–369.
Ruotian Luo, Brian L. Price, Scott D. Cohen, and Gregory Shakhnarovich. 2018. Discriminability objective for training descriptive captions. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6964–6974.
Ruotian Luo and Gregory Shakhnarovich. 2017.
Comprehension-guided referring expressions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7102–7111.
Yangjun Mao, Long Chen, Zhihong Jiang, Dong Zhang, Zhimeng Zhang, Jian Shao, and Jun Xiao. 2022. Rethinking the reference-based distinctive image captioning.
Will Monroe, Robert X.D. Hawkins, Noah D. Goodman, and Christopher Potts. 2017. Colors in context:
A pragmatic neural model for grounded language understanding. *Transactions of the Association for* Computational Linguistics, 5:325–338.
Benjamin Newman, Reuben Cohn-Gordon, and Christopher Potts. 2020. Communication-based evaluation for natural language generation. In Proceedings of the Society for Computation in Linguistics 2020, pages 116–126, New York, New York. Association for Computational Linguistics.
Allen Nie, Reuben Cohn-Gordon, and Christopher Potts.
2020. Pragmatic issue-sensitive image captioning.
In *Findings of the Association for Computational* Linguistics: EMNLP 2020, pages 1924–1938, Online.
Association for Computational Linguistics.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Sheng Shen, Daniel Fried, Jacob Andreas, and Dan Klein. 2019. Pragmatically informative text generation. In *Proceedings of NAACL*.
Ece Takmaz, Sandro Pezzelle, and Raquel Fernández.
2022. Less descriptive yet discriminative: Quantifying the properties of multimodal referring utterances via CLIP. In *Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics*, pages 36–42, Dublin, Ireland. Association for Computational Linguistics.
Ramakrishna Vedantam, Samy Bengio, Kevin P. Murphy, Devi Parikh, and Gal Chechik. 2017. Contextaware captions from context-agnostic supervision.
2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1070–1079.
Jiuniu Wang, Wenjia Xu, Qingzhong Wang, and Antoni B. Chan. 2020. Compare and reweight: Distinctive image captioning using similar images sets.
In *Computer Vision - ECCV 2020 - 16th European* Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part I, pages 370–386.
Jiuniu Wang, Wenjia Xu, Qingzhong Wang, and Antoni B. Chan. 2021. Group-based distinctive image captioning with memory attention. In Proceedings of the 29th ACM International Conference on Multimedia.
Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. 2022. Unifying architectures, tasks, and modalities through a simple sequence-tosequence learning framework. In *International Conference on Machine Learning*.
## A.3 Human Evaluation A Implementation Details A.1 Computational Resources A.2 Hyperparameter Searching A.2.1 Rationality Parameters
Searching Method We conduct all the hyperparameter searching via coarse-to-fine search, with step sizes 0.1, 0.01, and 0.001 respectively.
## A.2.2 Beam Search Parameters
For beam search parameters *B, N* discussed in Section 3.2, we set B = 16 and N = 256.
Figure 6 shows an example interface of the human evaluation. We have three MTurk workers evaluate each of the 100 instances of (images, caption) for each of the ten configurations of methods (including human-written captions) for informativity (by requiring them to choose the image referred to by the caption) and fluency (on a 1-5 Likert scale) .
Workers are paid with $0.15 per caption evaluation.
The finetuning of OFA model on COCO captions is run on 4 × Tesla V100 32GB GPUs.
All pragmatic inference experiments are run on 4
× GeForce RTX 2080 Ti GPUs.
Searching Range The search ranges of the rationality parameter for PICL, E-S, and Incre-RSA are
[0, 1], [0, 1], [0, 2] respectively.
Instructions: Given the description and the set of 10 images below, 1. Select the image that is best described by the description. 2. Score the uency of the description.
Click on the images to display them in full size. Description: White table and chairs behind bright green plants. Images:
![11_image_0.png](11_image_0.png)
1. Which image is best described by the description?
Select an option (Please select)
```
uent is the description?
5: Fluent
4
3: Slightly ungramatical or unnatural, but understandable
2
1: Totally ungramatical or unnatural
Submit
```
2. How
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitation Section, after Section 6
✗ A2. Did you discuss any potential risks of your work?
No, we do not foresee potential risks of this work.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract; Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4.1
✓ B1. Did you cite the creators of artifacts you used?
Section 4.1
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
The artifact is publicly available under the MIT license
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 4.1
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
The data we used is publicly available
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 4.1
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 4.1
## C ✓ **Did You Run Computational Experiments?** Section 4.2
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix A.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix A.2
✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
All the experiments are done in single runs
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 3.2 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Section 4.3
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Appendix A.3
✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Appendix A.3
✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
We mention the data usage if for research purposes and could provide the emails to the workers if needed
✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
We collect data using Amazon Mechanical Turk
✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
We could provide it later if needed |
yoffe-etal-2023-statistical | A Statistical Exploration of Text Partition Into Constituents: The Case of the Priestly Source in the Books of Genesis and Exodus | https://aclanthology.org/2023.findings-acl.121 | We present a pipeline for a statistical stylometric exploration of a hypothesized partition of a text. Given a parameterization of the text, our pipeline: (1) detects literary features yielding the optimal overlap between the hypothesized and unsupervised partitions, (2) performs a hypothesis-testing analysis to quantify the statistical significance of the optimal overlap, while conserving implicit correlations between units of text that are more likely to be grouped, and (3) extracts and quantifies the importance of features most responsible for the classification, estimates their statistical stability and cluster-wise abundance. We apply our pipeline to the first two books in the Bible, where one stylistic component stands out in the eyes of biblical scholars, namely, the Priestly component. We identify and explore statistically significant stylistic differences between the Priestly and non-Priestly components. | # A Statistical Exploration Of Text Partition Into Constituents: The Case Of The Priestly Source In The Books Of Genesis And Exodus
Gideon Yoffe [email protected] Axel Bühler [email protected] Nachum Dershowitz [email protected] Israel Finkelstein [email protected] Eli Piasetzky [email protected] Thomas Römer [email protected]
## Barak Sober
[email protected]
## Abstract
We present a pipeline for a statistical textual exploration, offering a **stylometry-based explanation and statistical validation of a hypothesized partition of a text**. Given a parameterization of the text, our pipeline: (1) detects literary features yielding the optimal overlap between the hypothesized and unsupervised partitions,
(2) performs a hypothesis-testing analysis to quantify the statistical significance of the optimal overlap, while conserving implicit correlations between units of text that are more likely to be grouped, and (3) extracts and quantifies the importance of features most responsible for the classification, estimates their statistical stability and cluster-wise abundance.
We apply our pipeline to the first two books in the Bible, where one stylistic component stands out in the eyes of biblical scholars, namely, the Priestly component. We identify and explore statistically significant stylistic differences between the Priestly and non-Priestly components.
## 1 Introduction
It is agreed among scholars that the extant version(s) of the Hebrew Bible is a result of various editorial actions (additions, redaction, and more).
As such, it may be viewed as a literary patchwork quilt - whose patches differ, for example, by genre, date, and origin - and the distinction between biblical texts as well as their relation to other ancient texts from the Near East is at the heart of biblical scholarship, theology, ancient Israel studies, and philology. Many debates in this field have been ongoing for decades, if not centuries, with no verdict
(e.g., the division of the biblical text into its "original" constituents; (Gunkel, 2006; von Rad, 2001; Wellhausen et al., 2009)). While several paradigms gained prevalence throughout the evolution of this discipline (Wellhausen et al., 2009; von Rad, 1972; Zakovitch, 1980), the jury is still out on many related hypotheses, such that these paradigms are prone to drastic (and occasionally abrupt) changes over time (e.g., Nicholson, 2002).
When scrutinized, the interrelation between various literary features within the text may shed light on its historical context. From it, one may infer the number of authors, time(s) and place(s) of composition, and even the geopolitical, social, and theological setting(s) (e.g., Wellhausen et al., 2009; von Rad, 1972; Knohl, 2010; Römer, 2015).
Such scrutiny thus serves a double purpose: (1)
to disambiguate and identify features in the text that are insightful of the lexical sources of the partition (e.g., Givón, 1991; Yosef, 2018; van Peursen, 2019); (2) to attempt to trace these features to the context of the text's composition (e.g., Koppel et al., 2011; Pat-El, 2021).
Works employing computational methods in text stylometry - a statistical analysis of differences in literary, lexical, grammatical or orthographic style between genres or authors (Holmes, 1998) –
have been introduced several decades ago (Tweedie et al., 1996; Koppel et al., 2002; Juola et al., 2006; Koppel et al., 2009; Stamatatos, 2009), with biblical exegesis spurring very early attempts at computerized authorship-identification tasks (Radday, 1970; Radday and Shore, 1985). Since then, these methods have proved useful also in investigating ancient (e.g., Kestemont et al., 2016; Verma, 2017; Kabala, 2020), and biblical texts as well (Koppel et al., 2011; Dershowitz et al., 2014; Roorda, 2015; van Peursen, 2019), albeit to a humbler extent.
Finally, statistical-learning-based research, which makes headway in an impressively diverse span of disciplines, is taking its first steps in the context of ancient scripts (Murai, 2013; FaigenbaumGolovin et al., 2016; Dhali et al., 2017; Popovic´
et al., 2021; Faigenbaum-Golovin et al., 2020). In the biblical context, Dershowitz et al. (2014, 2015)
addressed reproducing hypothesized partitions of various biblical corpora with a computerized approach as well, using features such as orthographic 1918 differences and synonyms. In the first work - the Cochran-Mantel-Haenszel (CMH) test was applied as a means of hypothesis testing, with the null hypothesis that the synonym features are drawn from the same distribution. While descriptive statistics were successfully applied to various classification and attribution tasks of ancient texts (see above),
uncertainty quantification has been insufficiently explored in NLP-related context (Dror et al., 2018),
and in particular - in that of text stylometry.
In this work, we introduce a novel exploratory text stylometry pipeline, with which we: (1) find a combination of textual parameters that optimizes the agreement between the hypothesized and unsupervised partitions, (2) test the statistical significance of the overlap, (3) extract features that are important to the classification, the proportion of their importance, and their relative importance in each cluster. Each stage of this analysis was crossvalidated (i.e., applied on many randomly-chosen sub-samples of the corpus rather than applied once on the entire corpus) and tested for statistical stability.
To perform (2) in a meaningful way for textual analysis, we had to overcome the fact that labelpermutation tests do not consider correlations between units of text, which affect their likelihood of being clustered together. This results in unrealistically optimistic p-values, as, in fact, the hypothesized partition *does* implicitly consider such correlations (by, e.g., grouping texts of a similar genre, subject, etc.). We overcome this by introducing a cyclic label-shift test which preserves the structure of the hypothesized partition, thus conserving the implicit correlations therein. Furthermore, we identify literary features that are *responsible* for the clustering, as opposed to intra-cluster feature selection techniques (e.g., Hruschka and Covoes, 2005; Cai et al., 2010; Zhu et al., 2015), which seek to detect significant features within each cluster. This is also a novel approach to text stylometry.
With this pipeline, we examined the hypothetical distinction between texts of Priestly (P) and non-Priestly (nonP) origin in the books of Genesis and Exodus. The Priestly source is the most agreed-upon constituent underlying the Pentateuch
(i.e., Torah). The consensus over which texts are associated with P (mainly through semantic analysis) stems from the stylistic and theological distinction from other texts in the Pentateuch, streamlined across texts associated therewith (e.g., Holzinger, 1893; Knohl, 2007; Römer, 2014; Faust, 2019).
Therefore, the distinction between P and nonP texts is considered a benchmark in biblical exegesis.
## 2 Methodology 2.1 Data - Digital Biblical Corpora
We use two digital corpora of the Masoretic variant of the Hebrew Bible in (biblical) Hebrew: (1) a version of the Leningrad codex, made freely available by STEPBible.1 This dataset comes parsed with full morphological and semantic tags for all words, prefixes, and suffixes. From this dataset, we utilize the grammatical representation of the text through phrase-dependent part-of-speech tags (POS). (2)
A digitally parsed version of the Biblia Hebraica Stuttgartensia (Roorda, 2018) (hereafter BHSA).2 In the BHSA database, we consider the lexematic
(i.e., words reduced to lexemes) and grammatical representation of the text through POS. The difference between the two POS-wise representations of the text is that (1) encodes additional morphological information within tags, resulting in several hundreds of unique tags, whereas (2) assigns one out of 14 more "basic" grammatical tags3to each word. We refer to the POS-wise representation of
(1) and (2) as "high-res POS" and "low-res POS",
respectively.
## 2.2 Manual Annotation Of Partition
From biblical scholars, we receive verse-wise labeling, assigning each verse in the books of Genesis
(1533 verses) and Exodus (1213 verses) to one of two categories: P or nonP, made available online4.
Hereafter, we refer to this labeling as "scholarly labeling".
While the dating of P texts in the Pentateuch remains an open, heavily-debated question (e.g.,
Haran, 1981; Hurvitz, 1988; Giuntoli and Schmid, 2015), there exists a surprising agreement amongst biblical scholars regarding what verses are affiliated to P (e.g., Knohl, 2007; Römer, 2014; Faust, 2019), amounting to an agreement of 96.5% and 97.3% between various biblical scholars for the books of Genesis and Exodus, respectively. We describe the computation of these estimates in Appendix A.
## 2.3 Text Parameterization
The underlying assumption in this work is that a significant stylistic difference between two texts of a roughly-similar genre (or, indeed, any number of distinct texts) should manifest in simple observables in NLP, such as the utilization of vocabulary
(i.e., distribution of words) and grammatical structure.
We consider three parameters whose different combinations emphasize different properties-, and therefore yield different classifications- of the text:
- **Lexemes, low-, and high-res POS**: we consider three representations of the text: words in lexematic form and low- and high-res POS (see §2.1). This parameter tests the ability to classify the text based on vocabulary or grammatical structure.
- n**-gram size**: we consider sequences of consecutive lexemes/POS of different lengths (i.e., ngrams). Different sizes of n-grams may be reminiscent of different qualities of the text (e.g., Suen, 1979; Cavnar and Trenkle, 1994; Ahmed et al.,
2004; Stamatatos, 2013). For example, a distinction based on a larger n-gram may indicate a semantic difference between texts or the use of longer grammatical modules in the case of POS (e.g.,
parallelisms in the books of Psalms and Proverbs
(Berlin, 1979)). In contrast, a distinction made based on shorter n-grams is indicative of more embedded differences in the use of language (e.g.,
Wright and Chin, 2014; Litvinova et al., 2015).
That said, both these examples indicate that a "false positive" distinction can be made where there is a difference in genre (e.g., Feldman et al., 2009; Tang and Cao, 2015). This degeneracy requires careful analysis of the resulting clustering or inclusion of genre-specific texts only for the clustering phase.
- **Verse-wise running-window width**: biblical verses have an average length of 25 words. Hence, a single verse may not contain sufficient context that can be robustly classified. This is especially important since our classification is based on statistical properties of features in the text (see §2.4).
Therefore, we define a running window parameter, which concatenates consecutive verses into a single super-verse (i.e., a running window of k would turn the ith verse to the sequence of the i − k : i + k verses) to provide additional context.
## 2.4 Text Embedding
We consider individual verses, or sequences of verses, as the atomic constituents of the text (see
§2.3). We use tf-idf (term frequency divided by document frequency) to encode each verse, assigning a relevance score to each feature therein (Aizawa, 2003). Works such as Fabien et al. (2020); Marcinczuk et al. ´ (2021) demonstrate that in the absence of a pre-trained neural language model, tf-idf provides an appropriate and often optimal embedding method in tasks of unsupervised classification of texts. For a single combination of an n-gram size and running-window width (using either lexemes, low- or high-res POS), the tf-idf embedding yields a single-feature matrix X ∈ R
n×d, where n is the number of verses and d is the number of unique n-grams of rank n.
It is important to note that this work aims to set a benchmark for future endeavors using strictly traditional machinery throughout our analysis. To ensure collaboration with biblical scholars, our methodology allows for full interpretability of the exploration process (see §D.4). This has threefold importance: (1) the field of text stylometry, especially that of ancient Hebrew texts, has hitherto been explored statistically and computationally to a limited extent, such that even when utilizing conservative text-embeddings, such as in this work, considerable insight can be gained concerning both the quality of the analysis and the philological question. (2) Obtaining benchmark results using traditional embeddings is a pre-condition for implementing more sophisticated yet convoluted embeddings, such as pre-trained language models (e.g.
Shmidman et al., 2022) or self-trained/calibrated language models (Wald et al., 2021), which we intend to apply in future works. (3) Finally, the interdisciplinary nature of this work and our desire to contribute to the field of biblical exegesis (and traditional philology in general) requires our results to be predominantly interpretable, such that they can be subjected to complementary analysis by scholars from the opposite side of the interdisciplinary divide (Piotrowski, 2012).
## 2.5 Clustering
We choose the k-means algorithm as our clustering tool of choice (Hastie et al., 2009, Ch. 13.2.1)
and hardwire the number of clusters to two, according to the hypothesized P/nonP partition. The justification for our choice of this algorithm is its simple loss-optimization procedure, which is vital to our feature importance analysis and is discussed in detail in §2.8.
We use the balanced accuracy (BA) score
(Sokolova et al., 2006) for our overlap statistic, a standard measure designed to address asymmetries between cluster sizes.
Due to the stochastic nature of k-means (Bottou, 2004), every time it is used in this work - it is run 50 times - (with different random initialization) –
and the result yielding the smallest k-means loss is chosen.
## 2.6 Optimizing For Overlap
We perform a 2D grid search over a pre-determined range of n-gram sizes and running-window widths to find the parameters combination yielding the optimal overlap for low- or high-res POS lexemes.
We test the statistical stability of each combination of these parameters (i.e., to ensure that the overlap reached by each combination is statistically significant) by cross-validating the 2D grid search over some number of randomly-chosen sub-sets of verses, from which we derive the average overlap value for each combination of parameters and the standard deviation thereof. We describe the optimization process in detail in Appendix B.
## 2.7 **Hypothesis Testing And Validating Results**
Through hypothesis testing, we establish the statistical significance of the achieved optimal overlap value between the unsupervised and hypothesized partitions. To derive a p-value from some empirical null distribution, we consider the assumption that the hypothesized partition, manifested in the scholarly labeling, exhibits a specific formulaic partition of chunks of the text. A formulaic partition, in turn, suggests that verses within each of the P
(nonP) blocks are correlated - a fact that the standard label-permutation test is intrinsically agnostic of, as it permutes labels without considering potential correlations between verses (Fig. 1 left).
Thus, the null distribution synthesized through a series of permutations would represent an overlyoptimistic scenario that does not correspond to any conceivable scenario in text stylometry.
To remedy this, we devise a more prohibitive statistical test. Instead of permuting the labels to have an arbitrary order, we perform a cyclic shift test of the scholarly labeling (Fig. 1 right). This procedure retains the scholarly labels' hypothesized *structure* but shifts them across different verses. We perform as many cyclic shifts as there are labels (i.e., number of verses) in each book by skips of twice the largest running-window width considered in the optimization procedure. For each shift, we perform the parameter optimization procedure (see §2.6),
where we now have the shifted scholarly labels instead of the original ("un-shifted") ones. Thus, we generate a distribution of our statistic under the null hypothesis, from which we derive a p-value.
In Fig. 1, we present an intuitive scheme where we demonstrate our rationale concerning the hypothesis-testing procedure in text stylometry.
## 2.8 Feature Importance And Interpretability Of Classification
Given a k-means labeling produced for the text, which was embedded according to some combination of parameters (§2.3), we wish to quantify the importance of individual n-grams to the classification, the proportion of their importance, and associate to which cluster they are characteristic of.
Consider the loss function of the k-means algorithm:
$$\operatorname*{argmin}_{S}\sum_{i=1}^{k}|S_{i}|\cdot v a r(S_{i}),$$
where k = 2 is the number of desired clusters, S is the group of all potential sets of verses, split into k clusters, |Si| is the size of the ith cluster (i.e., number of verses therein) and var(Si) is the variance of the ith cluster. That is, the k-means aims to minimize intra-class variance. Equivalently, we could optimize for the *maximization* of the *inter-cluster* variance (i.e., the variance between clusters), given by
$$\underset{S}{\operatorname{argmax}}\sum_{i\in S_1}\sum_{j\in S_2}\|x_i-x_j\|^2.\tag{1}$$ Let $D\in\mathbb{R}^{|S_1|\cdot|S_2|\times d}$ denote the matrix of inter
Let D ∈ R|S1|·|S2|×d denote the matrix of intercluster differences whose columns are Dℓ, which for i ∈ {1, . . . , |S1| − 1}, j ∈ {1*, . . .* |S2|} and ℓ = i · |S2| + j are defined by
$$(D)_{\ell}=x_{i}-x_{j}.$$
Then, applying PCA to D would yield the axis across which Eq. (2.8) is optimized as the first principal component. This component represents the axis of maximized variance, and each feature's contribution is given by its corresponding loading.
Finally, it can be shown that this principal axis could also be computed by subtracting the centroids of the two clusters. Therefore, to leverage this observation and extract the features' importance, we perform the following procedure:
![4_image_0.png](4_image_0.png)
- Compute the *principal separating axis* of the two clusters by subtracting the centroid of S1 from that of S2.
- The contribution of each n-gram feature to the cluster separation is given by its respective loading in the principal separating axis.
- Since tf-idf assigns strictly non-negative scores to each feature, the signs of the principal separating axis' loadings allow us to associate the importance of each n-gram to a specific cluster (see Appendix C).
- To determine the stability of the importance of n-grams across multiple sub-samples of each book, we perform a cross-validation routine (similar to the one performed in §2.6). Explicitly, we perform all the steps listed above for some number of simulations (i.e., randomly sample a sub-set of verses and extract the importance vector) and compute the mean and standard deviation of all simulations.
- Finally, we compute the relative uniqueness of each important feature w.r.t. both clusters - assigning it a score between −1 : 1. A score of 1 (−1)
indicates that the feature is solely abundant in its associated (opposite) cluster. Thus, a feature's abundance nearer zero indicates that its contribution to the clustering is rather in its *combination* with other features than a standalone indicator.
## 3 Results
We apply a cross-validated optimization analysis to the three representations of the book (§2.6) by performing 2D grid searches for 20 randomly chosen sub-sets of 250 verses for each representation.
We perform a cross-validated cyclic hypothesistesting analysis for the optimal overlap value (§2.7)
using five randomly-chosen sub-sets of verses, similarly to the above - and derive a p-value. Finally, we perform a cross-validated feature importance analysis for every representation (§2.8), over 100 randomly-chosen sub-sets of verses, similar to the
## Above.
In Table 1, we list the cross-validated results of each representation for both books and the derived p-value. In Fig. 2, we visualize results for all stages of our analysis applied to the lexematic representation of the book of Genesis. In Appendix D, we plot the results for all stages of our analyses for both books and discuss them in detail. Appendix E presents a detailed biblical-exegetical analysis of our results and an expert's evaluation of our approach.
## 3.1 **Understanding The Discrepancy In Results** Between The Books Of Genesis And Exodus
In Table 1, we list our optimal overlap values for all textual representations of both books. Notice that there exists a statistically-significant discrepancy between the optimal overlap values between both books that is as follows:
For Exodus, all three representations reach the same optimal overlap of roughly 88%. In contrast, in Genesis - there is a difference between the achieved optimal overlap values for the lexematic and low-res POS representations on the one hand
(73%) and high-res POS representation on the other (65%).
When examining the verses belonging to each cluster when classified with the optimal parameter combination, it is evident that most of the overlapping P-associated verses in Exodus are grouped in two blocks of P-associated of text, spanning 243 and 214 verses. These make considerable outliers in the size distribution of P-associated blocks in both books (Fig. 3), which may be related to the observed discrepancy.
This discrepancy begs two questions:
1. Are the linguistic differences between P/nonP
- which may be captured by our analysis –
that are attenuated for shorter sequences of P texts?
2. Does the high overlap in the book of Exodus arise due to an implicit sensitivity to the generic/semantic uniqueness of the two big Passociated blocks rather than a global stylistic difference between P/nonP?
To examine this, we perform the following experiment: each time, we remove a different Passociated block (1st largest, 2nd largest, 3rd largest, and the 1st + 2nd largest) from the text and perform a cross-validated optimization analysis with low- and high-res POS (see §2.6). We then compare the resulting optimal overlap values of each time. We plot the results of this experiment in Fig. 4.
We find that, as expected, the optimal overlap drops as a function of the size of the removed Passociated text. Interestingly, the optimal overlap increases when a smaller block is removed. Additionally, unlike in the case of Genesis, the lowres POS representation doesn't lead to increased optimal overlap relative to the high-res POS representation. This suggests that: (1) the fluctuation of the optimal overlap indicates that our pipeline is sensitive to some semantic field associated with the two large P blocks rather than to a global stylistic difference between P/nonP. (2) In cases of more sporadically-distributed texts that are stylistically different from the text in which they are embedded
- one representation of the text is not systematically preferable to others.
## 4 Conclusions
We examined the hypothetical distinction between texts of priestly (P) and non-priestly (nonP) origin in the books of Genesis and Exodus, which we explored with a novel unsupervised pipeline for text stylometry. We sought a combination of a running-window width (i.e., the number of consecutive verses to consider as a single unit of text) and n-gram size of lexemes, low- or high-resolution phrase-dependent parts-of-speech that optimized the overlap between the unsupervised and hypothesized partitions. We established the statistical significance of our results using a cyclic-shift test, which we show to be more adequate for text stylometry problems than a naive permutation test.
Finally, we extracted n-grams that contribute the most to the classification, their respective proportions, statistical robustness, and correlation to other features. We achieve optimal, statistically significant overlap values of 73% and 90% for the books of Genesis and Exodus, respectively.
We find the discrepancy in optimal overlap values between the two books to stem from two factors: (1) A more sporadic distribution of P texts in Genesis, as opposed to a more formulaic one in Exodus. (2) The sensitivity of our pipeline to a distinct semantic field manifested in two large P
blocks in Exodus, comprising the majority of the P-associated text therein.
Book Opt. overlap (lexemes) Opt. overlap (low-res POS) Opt. overlap (high-res POS) p-value Genesis 72.95±6.45% (rw: 4, n: 1) 65.03±5.64% (rw: 14, n: 1) 73.96±2.91% (rw: 4, n: 1) 0.08 (low-res POS)
Exodus 89.23±2.53% (rw: 8, n: 2) 88.63±1.96% (rw: 9, n: 4) 86.53±2.91% (rw: 6, n: 2) 0.06 (high-res POS)
Table 1: Cross-validated optimization and hypothesis testing results: for each representation, we list the optimal overlap value, its respective uncertainty, and combination of parameters (rw for running-window width and n for n-gram size).
![6_image_0.png](6_image_0.png)
![7_image_0.png](7_image_0.png)
![7_image_2.png](7_image_2.png)
![7_image_1.png](7_image_1.png)
Through complementary exegetical and statistical analyses, we show that our methodology differentiates the unique generic style of the Priestly source, characterized by lawgiving, cult instructions, and streamlining a continuous chronological sequence of the story through third-person narration. This observation corroborates and hones the stance of most biblical scholars.
## 5 Limitations
- The interdisciplinary element - at the heart of this work - mandates that our results be interpretable and relevant to scholars from the opposite side of the methodological divide (i.e., biblical scholars). This, in turn, introduces constraints to our framework - the foremost is choosing appropriate text-embedding techniques. As discussed in §2.4 and §2.8, the ability to extract specific lexical features (i.e., unique n-grams) that are important to the classification, to quantify them, and subject them to complementary philological analysis (see Appendix E) - requires that they be interpretable. This constraint limits the ability to implement state-ofthe-art language-model-based embeddings without devising the required framework for their interpretation. Consequently, using traditional embeddings
- which encode mostly explicit lexical features (e.g.,
see §2.4) - limits the complexity of the analyzed textual phenomena and is therefore agnostic of potential signal that is manifested in more complex features.
- In text stylometry questions, especially those related to ancient texts, it is often problematic (and even impossible) to rely on a benchmark training set with which supervised statistical learning can take place. This, in turn, means that supervised learning in such tasks must be implemented with extreme caution so as not to introduce a bias into a supposedly-unbiased analysis. Therefore, implementing supervised learning techniques for such tasks requires a complementary framework that could overcome such potential biases. In light of this, our analysis involves predominantly unsupervised exploration of the text, given different parameterizations.
- Our ability to draw insight from exploring the stylistic differences between the hypothesized distinct texts relies heavily on observing significant overlap between the hypothesized and unsupervised partitions. Without it, the ability to discern the similarity between the results of our pipeline is greatly obscured, as the pipeline remains essentially agnostic of the hypothesized partition. Such a scenario either deems the parameterization irrelevant to the hypothesized partition or disproves the hypothesized partition. Breaking the degeneracy between these two possibilities may entail considerable additional analysis.
## 6 Acknowledgements
We thank Dr. Rotem Dror for her kind assistance and contribution to the editing process and Ziv Ben-Aharon for providing technical guidance in operating the HUJI cluster.
## References
Bashir Ahmed, Sung-Hyuk Cha, and Charles Tappert. 2004. Language identification from text using n-gram based cumulative frequency addition. In Proceedings of Student/Faculty Research Day, volume 12. CSIS, Pace University.
Akiko Aizawa. 2003. An information-theoretic perspective of tf–idf measures. *Information Processing &*
Management, 39(1):45–65.
Adele Berlin. 1979. Grammatical aspects of biblical parallelism. *Hebrew Union College Annual*, 50:17–
43.
Léon Bottou. 2004. Stochastic learning. In *Summer School on Machine Learning*, pages 146–168.
Springer.
Deng Cai, Chiyuan Zhang, and Xiaofei He. 2010. Unsupervised feature selection for multi-cluster data.
In *Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data* Mining, pages 333–342.
William B. Cavnar and John M. Trenkle. 1994. N-grambased text categorization. In Proceedings of SDAIR94, 3rd Annual Symposium on Document Analysis and Information Retrieval, volume 161175. Citeseer.
Idan Dershowitz, Navot Akiva, Moshe Koppel, and Nachum Dershowitz. 2015. Computerized source criticism of biblical texts. *Journal of Biblical Literature*, 134(2):253–271.
Idan Dershowitz, Nachum Dershowitz, Tomer Hasid, and Amnon Ta-Shma. 2014. Orthography and biblical criticism. In *Proceedings of Digital Humanities*
(DH 2014, Lausanne, Switzerland), pages 451–453.
Maruf A. Dhali, Sheng He, Mladen Popovic, Eibert ´
Tigchelaar, and Lambert Schomaker. 2017. A digital palaeographic approach towards writer identification in the Dead Sea Scrolls. In *Proceedings of the 6th* International Conference on Pattern Recognition Applications and Methods. SCITEPRESS - Science and Technology Publications.
Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker's guide to testing statistical significance in natural language processing.
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1383–1392.
Maël Fabien, Esaú Villatoro-Tello, Petr Motlicek, and Shantipriya Parida. 2020. Bertaa: Bert fine-tuning for authorship attribution. In *Proceedings of the 17th* International Conference on Natural Language Processing (ICON), pages 127–137.
Shira Faigenbaum-Golovin, Arie Shaus, Barak Sober, David Levin, Nadav Na'aman, Benjamin Sass, Eli Turkel, Eli Piasetzky, and Israel Finkelstein. 2016.
Algorithmic handwriting analysis of Judah's military correspondence sheds light on composition of biblical texts. Proceedings of the National Academy of Sciences, 113(17):4664–4669.
Shira Faigenbaum-Golovin, Arie Shaus, Barak Sober, Eli Turkel, Eli Piasetzky, and Israel Finkelstein. 2020.
Algorithmic handwriting analysis of the Samaria inscriptions illuminates bureaucratic apparatus in biblical Israel. *PLOS ONE*, 15(1):e0227452.
Avraham Faust. 2019. The world of P: The material realm of priestly writings. *Vetus Testamentum*,
69(2):173–218.
Sergey Feldman, Marius A. Marin, Mari Ostendorf, and Maya R. Gupta. 2009. Part-of-speech histograms for genre classification of text. In 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 4781–4784. IEEE.
Federico Giuntoli and Konrad Schmid. 2015. *The PostPriestly Pentateuch: New Perspectives on its Redactional Development and Theological Profiles*. Mohr Siebeck.
Talmy Givón. 1991. The evolution of dependent clause morpho-syntax in Biblical Hebrew. In Elizabeth Closs Traugott and Berund Heine, editors, *Approaches to SGrammaticalization*, volume 2, pages 257–310. John Benjamins.
Hermann Gunkel. 2006. Creation and Chaos in the Primeval Era and the Eschaton: Religio-Historical Study of Genesis 1 and Revelation 12. Eerdmans, Grand Rapids, MI. Trans. by K. William Whitney, Jr.; original edition 1895.
Mehahem Haran. 1981. Behind the scenes of history:
Determining the date of the priestly source. Journal of Biblical Literature, 100(3):321–333.
T. Hastie, R. Tibshirani, and J. H. Friedman. 2009. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Series in Statistics.
Springer.
David I. Holmes. 1998. The evolution of stylometry in humanities scholarship. Literary and Linguistic Computing, 13(3):111–117.
Heinrich Holzinger. 1893. *Einleitung in den Hexateuch*,
volume 1. Mohr Siebeck.
Eduardo R. Hruschka and Thiago F. Covoes. 2005. Feature selection for cluster analysis: an approach based on the simplified silhouette criterion. In *International* Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC'06), volume 1, pages 32–38. IEEE.
Avi Hurvitz. 1988. Dating the priestly source in light of the historical study of biblical Hebrew. a century after Wellhausen. *Zeitschrift für die alttestamentliche* Wissenschaft, 100:88–100.
Louis C. Jonker. 2012. Reading the Pentateuch's genealogies after the exile: The Chronicler's usage of Genesis 1–11 in negotiating an all-Israelite identity.
Old Testament Essays, 25(2):316–333.
Patrick Juola, John Sofko, and Patrick Brennan. 2006. A
prototype for authorship attribution studies. *Literary* and Linguistic Computing, 21(2):169–178.
Jakub Kabala. 2020. Computational authorship attribution in medieval Latin corpora: the case of the Monk of Lido (ca. 1101–08) and Gallus Anonymous
(ca. 1113–17). *Language Resources and Evaluation*,
54(1):25–56.
Mike Kestemont, Justin Stover, Moshe Koppel, Folgert Karsdorp, and Walter Daelemans. 2016. Authenticating the writings of Julius Caesar. *Expert Systems* with Applications, 63:86–96.
Israel Knohl. 2007. The Sanctuary of Silence: The Priestly Torah and the Holiness School. Eisenbrauns.
Israel Knohl. 2010. The Divine Symphony: The Bible's Many Voices. Jewish Publication Society.
Moshe Koppel, Navot Akiva, Idan Dershowitz, and Nachum Dershowitz. 2011. Unsupervised decomposition of a document into authorial components.
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1356–1364, Portland, OR. Association for Computational Linguistics.
Moshe Koppel, Shlomo Argamon, and Anat Rachel Shimoni. 2002. Automatically categorizing written texts by author gender. *Literary and Linguistic Computing*,
17(4):401–412.
Moshe Koppel, Jonathan Schler, and Shlomo Argamon.
2009. Computational methods in authorship attribution. Journal of the American Society for Information Science and Technology, 60(1):9–26.
T. A. Litvinova, P. V. Seredin, and O. A. Litvinova. 2015.
Using part-of-speech sequences frequencies in a text to predict author personality: a corpus study. *Indian* Journal of Science and Technology, 8:93.
Michał Marcinczuk, Mateusz Gniewkowski, Tomasz ´
Walkowiak, and Marcin B˛edkowski. 2021. Text document clustering: Wordnet vs. TF-IDF vs. word embeddings. In Proceedings of the 11th Global Wordnet Conference, pages 207–214.
Hajime Murai. 2013. Exegetical Science for the Interpretation of the Bible: Algorithms and Software for Quantitative Analysis of Christian Documents.
In *Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing*, pages 67–86. Springer International Publishing.
Ernest Nicholson. 2002. *The Pentateuch in the Twentieth Century*. Oxford University Press.
Na'ama Pat-El. 2021. Syntactic Aramaisms as a tool for the internal chronology of Biblical Hebrew. In Diachrony in Biblical Hebrew, pages 245–264. Penn State University Press.
Michael Piotrowski. 2012. NLP and digital humanities.
In *Natural Language Processing for Historical Texts*, pages 5–10. Springer.
Mladen Popovic, Maruf A. Dhali, and Lambert ´
Schomaker. 2021. Artificial intelligence based writer identification generates new evidence for the unknown scribes of the Dead Sea Scrolls exemplified by the Great Isaiah Scroll (1QIsaa). *PLOS ONE*,
16:1–28.
Yehuda T. Radday. 1970. Isaiah and the computer: A
preliminary report. *Computers and the Humanities*,
5(2):65–73.
Yehudah T. Radday and Haim Shore. 1985. Genesis:
An authorship study in computer-assisted statistical linguistics, volume 103 of *Analecta Biblica*. Biblical Institution Press.
Thomas Römer. 2014. From the call of Moses to the parting of the sea: Reflections on the priestly version of the Exodus narrative. In *The Book of Exodus*,
pages 121–150. Brill.
Thomas Römer. 2015. *The Invention of God*. Harvard University Press.
Dirk Roorda. 2015. The Hebrew Bible as Data:
Laboratory-Sharing-Experiences. CLARIN in the Low Countries.
Dirk Roorda. 2018. Coding the Hebrew Bible: Linguistics and literature. Research Data Journal for the Humanities and Social Sciences, 3(1):27–41.
Avi Shmidman, Joshua Guedalia, Shaltiel Shmidman, Cheyn Shmuel Shmidman, Eli Handel, and Moshe Koppel. 2022. Introducing berel: Bert embeddings for rabbinic-encoded language. *arXiv preprint* arXiv:2208.01875.
Marina Sokolova, Nathalie Japkowicz, and Stan Szpakowicz. 2006. Beyond accuracy, f-score and roc:
a family of discriminant measures for performance evaluation. In *Australasian Joint Conference on Artificial Intelligence*, pages 1015–1021. Springer.
Efstathios Stamatatos. 2009. A survey of modern authorship attribution methods. *Journal of the American Society for information Science and Technology*,
60(3):538–556.
Efstathios Stamatatos. 2013. On the robustness of authorship attribution based on character n-gram features. *Journal of Law and Policy*, 21(2):421–439.
Ching Y. Suen. 1979. N-gram statistics for natural language understanding and text processing. *IEEE*
Transactions on Pattern Analysis and Machine Intelligence, PAMI-1(2):164–172.
Xiaoyan Tang and Jing Cao. 2015. Automatic genre classification via n-grams of part-of-speech tags.
Procedia-Social and Behavioral Sciences, 198:474–
478.
Fiona J. Tweedie, Sameer Singh, and David I. Holmes.
1996. Neural network applications in stylometry:
The Federalist papers. *Computers and the Humanities*, 30(1):1–10.
Wido van Peursen. 2019. A Computational Approach to Syntactic Diversity in the Hebrew Bible. *Journal* of Biblical Text Research, 44:237–253.
Mayuri Verma. 2017. Lexical Analysis of Religious Texts using Text Mining and Machine Learning Tools. *International Journal of Computer Applications*, 168(8):39–45.
Gerhard von Rad. 1972. *Genesis - A commentary*, 3rd rev. ed. edition. S.C.M. Press, London. Trans. by John H. Marks.
Gerhard von Rad. 2001. *Old Testament Theology: The* theology of Israel's historical traditions, volume 1.
Westminster John Knox Press.
Yoav Wald, Amir Feder, Daniel Greenfeld, and Uri Shalit. 2021. On calibration and out-of-domain generalization. *Advances in neural information processing systems*, 34:2215–2227.
Julius Wellhausen, J. Sutherland Black, Allan Menzies, and William Robertson Smith. 2009. Prolegomena to the History of Israel. Cambridge University Press.
William R. Wright and David N. Chin. 2014. Personality profiling from text: Introducing part-of-speech n-grams. In *International Conference on User Modeling, Adaptation, and Personalization*, pages 243–253.
Springer.
Ofer Yosef. 2018. Determining orthography on the basis of Masoretic notes. In The Masora on Scripture and Its Methods, chapter 3, pages 34–48. De Gruyter, Berlin.
Yair Zakovitch. 1980. A study of precise and partial derivations in biblical etymology. Journal for the Study of the Old Testament, 5(15):31–50.
Guo-Niu Zhu, Jie Hu, Jin Qi, Jin Ma, and Ying-Hong Peng. 2015. An integrated feature selection and cluster analysis techniques for case-based reasoning.
Engineering Applications of Artificial Intelligence, 39:14–22.
## Appendices A Estimating Scholarly Consensus Of P-Associated Texts In Genesis And Exodus B Cross-Validated Overlap Optimization
To quantify the consensus amongst biblical scholars concerning the distinction between P/nonP texts in Genesis and Exodus, we consider two sets of labelings of P/nonP: the first is that provided by biblical scholars, and the other is similar labeling used in the work of Dershowitz et al. (2015), who also apply computational methods in an attempt to detect a meaningful dichotomy between P and nonP
texts as well, albeit under a different paradigm. In their work, Dershowitz et al. consider P/nonP distinctions of three independent biblical scholars and compile a single "consensus" labeling of verses over the affiliation of which all three scholars agree
(1291 verses in Genesis and 1057 verses in Exodus), except for explicitly incriminating P texts (such as genealogical trees that are strongly affiliated with P (e.g., Jonker, 2012)). We find a 96.5%
and 97.3% agreement between the two labelings for the books of Genesis and Exodus, respectively.
The cross-validated overlap optimization is performed as follows:
- For lexemes/POS, we consider a range of ngram sizes and a range of running-window widths, through combinations of which we optimize the classification overlap. These ranges are determined empirically by observing the overlap values decrease monotonically as the n-gram sizes/running-window widths become too large (see Figs. 6, 5). This produces a 2D
matrix, where the (*i, j*)th entry is the resulting overlap value achieved by the k-means given the combination of the ith running-window width and the jth n-gram size.
- We cross-validate the grid-search process by performing a series of simulations, where in each simulation, we generate a random subset of 250 verses from the given book, containing at least 50 verses of each class (according to the scholarly labeling). Each such simulation produces a 2D overlap matrix, as mentioned above, for the given subset of verses.
- Finally, we average the 2D overlap matrices
of all simulations, and for each entry, we calculate its standard deviation across all simulations, producing a 2D standard deviation matrix. After normalizing the average overlap matrix by the standard deviation matrix, we choose the optimal combination that produces the classification with the optimal overlap.
- We perform this analysis on both lexemes, low- and high-res POS, yielding three averaged overlap matrices, from which optimal parameter combinations and their respective standard deviations (calculated across the cross-validation simulations) can be identified for each feature.
## C Feature Importance Analysis
We consider the two optimal overlap clusters of verses as the algorithm assigned them. We calculate a difference matrix D, where from every verse in cluster 1, we subtract every verse in cluster 2, receiving a matrix D ∈ R|S1|·|S2|×d, where |S1|, |S2| are the number of verses assigned to cluster 1 and 2, respectively, and d is the dimension of the embedding (i.e., number of unique n-grams in the text). Note that tf-idf embedded texts are non-negative, such that the difference matrix D has positive values for features with a high tf-idf score in cluster 1 and negative values for features with a high tf-idf score in cluster 2. As mentioned in §2.8, the first principal axis of D is equivalent to the difference between the two centroids produced by the k-means. Similarly, this difference vector is a linear combination of all the features (i.e., n-grams),
where a numerical value gives the importance of each feature called "loading", ranging from negative to positive values and their importance to the given principal axes is given by their absolute value.
Due to the nature of the difference matrix D (see
§2.8) - the sign of the loading indicates in which cluster the feature is important. Thus we can assign distinguishing features to the specific cluster of which they, or a combination thereof with other features, are characteristic.
Finally, we seek to determine the stability of the importance of features across multiple sub-samples of each book. We perform this computation as follows: given a parameters combination, we perform all the steps listed above for 100 simulations, where in each simulation, we randomly sample a sub-sample of 250 verses and extract the importance loadings of the features. We then average all simulation-wise loadings and derive the variance thereof to receive a cross-validated vector of feature importance loadings and their respective uncertainties. These are plotted as the error bars in Figs. 2, and similarly in the figures in Appendix C
## D Results D.1 Optimization Results: Genesis
For each representation, we achieve the following optimal overlap values (see Fig. 6): 72.95±6.45%
for lexemes, 65.03±5.64% for high-res POS, and 73.96±2.91% for low-res POS. We observe the following:
- Optimal overlap values achieved for lexemes and high-res POS are consistent to within a 1σ, whereas the optimal overlap achieved for low-res POS is higher by ∼ 2σ.
- We find a less consistent and considerably larger spread of optimal parameter combinations for the low- and high-POS-wise embeddings, as opposed to Exodus. While no clear pattern of optimal parameter combinations is observed in any feature, small n-gram sizes are also preferred here. For lexemes, a welldefined range of running-window widths of unigrams is observed to yield optimal overlaps.
- Unlike in the case of Exodus, parameter combinations yielding optimal overlap values of each feature (marked with red cells in Fig.
5) do not exhibit higher consistency across the cross-validation simulations than combinations that yield smaller overlap (i.e., small cross-validation variance, see the right panels in Fig. 5), except for the low-res POS feature.
## D.2 Optimization Results: Exodus
For each feature, we achieve the following optimal overlaps (see Fig. 6): 89.23±2.53% for lexemes, 88.63±1.96% for high-res POS, and 86.53±2.91%
for low-res POS. We observe the following:
- For all three representations, optimal overlap values are consistent to within 1σ.
- For all three representations, parameter combinations yielding optimal overlap values
(marked with red cells in Fig. 6) exhibit high consistency across the cross-validation simulations (i.e., slight cross-validation variance, see the right panels in Fig. 6).
- For lexemes, we find that the range of optimal parameter combinations is concentrated within 1- and 2-grams and is relatively independent of running-window width (i.e.,
when some optimal running-window width is reached–the overlap values do not change dramatically as it increases).
- For the high-res POS, we find that the range of optimal combinations is restricted to 2-grams, but is also insensitive to the running-window width.
- For low-res POS, we find that larger n-gram sizes, and a wider range thereof, produce the optimal overlap values. Additionally, we observe a dependence between given n-gram sizes and running-window widths to reach a large overlap.
## D.3 Hypothesis Testing Results
We present our results of the hypothesis testing through cyclic-shift, described in §2.7. Here, too, we perform a cross-validated test by performing five simulations - each containing a randomly chosen sub-sample of 250 verses (with a mandatory minimum of 50 verses of each class), to which the cyclic-shift analysis is applied. We compute the optimal overlap for every shift and generate a shiftseries of optimal overlap values (i.e., the null distribution). We then average across simulations. We then derive the p-value from the synthesized null distribution. The chosen "real optimal overlap",
which we use to derive the p-value, is the average optimal overlap at a shift of 0 (i.e., original labeling) minus its standard deviation. For each book, we perform this analysis for the features yielding the optimal overlap; low-res POS for Genesis and high-res POS for Exodus. We plot our results for both books in Fig. 7.
The resulting p-values are 0.08 and 0.06 for the books of Genesis and Exodus, respectively.
## D.4 Feature Importance Analysis Results D.4.1 Feature Importance: Genesis
In Figs. 8-10 we plot feature importance analysis results for the three representations of the book of Genesis.
## D.4.2 Feature Importance: Exodus
In Figs. 11-13 we plot feature importance analysis results for the three representations of the book of Exodus.
## E Biblical-Exegetical Discussion
Here we perform an exegetical analysis of our results for each book. All data to which this analysis was applied is available online5.
## E.1 Genesis E.1.1 Semantics
The extraction of the features of P for Genesis overlaps with the work of characterizing the priestly stratum (Holzinger, 1893). Thus, the characteristic use of numbers in P (here, in descending order of importance, the algorithm considered the terms 100, 9, 8, 5, 3, 6, 4, 2, 7 as characteristic of P) appears mainly in the genealogies, e.g., Gen 5 and 11, but also in the use of ordinal numbers to give the months and in the definition of the dimensions of the tabernacle. The term "year" (!שׁנה( is used in both dates and P genealogies, and the term "day" (!Mיו (demonstrates a similar calendrical concern. Furthermore, in the genealogies, we find the names of the patriarchs considered to belong
,שׁת! ,אדM !,אנושׁ! ,מתושׁלח! ,חנוK !,קינN !,מהללאל! ,נח!) P to (בN" (!son "term The .)סרוג! ,פלג! ,רעו! ,למK !,ישׁמעאל!
appears in genealogies but also in typical P expressions such as "son of X year" (!שׁנה Nב (to indicate the age of someone, "sons of Israel" (!ישׂראל בני(,
etc. The root !ילד) to beget or to give birth) is found in the genealogies in Gen 5; 11; Exod 6 but also in other P narratives of the patriarchs (Gen 16–17; 21; 25; 35; 36; 46; 48) which focus on affiliation.
The term "generation" (!דור (is also recognized as typically P (Gen 6:9; 9:12; 17:7,9,12; etc. ), as well as the term "annals" (!תולדות(, which serves to introduce a narrative section or a genealogy and.
This term structures the narrative and genealogical sections in the book of Genesis (Gen 2:4; 5:1; 6:9; 10:1,32; 11:10,27; 25:12–13,19; 36:1,9; etc.). The terms "fowl" (!Pעו(," beast/flesh" (!בשׂר(," creeping"
,(נפשׁ חיה!) "being living ",)שׁרZ" (!swarming ",)רמשׂ!)
"cattle" (!בהמה(," kind" (!Nמי (are found in the typically P expression "living creatures of every kind:
cattle and creeping things and wild animals of the earth of every kind" (Gen 1:24; cf. Gen 1:25-26; 6:7,20; 7:14,23; etc.). These expressions are often associated with "multiplication" (!רבה(, an essential theme for P that also appears in the blessings of P
narratives as in Gen 17; 48; etc. The term "being"
(!נפשׁ (is also used in P texts to refer to a person, e.g.,
in Gen 12; 17; 36; 46. As for the term "all" (!כל(, it 5https://github.com/YoffeG/PnonP
is used overwhelmingly in both P and D texts. In P
texts (Gen 1:27; 5:2; 6:19), humanity (!Mאד(, in the image (!Mצל (of God, is conceived in a dichotomy of "male" (!זכר (and "female" (!נקבה(. The root !זכר in its second sense, that of "remembering", also plays a role in the P narratives (Gen 8:1; 9:15–16; 19:29; Exod 2:24; 6:5) when God remembers his covenant and intervenes to help humanity or the Israelites. The covenant (!ברית ;cf. also Gen 9; 17), the sign of which is the circumcision (!מול (of the foreskin (!ערלה (is correctly characterized as P. According to P, God's covenants are linked to a promise of offspring (!זרע ;cf. Gen 17; etc.) and valid forever (!Mעול ;Gen 9; 17; 48:4; Exod 12:14; etc.). The term "seed/descendant" (!זרע (is also used by P in the creation narratives in Gen 1. The term "between" (!Nבי (is used several times to indicate the parties concerned by the covenant in Gen 9 and Gen 17. The term is also found frequently in Gen 1 in the creation story, where creation is the result of separation "between" (!Nבי (different elements - presenting God as the creator is not typical of a national god whose role primarily guarantees protection, military success, and fertility. The transformation of the God of Israel into a creator God appears only in the exilic or postexilic texts. Thus, the root "to create" (!ברא (is rightly associated with P (Gen 1:1-2:4; 5:1-2). The use of divine names is particular in the priestly narratives. "God" (!Mאלהי( is the term used in the origin stories (Gen 1–11),
"El Shaddai" for the patriarchs (Gen 12–Exod 6),
and finally, "YHWH" from Exodus 6,2-3 on. Here, the algorithm did understand a particular use by P
of the term "God" (!Mאלהי(. One of the differences with Holzinger's list is the fact that the algorithm considers the terms Noah (!נח(, flood (!מבול(, and the ark (!תבה (as typically P. This is probably because the flood narrative is much more developed in P than in non-P or because the semantic environment is attached to other P expressions. Nevertheless, all three terms appear in non-P texts as well.
The term "daughter" (!בת (should be considered P
not because of its frequency, which is admittedly somewhat higher in the P narratives of Genesis, but probably because of its semantic environment.
Thus, the term appears in the expression "sons and daughters" (!Mובני בנות(, which is very frequently used in Gen 5; 11. The preposition "after" (!אחר( appears in the expression "after you" (!Kאחרי (in the promise to Abraham in Gen 17 or the expression
"after his begetting" (!אחרי הולידו (in the genealogies in Gen 5; 11. The appearance of the term "to die" (!מות (as a characteristic of P is explained by its presence in the genealogies of Gen 5; 11 but also in the succession of each of the generations of the patriarchs. Finally, the terms "water" (!Mמי (and
"heaven" (!Mשׁמי (play a major role in the creation narrative P (Gen 1:1-2:4) and the flood narrative
(Gen 6-9*). These two terms also appear in Exodus, where water is mentioned in the account of the duel of the magicians (Exod 7-9*), in the passage of the sea of reeds which is paralleled in the creation of Gen 1 (Exod 14), and as a means of purification during the building of the tabernacle (Exod 29-30; 40). This latter function of water probably builds its symbolism in the other narratives. The term firmament (!רקיע (is associated with heaven and appears only in the creation story P of Gen 1 but is of little significance elsewhere.
On the non-P side, terms like "Joseph",
"pharaoh", and "Egypt" are non-P features since the story of Joseph is non-P. Similarly, the presence of "Jacob" is explained by an account of only a few verses for this story in P as opposed to several whole chapters for the non-P account of Jacob. The terms "brother" (16P /179*nonP* ), "father"
(19P /213*nonP* ), and "mother" (4P /33*nonP* ) as nonP features can be understood by a greater emphasis on family in the original patriarchal accounts, whereas P emphasizes genealogy. The terms "master" (!Nאדו(," slave/servant" (!עבד(, and "boy/servant"
(!נער (reflect the hierarchical structures of the household of the wealthy landowners in the narratives of the patriarchs but are of no interest to the priestly editors. Similarly, non-P texts show more interest in livestock, with terms such as camel (!גמל(,
donkey (!חמור(, or small livestock (!Nצא (considered non-P features. The dialogues are more present in the non-P stories than in the P stories. Thus the terms that open the direct discourses "speak" (!דבר(,
"say" (!אמר(, and "tell" (!נגד (are considered typical non-P terms as well as the set of Hebrew propositions in direct discourse (!מה,! עתה,! ה,! כי,! Mג,! הנה,
!אל,! נא,! Mא,! לא(. The term "man" (!אישׁ (can be used in many ways: man, husband, human; someone. Its use and expressions using it are significantly more frequent in non-P texts (42P/213nonP). This may be an evolution of the language rather than a deliberate or theological change on the part of P.
Finally, for the terms "to enter" (!בוא (or "to go"
(!Kהל (to be characteristic of non-P, this may reflect a stronger interest in place, in travel in the original texts probably composed to legitimize sanctuaries or as etiological narratives whereas these aspects are less marked in the P texts.
## E.2 Exodus E.2.1 Semantics
For the P-texts of Exodus, the algorithm has extracted the semantic features of the tabernacle construction in Exod 25-31; 35-40 but does not give features of the P-texts that would be found elsewhere. We find in the features: the different names of the holy place, "the holy one", "the dwelling",
"the tent of meeting"; the materials used for the construction, "acacia wood", "pure gold", "bronze", "linen", "blue, purple, crimson yarns", etc.; the spatialization, "around", "outside"; the dimensions,
"length", "cubit", "five"; the components, "altar",
"curtain", "ark", "utensils", "table", and YHWH's orders to Moses, "you shall make". Thus, the algorithm has a good understanding of the terms specific to the construction of the Tabernacle but is not susceptible to a more general understanding of the characteristics of P in Exodus. The non-P features are more interesting. For example, the use of the word "people" (!Mע (appears primarily in the non-P texts because the priestly redactors usually preferred to use the term "assembly" (!עדה(. The word "I" in the long form
(!אנכי (is considered non-P because the word 'ny, the short form, appears in P texts. The expression "to YHWH" does appear 24 times in non-P
texts, e.g., "to cry out to YHWH"/"to speak to YHWH"/"to turn to YHWH", whereas P avoids the expression. This is easily understandable by a desire to give YHWH the initiative in all interactions.
In P, it is he who demands, commands, and speaks.
There are few dialogues. As for the terms Egypt
(!Mמצרי (and Pharaoh (!פרעה(, they are indeed quantitatively more frequent in the non-Priestly texts of Exodus (respectively 36P/139nonP et 26P/89nonP)
as in Genesis.
## E.2.2 Grammar
As we have already seen, non-P texts more often adopt the protagonists' point of view by including dialogues or their thoughts, whereas P texts prefer a third-person narration. One of the consequences is the privileged use of 3rd singular or plural suffixes, unlike non-P texts where 1st singular or 2nd singular suffixes are more often used.
Moreover, the massive use of the 3rd person in P texts can also be explained by the presence of pleonasms which use a form with this suffix: !עמו,
!אתו, etc. (Holzinger, 1893). Concerning verbs, the form Qal or Piel, qatal in the 2nd masculine singular, is prevalent in P texts. This is understandable because of P's theology, according to which God orders using the second person, and then the protagonists carry out according to YHWH's order.
On the side of the non-P texts, the narrative form, i.e., Qal, wayyiqtol in the 3rd masculine singular, is significant, although these forms are also very present in P texts. Another peculiarity is the use in non-P of "name in the constructed form + place name". P seems to have avoided topical constructions because of reduced interest in localizations.
The remaining terms are persistent elements. Further analysis would be needed to understand the relevance of the distinction made by the algorithm.
## E.3 Summary
As we can see, the pipeline could extract typical features of priestly texts in Genesis, easily recognizable for a specialist. In addition, other P features have also been found that may be specific to a single narrative, correspond to repeated use of an expression, or be a significant theological theme such as water. On the other hand, the features of non-P texts do not indicate a coherent editorial milieu or style but rather allow us to better distinguish between P texts and non-P texts by particular theological or linguistic features. The data provided by the algorithm allow for the detection of particularities that require an explanation. More detailed investigations than those presented above are necessary to better understand specific instances of the results. For the texts of Exodus, the excessive importance of the chapters devoted to the construction of the Tabernacle (Exod 25-31; 35-40) did not allow us to obtain satisfactory results in the characterization of P, which could indicate an originally independent document. Nevertheless, the characterization of non-P texts is relevant, as well as the results on grammar.
![15_image_0.png](15_image_0.png)
![15_image_1.png](15_image_1.png)
![15_image_2.png](15_image_2.png)
![15_image_3.png](15_image_3.png)
![16_image_0.png](16_image_0.png)
![16_image_1.png](16_image_1.png)
![16_image_2.png](16_image_2.png)
![17_image_0.png](17_image_0.png)
![18_image_0.png](18_image_0.png) ![18_image_1.png](18_image_1.png)
![19_image_0.png](19_image_0.png)
![19_image_1.png](19_image_1.png)
![20_image_0.png](20_image_0.png)
![20_image_1.png](20_image_1.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Not applicable. Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C ✓ **Did You Run Computational Experiments?** Left Blank.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
liu-etal-2023-language | A Language-First Approach for Procedure Planning | https://aclanthology.org/2023.findings-acl.122 | Procedure planning, or the ability to predict a series of steps that can achieve a given goal conditioned on the current observation, is critical for building intelligent embodied agents that can assist users in everyday tasks. Encouraged by the recent success of language models (LMs) for zero-shot and few-shot planning, we hypothesize that LMs may be equipped with stronger priors for planning compared to their visual counterparts. To this end, we propose a language-first procedure planning framework with a modularized design: we first align the current and goal observations with corresponding steps and then use a pre-trained LM to predict the intermediate steps. Under this framework, we find that using an image captioning model for alignment can already match state-of-the-art performance and by designing a double retrieval model conditioned over current and goal observations jointly, we can achieve large improvements (19.2{\%}-98.9{\%} relatively higher success rate than state-of-the-art) on both COIN and CrossTask benchmarks. Our work verifies the planning ability of LMs and demonstrates how LMs can serve as a powerful {``}reasoning engine{''} even when the input is provided in another modality. | # A Language-First Approach To Procedure Planning
Jiateng Liu*, Sha Li*, Zhenhailong Wang, Manling Li, Heng Ji University of Illinois Urbana-Champaign jiateng5,shal2,manling2,wangz3,[email protected]
## Abstract
Procedure planning, or the ability to predict a series of steps that can achieve a given goal conditioned on the current observation, is critical for building intelligent embodied agents that can assist users in everyday tasks. Encouraged by the recent success of language models
(LMs) for zero-shot (Huang et al., 2022a; Ahn et al., 2022) and few-shot planning (Micheli and Fleuret, 2021), we hypothesize that LMs may be equipped with stronger priors for planning compared to their visual counterparts. To this end, we propose a language-first procedure planning framework with modularized design: we first *align* the current and goal observations with corresponding steps and then use a pre-trained LM to *predict* the intermediate steps. Under this framework, we find that using an image captioning model for alignment can already match state-of-the-art performance and by designing a double retrieval model conditioned over current and goal observations jointly, we can achieve large improvements
(19.2% - 98.9% relatively higher success rate than state-of-the-art) on both COIN (Tang et al.,
2019) and CrossTask (Zhukov et al., 2019) benchmarks. Our work verifies the planning ability of LMs and demonstrates how LMs can serve as a powerful "reasoning engine" even when the input is provided in another modality.1
## 1 Introduction
Developing autonomous agents of versatility and flexibility requires the ability to produce plans onthe-fly for a given task based on observations of the current state. Procedure planning, as proposed by (Bi et al., 2021), tests whether an agent can predict the steps needed to bring a given initial state into a given goal state, where both states are specified with visual observations, as shown in Figure 1. Compared to planning in a closed-world with structured environments, procedure planning with instructional videos provides an unstructured, visually complex, and highly-detailed observation of the world (i.e., *visual observation space*, presented as video instances) while asking the model to predict high-level actions (i.e., *action space*,
highlighted in the green box).
To handle such a mismatch between the observation space and the action space, previous methods (Bi et al., 2021; Chang et al., 2020) have focused on learning a *latent visual feature space* from visual observations that is more suitable for planning. However, learning the ideal latent space is challenging since visual observations can differ greatly due to changes in the background, actor, or tools, even for the same task. For example, the two observations in Figure 1 are highly dissimilar although they are part of the same task *making* salad. This makes it inherently difficult for models to *align* visual observations to high-level actions, not to mention *reason* and *predict* over multiple steps to produce a plan.
Meanwhile, pre-trained language models (LMs)
show strong planning ability, as demonstrated by their excellent performance for zero-shot (Huang et al., 2022a) and few-shot text planning tasks (Micheli and Fleuret, 2021). This inspires us to think if planning in *text feature space* is a better alternative to planning in *visual feature space* used in prior work. Apart from the strong prior from language model pretraining, the actions in procedure planning have the dual representation of text and labels (Zhao et al., 2022), which makes text space more easily aligned with the action space, both of which are more abstract than visual observations.
While the idea of converting visual input into text and relying on language models has been effective in a series of multimodal tasks such as image captioning and visual question answering
(VQA) (Zeng et al., 2022; Wang et al., 2022), the case is different for procedure planning as (1) proce1Our code is available at https://github.com/
Lumos-Jiateng/LFP
![1_image_0.png](1_image_0.png)
dure planning was originally proposed as a visiononly task instead of being inherently multi-modal;
(2) we attempt the transfer of the procedure reasoning and prediction ability of the LM instead of simply extracting information from the images. As shown in Figure 1, LM helps us predict the hardest intermediate steps (Put the ingredients into the bowl) which have little support from either start or end observations.
The major challenge of employing language models for procedure planning is how to map the start and goal observations into text space without losing salient information for planning. If the mapping is largely inaccurate, then even with the strong reasoning ability of LMs, it might not be worth the trouble of converting the problem into text space.
As the first exploration, we validate the effectiveness of a simple baseline model in our languagefirst planning framework, i.e., using image captioning to convert visual observations into text to prompt LMs. We find that by using image captioning we can already achieve performance comparable to state-of-the-art models. However, closer examination shows that image captioning is not sufficient to capture visual details across the current and goal observation (especially those related to movement and state change) and in turn does not effectively leverage the planning power of LMs.
Rooted in this observation, we propose to perform direct alignment from observations to steps by retrieving the most relevant step from the datasetwide candidate step pool. Since visual observations can be highly diverse for the same step, for the modularized framework, we design a double retrieval model that jointly retrieves the first and the last steps corresponding to the start and goal observation respectively. Using both the visual observations (such as the video input of the start step and goal step in Figure 1) and the task name (such as *make salad*), we can further constrain the search space and identify the steps with higher accuracy.
Experiments on two benchmark datasets COIN (Tang et al., 2019) and Crosstask (Zhukov et al., 2019) show that our proposed language-first framework can improve procedure planning effectiveness under all settings. In particular, our best model, which represents each observation by a montage of multiple frames and utilizes the double retrieve model, achieves the best results and yields 19.2% - 98.9% relatively higher success rate than the state-of-the-art. This demonstrates the strong planning ability of pre-trained LMs and shows the potential of using LMs as a general "reasoning engine" or "planning engine", even in tasks where images are provided as input.
In summary, our contributions are as follows:
1. We verify the effectiveness of planning in text space compared to visual space by employing language models for procedure planning.
2. We design two models for adapting language models for procedure planning: an image captioning based baseline model performs explicit conversion to generate prompts and a modularized framework which split the prediction into two stages.
3. On two instructional video datasets COIN and Crosstask, we show that our proposed text space planning approach can significantly outperform prior methods, in certain cases doubling the plan success rate.
## 2 Related Work
Instructional Procedure Planning Introduced by (Chang et al., 2020), the procedure planning task aims at predicting the intermediate steps (actions) given a start visual observation and a goal visual observation. The key challenge of this task lies in its unstructured, highly diverse observations which are unsuitable for directly planning over. To tackle this challenge, most previous approaches
(Bi et al., 2021; Chang et al., 2020; Srinivas et al.,
2018; Sun et al., 2022) attempt to learn a latent space from visual observations by a supervised imitation learning objective over both the actions and the intermediate visual observations. More recently, P3IV(Zhao et al., 2022) observes that actions can be treated as both discrete labels and natural language. By using a pretrained visionlanguage model to encode the actions as text, P3IV
achieves higher planning success rate using only action-level supervision. P3IV can be seen as an attempt to map the action text into visual space to provide more stable supervision. In comparison, our model maps visual observations into text space.
Pre-trained Language Models for Planning Recent work has shown the potential of language models for text-based planning tasks. Language models pre-trained on a large internet-scale corpus encodes rich semantic knowledge about the world and are equipped with strong low-shot reasoning abilities. In the effort of connecting language models with embodied AI, pioneering work on text-based planning (Côté et al., 2018; Shridhar et al., 2020; Micheli and Fleuret, 2021) shows that learning to solve tasks using abstract language as a starting point can be more effective and generalizable than learning directly from embodied environments. More recently, (Ahn et al., 2022; Huang et al., 2022b; Yao et al., 2022; Huang et al.,
2022a) further show that using large language models as out-of-the-box planners brings significant benefits to a wide range of embodied tasks, such as navigation and instruction following.
In this paper, we utilize language model's planning ability to solve cross-modal planning tasks.
We finetune a pre-trained BART model (Lewis
## 3 Method
In this section, we introduce our language-first approach to procedure planning. We first investigate whether language models can be applied for the task of procedure planning using text-only input
(Section 3.2). Building upon this model, we explore two different methods to map the visual observations to their corresponding steps.
In Section 3.3 we introduce our baseline model which incorporates a pre-trained image-captioning model and a language model to do procedure planning task. This baseline yields results comparable to the state-of-the-art approaches, we identified its deficiencies by giving examples.
In Section 3.4 we introduce our modularized framework which first utilizes a conditional double retrieval model to retrieve the most similar step for the start and goal visual observations jointly. Then the retrieved steps will be plugged into the language model to predict all the intermediate steps.
## 3.1 Task Formulation
As shown in Figure 1, given a current visual observation o0, and a goal visual observation oT , procedure planning requires the model to plan a sequence of actions {a1, · · · , aT } that can turn the current state into the goal state, where T is the planning horizon. Additionally, every task has an overall goal, or task name, g such as Replace a lightbulb.
During training, two types of supervision are available: visual supervision and action supervision. Visual supervision refers to the visual observations at each intermediate timestep {o1*, ..., o*T }.
Action supervision refers to the corresponding action labels {a1*, ..., a*T }. In particular, aiis the action that transforms the observed state from oi−1 into oi. Each action can be interpreted as a discrete label (Action 33) or a short piece of text
(Remove the lampshade). In this paper, we use the terms *action* and *step* interchangeably. Following P3IV (Zhao et al., 2022), in our work, we only use action supervision during training.
## 3.2 Text-Based Planning Model
Language models are trained with the selfsupervised objective of recovering the original text given a partial or corrupted text sequence. To adapt language models for our use case where the out-
![3_image_0.png](3_image_0.png)
put action descriptions are of variable token length, we employ a pretrained encoder-decoder model BART (Lewis et al., 2019).
Assuming that we can perfectly map the input visual observations to actions, the input x to the BART model will be a prompt containing the task g, the first action a1, the last action aT , and the prediction horizon T. Here, the actions are interpreted as a short piece of text. The model will then be fine-tuned to sequentially predict all of tokens a 1 i
, · · · , am ithat comprise each of the intermediate action descriptions ai. This factorization allows us to train the language model using cross-entropy loss over each token a j i
.
During inference, we face two challenges: (1)
restricting the language model's output to the set of feasible actions and (2) allowing for diversity in the generated plans.
The first challenge is due to the fact that the language model predicts a distribution over the entire vocabulary at each decoding step, which makes the output domain essentially the space of all possible text strings. We experiment with two methods, namely *projection* and *constrained decoding*. In the projection method, similar to (Huang et al., 2022a),
we first generate the entire action sequence using beam search and then for each predicted action, we project it to the most similar viable action based on SentenceBERT (Reimers and Gurevych, 2019), embedding cosine similarity between predicted steps and all the candidate steps. In the constrained decoding approach, we first construct a Trie of tokens using all of the viable actions. During decoding, we look up the Trie to check which tokens are valid and suppress the probability of the other tokens, effectively reducing the possible output space.
## 3.3 Baseline Model
A straightforward way to use LMs for procedure planning is to first convert the visual observations into text. We adopted a pre-trained image captioning model to do this. As shown in Figure 2, we first conduct image captioning for both the start and goal images. Then, the captions are converted into a prompt to be fed into a generative language model to predict the intermediate steps.
## 3.4 Modularized Framework
Our baseline model yields results comparable to state-of-the-art models. However, large amounts of inaccurate captions are found as shown in the right part of Figure 2. This leads to the design of our modularized model, where we first employ a pretrained vision-language model to align the visual observation to the most similar step, directly mapping it to the text space and label space.
We formulate the first step as a retrieval problem over all possible actions in the dataset. Initially, we tried to retrieve the start and goal actions independently conditioned on the corresponding observations:
$${\hat{a}}_{1}=f(o_{0}),{\hat{a}}_{T}=f(o_{T})\qquad\qquad(1)$$
![4_image_0.png](4_image_0.png)
However, the retrieval performance using an offthe-shelf vision-language model is far from satisfactory even after fine-tuning on our target dataset.
This is due to the high visual variance within the same action class (same action can happen in different backgrounds and involving visually dissimilar objects) and relatively low visual variance within the same observation trajectory (frames of the same actor in the same environment).
Thus we propose to make the retrieval problem less ambiguous and more constrained by retrieving the start and goal actions jointly, namely the double retrieval model.
$${\hat{a}}_{1},{\hat{a}}_{T}=f(o_{0},o_{T})$$
An illustration of the model is shown in Figure 3.
Double retrieval input The input to the model is a pair of visual observations (o0, oT ) and a text prompt specifying the task name d and the planning horizon T: The task is g and there are T −2 steps in between.
Vision-Language cross-attention model We use pre-trained BLIP (Li et al., 2022) as the basis for our retrieval model. The input observations and prompt are first encoded by the image encoder and text encoder respectively and then passed through a cross-attention module to model their interaction.
Then, the fused representation for the start observation and the goal observation will be passed to a merging layer to combine the information from both images. This merging layer is implemented as a single linear projection which maps the concatenated features into 768 dimensions.For each of the observations, we use a classification head and a language embedding head to output the predicted action as a probability over a candidate set p(a),
and as a text embedding hˆ, respectively. The loss function is a combination of the cross-entropy action classification loss Laand the text embedding contrastive loss Ll.
$$(2)$$
$${\mathcal{L}}_{a}=-\sum_{i=0}^{N}a_{i}\log p(a_{i})\qquad\qquad(3)$$
$${\mathcal{L}}_{l}=-\log{\frac{\exp(l_{i}\cdot{\hat{h}})}{\sum_{j=0,j\neq i}^{N}\exp(l_{j}\cdot{\hat{h}})}}\qquad(4)$$
where N is the number of the valid actions in the dataset, liis the text embedding of the ground truth label for this instance and lj are the text embeddings of all the other labels, which serve as negative examples.
## 4 Experiments 4.1 Experiment Setup
Datasets We evaluate on two mainstream datasets of instructional videos including COIN(Tang et al., 2019) and CrossTask(Zhukov et al., 2019). COIN is a dataset containing 11827 videos with 180 different tasks and 46354 annotated video segments. Following previous attempts (Zhao et al., 2022; Chang et al., 2020),
we adopt the 70%/30% split to create our training and testing set. We use 20% of training data for validation.
We followed the data preprocessing steps of the procedure planning task(Chang et al., 2020) to select the start and goal visual observations, while at the same time, we also adopt a multi-frame dataset curation approach to boost our model's ability. Apart from the original approach of getting the start image and the goal image of the video segment directly, we also use a uniform sampling of nine frames across the video and concatenate them into one single image to represent the visual observation. We use this method to see whether a more comprehensive visual feature would help in our approach. Details about our data pre-processing and parameter setting can be found in Appendix A
We report the results of both methods in our main result table which is in Section 4.2.
Metrics Previous efforts regard the step prediction for procedure planning tasks as a classification task. Instead, we focus on generating each step with a language model. It is certainly possible for the language model to generate steps that have same meaning as the ground-truth steps but of different textual descriptions. For example, the language model may produce an output as "put all the bed boxes together" while the correct prediction is "put all bed boxes together". However, we only consider predictions that are identical to ground truth as successful. As a result of this evaluation protocol, we are able to use similar metrics as previous work to ensure our results to be comparable.
Generally, our model will generate a sequence containing several steps. The sequence is separated by a separator "." to distinguish different steps. We use the first K steps as our final output for predictions that have more steps than we want. In the case of predictions with fewer steps than we would like, we regard the last few predictions as empty strings. The metrics that we adopt include:
- Success Rate (SR) considers a plan successful only if it exactly matches the ground truth.
- Mean accuracy (mAcc) treats each step prediction independently, so the order of the predicted steps matters.
- Mean Intersection over Union (mIoU). In this evaluation, if one step is successfully predicted at anywhere in the procedure, this step will be considered as correct.
Baselines We adopt state-of-the-art models as baselines, including DDN (Chang et al., 2020),
PlaTe (Sun et al., 2022), Ext-GAIL (Bi et al., 2021)
and P3IV (Zhao et al., 2022).
We also include our image captioning baseline with single frames as the visual representation (Captioning Baseline) and two variants of our proposed approach. "Ours(multi-frame)" and
"Ours(single-frame)" employ our double retrieval model and use multiple frames and single frames as input respectively.
## 4.2 Main Results
The main results of our modularized framework are shown in Table 1 and Table 2. Note that we use neither *projection* nor *constrained-decoding* here and we use the metrics which are talked about in Section 4.1.
Notably, our model's performance on COIN
greatly outperforms prior work, especially for the success rate (SR) metric, which shows a near-2x increase. According to our quantitative evaluation results on COIN and CrossTask, we have the following observations:
1. The language first approach brings significant accuracy improvement to procedure planning tasks, especially for step number T =3.
2. Our modularized framework outperforms the base model which considers vision-to-text transformation and text planning independently. It demonstrates that the two submodules are complimentary and mutually beneficial.
3. LMs demonstrate strong ability in planning while the mapping from visual observations to the text space remains a challenge. Also, the performance of BART drops with an increasing planning horizon due to variable executable plans.
| T = 3 | T = 4 | | | | | |
|----------------------------------|---------|-----------|-------|-----------|-------|-------|
| Model | SR | mAcc mIoU | SR | mAcc mIoU | | |
| Random | <0.01 | 0.94 | 1.66 | <0.01 | 1.83 | 1.66 |
| DDN(Chang et al., 2020) | 12.18 | 31.29 | 47.48 | 5.97 | 27.10 | 48.46 |
| PlaTe(Sun et al., 2022) | 16.00 | 36.17 | 65.91 | 14.00 | 35.29 | 55.36 |
| Ext-GAIL (Bi et al., 2021) 21.27 | 49.46 | 61.70 | 16.41 | 43.05 | 60.93 | |
| P3IV(Zhao et al., 2022) | 23.34 | 49.96 | 73.89 | 13.40 | 44.16 | 70.01 |
| Captioning Baseline | 10.15 | 30.28 | 54.65 | 3.14 | 22.03 | 49.44 |
| Ours(single-frame) | 25.01 | 53.79 | 75.43 | 14.11 | 47.93 | 73.21 |
| Ours(multi-frame) | 30.55 | 59.59 | 76.86 | 15.97 | 50.70 | 75.30 |
Table 1: Procedure planning results (%) on CrossTask. The best results are shown in bold and the next best results are underlined.
| T = 3 | T = 4 | | | | | |
|-------------------------------|-------------|-----------|-------------|-----------|-------|-------|
| Model | SR | mAcc mIoU | SR | mAcc mIoU | | |
| Random | <0.01 <0.01 | 2.47 | <0.01 <0.01 | 2.32 | | |
| DDN(Chang et al., 2020) 13.90 | 20.19 | 64.78 | 11.13 | 17.71 | 68.06 | |
| P3IV(Zhao et al., 2022) | 15.40 | 21.67 | 76.31 | 11.32 | 18.85 | 70.53 |
| Captioning Baseline | 12.27 | 33.29 | 59.76 | 3.52 | 24.81 | 52.48 |
| Ours(single-frame) | 28.35 | 53.14 | 78.56 | 15.43 | 45.04 | 78.07 |
| Ours(multi-frame) | 30.64 | 54.72 | 80.64 | 18.52 | 49.31 | 80.32 |
Table 2: Procedure planning results (%) on COIN.
| Dataset | Horizon T | SR | mAcc mIoU |
|-----------|-------------|-------------|-------------|
| COIN | 3 | 67.37 67.37 | 67.37 |
| 4 | 35.43 51.12 | 62.89 | |
| CrossTask | 3 | 60.04 60.04 | 60.04 |
| 4 | 33.27 48.28 | 61.37 | |
## 4.3 Ablation Studies
We conduct detailed ablation studies to highlight three points that support our overall design for this framework: (1) on the pure text planning side, the fine-tuned language model is stable when doing generation in the text space with remarkable performance. (2) our double retrieval approach excels in different settings on the vision-to-text transformations. (3) similar to previous works, our model has the ability of probabilistic modeling.
Step prediction with LMs The overall result of directly planning in the text space is shown in Table 3. We report the result of obtaining the intermediate steps with the start and goal steps using a fine-tuned language model. This result is rather satisfying.
To verify the stability and quality of this generation, we further experiment with different decoding strategies as discussed in Section 3.2.
The result of using *projection* and *constraineddecoding* is shown in Table 4. We witness only marginal increase in the overall accuracy when adding constrained decoding, which proves that LMs adapt well to the new data domain.
Double retrieval performance We present the overall double retrieval performance of the first step and the last step in Table 5. The success rate of this experiment is determined by the retrieval correctness of both the first and last steps. The results of our double retrieval model are based on either multi-frame input or single-frame input. According to Table 5, it is clear that our multi-frame setting generally produces a better result. This suggests that obtaining more fine-grained visual features can further boost our model's performance. Furthermore, the performance drops when the step number increases. That is mainly because the train-
| T = 3 | T = 4 | | | |
|--------------------------------------|-------------------|-------------------|-------|-----------|
| Decoding Method | SR | mAcc mIoU | SR | mAcc mIoU |
| No constraint | 28.35 53.14 | 78.56 15.43 45.04 | 78.07 | |
| Sentence-BERT projection 29.11 53.45 | 80.07 16.95 45.82 | 79.92 | | |
| Trie constrained | 29.02 53.30 | 79.67 16.86 46.02 | 79.43 | |
Table 4: Ablation study on how different decoding strategies influence the final planning performance. The default decoding method is beam search.
| Dataset | Visual Repr. | T = 3 T = 4 | |
|--------------------|----------------|---------------|-------|
| COIN | Multi-frame | 37.83 | 31.03 |
| Single-frame 35.22 | 30.38 | | |
| CrossTask | Multi-frame | 47.48 | 40.95 |
| Single-frame 39.37 | 36.44 | | |
Table 5: Retrieval top-1 accuracy (%) for start and end steps.
| Retrieval Model | Top-1 Acc |
|-------------------|-------------|
| BLIP | <1.00 |
| BLIP-finetuned | 21.30 |
| Double Retrieval | 37.83 |
| w/o language loss | 24.81 |
| w/o task name | 33.32 |
ing image-text pair set will be smaller when the step number increases. The finetuned vision-language model may find it hard to generalize to unseen examples with limited training instances.
To verify that our design of double retrieval is effective in transforming visual details into language, we compare it with the state-of-the-art visual-language transformation approaches in Table 6. Note that this ablation study is based on our Multi-frame setting on Coin with step number
= 3. We observe that directly finetuning a BLIP
retrieval model does not work well. This is due to the difficulty of predicting two steps independently from the visual input.
We also present the ablation studies of removing language loss and task name in Table 6. The performance drop indicates the importance of the language loss term and the additional task name term to the success of our double retrieval model.
Probabilistic modeling ability LMs inherently have the ability of probabilistic modeling. As a result of experimenting with different decoding methods (greedy search, beam search, and sampling) for LMs, we found that the overall accuracy difference is less than 1%. We recognize, however, that the model is capable of generating multiple reasonable plans for a given input. For example, in Figure 4, alternative planning results can be produced through sampling. All alternative predictions are tagged as correct in the test set. It matches the observation that multiple alternative plans can exist given the same start step and the same goal.
## 5 Conclusion And Future Work
We introduce a new language-first perspective for the procedure planning task, and propose two models to construct a text planning space and transfer the generalization ability of LMs to vision-based planning. Different from previous approaches that derive a latent space from visual features to perform planning, we propose that a language model with sufficient priors can serve as a better planning space. The key challenge is enabling LMs to capture appropriate visual details for planning purposes. To deal with this issue, we transform visual input into language and propose a doubleretrieval mechanism to force the model to align salient visual details with actions. The superior performance of our approach proves that using language models with strong priors is a promising and powerful paradigm to procedure planning over visual observations.
In the future, we would like to explore the domain generalizability of LM-based planning models and extend our model to handle longer planning horizons, possibly with the help of sub-goal prediction.
## Limitations
We reflect on the limitations of our model below:
![8_image_0.png](8_image_0.png)
Figure 4: Probabilistic modeling results. We enable language models to generate different outputs via sampling.
1. Our experiments are based on large everyday household datasets (i.e. COIN and Crosstask).
Our language model is pretrained with web data, which helps it handle such householdrelated procedures well. However, when applied to other more specialized domains like medical procedures, language models might suffer from the domain gap and impact overall model performance.
2. The language model has excellent planning ability given the ground truth start and goal steps. However, it is still hard for the language model to generate very long sequences of steps. When the planning horizon T increases, the performance of our model drops quickly just as other methods do.
3. In real-world applications (i.e planning task for robots), a good model should be able to dynamically adjust the plan given external feedback. For example, when the execution of one step fails, the model will need to re-plan as soon as possible. Our model does not possess such an ability so far, since our planning approach is offline. We leave this direction for future research.
## Acknowledgement
This research is based upon work supported by U.S. DARPA KAIROS Program No. FA8750-19-21004. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
## References
Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alexander Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil Jayant Joshi, Ryan C. Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego M Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, and Mengyuan Yan. 2022. Do as i can, not as i say: Grounding language in robotic affordances. *ArXiv*,
abs/2204.01691.
Jing Bi, Jiebo Luo, and Chenliang Xu. 2021. Procedure planning in instructional videos via contextual modeling and model-based policy learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15611–15620.
Chien-Yi Chang, De-An Huang, Danfei Xu, Ehsan Adeli, Li Fei-Fei, and Juan Carlos Niebles. 2020.
Procedure planning in instructional videos. In *European Conference on Computer Vision*, pages 334–
350. Springer.
Marc-Alexandre Côté, Akos Kádár, Xingdi Yuan, Ben Kybartas, Tavian Barnes, Emery Fine, James Moore, Matthew Hausknecht, Layla El Asri, Mahmoud Adada, et al. 2018. Textworld: A learning environment for text-based games. In *Workshop on Computer Games*, pages 41–75. Springer.
Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. 2022a. Language models as zeroshot planners: Extracting actionable knowledge for embodied agents. *arXiv preprint arXiv:2201.07207*.
Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, et al.
2022b. Inner monologue: Embodied reasoning through planning with language models. arXiv preprint arXiv:2207.05608.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.
Junnan Li, Dongxu Li, Caiming Xiong, and Steven C. H.
Hoi. 2022. Blip: Bootstrapping language-image pretraining for unified vision-language understanding and generation. In *ICML*.
Vincent Micheli and Francois Fleuret. 2021. Language models are few-shot butlers. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 9312–9318, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.
Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Côté, Yonatan Bisk, Adam Trischler, and Matthew Hausknecht. 2020. Alfworld: Aligning text and embodied environments for interactive learning. *arXiv* preprint arXiv:2010.03768.
Aravind Srinivas, Allan Jabri, Pieter Abbeel, Sergey Levine, and Chelsea Finn. 2018. Universal planning networks: Learning generalizable representations for visuomotor control. In *International Conference on* Machine Learning, pages 4732–4741. PMLR.
Jiankai Sun, De-An Huang, Bo Lu, Yun-Hui Liu, Bolei Zhou, and Animesh Garg. 2022. Plate: Visuallygrounded planning with transformers in procedural tasks. *IEEE Robotics and Automation Letters*,
7(2):4924–4930.
Yansong Tang, Dajun Ding, Yongming Rao, Yu Zheng, Danyang Zhang, Lili Zhao, Jiwen Lu, and Jie Zhou.
2019. Coin: A large-scale dataset for comprehensive instructional video analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1207–1216.
Zhenhailong Wang, Manling Li, Ruochen Xu, Luowei Zhou, Jie Lei, Xudong Lin, Shuohang Wang, Ziyi Yang, Chenguang Zhu, Derek Hoiem, Shih-Fu Chang, Mohit Bansal, and Heng Ji. 2022. Language models with image descriptors are strong few-shot video-language learners.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. 2022.
React: Synergizing reasoning and acting in language models. *arXiv preprint arXiv:2210.03629*.
Andy Zeng, Adrian S. Wong, Stefan Welker, Krzysztof Choromanski, Federico Tombari, Aveek Purohit, Michael S. Ryoo, Vikas Sindhwani, Johnny Lee, Vincent Vanhoucke, and Peter R. Florence. 2022. Socratic models: Composing zero-shot multimodal reasoning with language. *ArXiv*, abs/2204.00598.
He Zhao, Isma Hadji, Nikita Dvornik, Konstantinos G
Derpanis, Richard P Wildes, and Allan D Jepson.
2022. P3iv: Probabilistic procedure planning from instructional videos with weak supervision. In *Proceedings of the IEEE/CVF Conference on Computer* Vision and Pattern Recognition, pages 2938–2948.
## A Appendix
Dimitri Zhukov, Jean-Baptiste Alayrac, Ramazan Gokberk Cinbis, David Fouhey, Ivan Laptev, and Josef Sivic. 2019. Cross-task weakly supervised learning from instructional videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3537–3545.
Experiment Settings We trained and evaluated our approach on a single RTX3090 GPU. For COIN
and Crosstask dataset processing, we transform the visual observations of a video segment into images.
Under our single image setting, we followed previous works and used the first frame of the video segment for the start visual observation while using the last frame to represent the goal visual observation.
Under our multiple-image setting, we uniformly sampled 9 images from the videos. The image size is 384*384 under the single image setting while the 9 images are concatenated and then resized to 384*384 under the multiple image setting.
For the baseline model, we used the original image captioning model of Blip. We used the prompt
"A picture of" for all the captioning samples. We set the min-length and the max-length of generation to 5 and 20 independently and set the number of beams to 3.
For the language planning side, we employed BART language model (Lewis et al., 2019). During the fine-tuning process, we set the batch size to 16 and used the Adam optimizer with lr = 10−5 and weight decay as 0.02. For the double retrieval side, we initialize the model with a BLIP pretrained model checkpoint. During training, we set the batch size to 4 and used an Adam optimizer with a learning rate of 10−5and 0.05 weight decay.
To get our main results on the COIN dataset, it costs about 12 hours to independently fine-tune the language model and train the double retrieval model.
Examples of output We give more examples of our Modularized Framework output in this section.
In Figure 6, we provide an example where our model makes a successful prediction. In Figure 7, we show an example where the language model fails. In Figure 5, we show an example where using the multi-image input gets the right prediction while using the single-image variant makes mistakes. It shows that the alignment ability from visual observations to step(action) space is still our model's bottleneck.
![10_image_0.png](10_image_0.png)
![10_image_1.png](10_image_1.png)
![10_image_2.png](10_image_2.png)
![10_image_3.png](10_image_3.png)
Figure 6: We present a perfect prediction example in this figure. We used single image as input and generate a plan of Horizon T = 4. We get all the steps right in this example.
| T = 3 | T = 4 | | | |
|---------------------|-------------------|-----------|----|-----------|
| Method | SR | mAcc mIoU | SR | mAcc mIoU |
| Prompt1 66.03 66.03 | 66.03 34.87 49.95 | 61.63 | | |
Prompt1 66.03 66.03 66.03 34.87 49.95 61.63 Prompt2 65.96 65.96 65.96 34.83 49.72 61.41 Prompt3 **67.37** 67.37 67.37 **35.43** 51.12 62.89 Table 7: Evaluation (%) of different language prompts on COIN dataset.
Impact of language model prompts We use three types of language model prompts to obtain the intermediate steps from the start step and the end step.
- Prompt 1: "Taking T − 2 steps from + a1 to aT + we need to."
- Prompt 2: "You start from a1. Your goal is aT . List T − 2 steps to do this."
- Prompt 3: "For Task d , given the first step and the last step, a1, aT . Predict the intermediate T − 2 steps."
Note that all the actions here are interpreted as their textual expression. The results of predicting the intermediate steps with the given three prompts are shown in Table 7. Experiments show that the design of the prompts do not have a major impact on the language planning performance. We suppose that it is because the fine-tuning process has make the generation process more stable. However, adding in the task name will still bring a visible increase. This increase is mainly brought by some overlapped step names. For example, the task PractiseTripleJump contains a sequence of steps of {"begin to run up", "do the first two jumps", "do the third jump", "begin to run up"},
while the task PractisePoleVault contains a sequence of steps of {"begin to run up", "begin to jump up", "fall to the ground", "begin to run up"}.
The "task name" label can help the language model distinguish between this two samples.
![11_image_0.png](11_image_0.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
We talked about the Limitations of our paper after the main paper, in the Limitation section.
✗ A2. Did you discuss any potential risks of your work?
We did not witness or perceive any way in which this paper could be used to cause a risk.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
In Abstract and Section 1. Introduction
✗ A4. Have you used AI writing assistants when working on this paper?
We did not use any AI writing assistants.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?**
We talked about the computational experiments in Section 4. Experiments
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
In section Appendix A.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
In section Appendix A.1
✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
In section Appendix A.1 and section 3. Methods D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
kulkarni-etal-2023-empirical | An Empirical Analysis of Leveraging Knowledge for Low-Resource Task-Oriented Semantic Parsing | https://aclanthology.org/2023.findings-acl.123 | Task-oriented semantic parsing has drawn a lot of interest from the NLP community, and especially the voice assistant industry as it enables representing the meaning of user requests with arbitrarily nested semantics, including multiple intents and compound entities. SOTA models are large seq2seq transformers and require hundreds of thousands of annotated examples to be trained. However annotating such data to bootstrap new domains or languages is expensive and error-prone, especially for requests made of nested semantics. In addition large models easily break the tight latency constraints imposed in a user-facing production environment. As part of this work we explore leveraging external knowledge to improve model accuracy in low-resource and low-compute settings. We demonstrate that using knowledge-enhanced encoders inside seq2seq models does not result in performance gains by itself, but jointly learning to uncover entities in addition to the parse generation is a simple yet effective way of improving performance across the board. We show this is especially true in the low-compute scarce-data setting and for entity-rich domains, with relative gains up to 74.48{\%} on the TOPv2 dataset. | # An Empirical Analysis Of Leveraging Knowledge For Low-Resource Task-Oriented Semantic Parsing
Mayank Kulkarni1, Aoxiao Zhong2∗, Nicolas Guenon des mesnards1**, Sahar Movaghati**1, Mukund Sridhar1, He Xie1, **Jianhua Lu**1 1Amazon Alexa AI 2Harvard University
{maykul,mesnarn,movas,harakere,hexie,jianhual}@amazon.com [email protected]
## Abstract
Task-oriented semantic parsing has drawn a lot of interest from the NLP community, and especially the voice assistant use-cases as it enables representing the meaning of user requests with arbitrarily nested semantics, including multiple intents and compound entities. SOTA models are large seq2seq transformers and require hundreds of thousands of annotated examples to be trained. However annotating such data to bootstrap new domains or languages is expensive and error-prone, especially for requests made of nested semantics. In addition large models easily break the tight latency constraints imposed in a user-facing production environment. As part of this work we explore leveraging external knowledge as a replacement for additional annotated data in order to improve model accuracy in low-resource and low-compute settings. We demonstrate that using knowledgeenhanced encoders inside seq2seq models does not result in performance gains by itself, but multitask learning to uncover entities in addition to the parse generation is a simple yet effective way of improving performance across the domains and data regimes. We show this is especially true in the low-compute low-data setting and for entity-rich domains, with relative gains up to 74.48% in some cases on the TOPv2 dataset.
## 1 **Introduction**
Fostered by NLP advances, virtual assistants such as Google Home or Alexa are becoming increasingly competent to address complex yet natural, everyday user needs. While requests as simple as
"turn off the living room lights when the movie starts" could not be fulfilled with legacy systems that assigned a single user intent to each utterance and a single slot label to each token in an utterance
(Mesnil et al., 2013; Liu and Lane, 2016), recent works on task-oriented semantic parsing (Gupta et al., 2018; Aghajanyan et al., 2020) represent utterance semantics with arbitrarily nested trees (Figure 1), thus handling the above use-case among others (e.g. multiple intents, cross-domain intents, compound entities, etc.). The research community tackles this task with success by treating it as a seq2seq generation task where a linearized semantic tree is predicted iteratively (Rongali et al., 2020),
but such approaches fall short when confronted by real-life constraints such as strict run-time latency and scarcity of quality training data. Manual data annotation of training examples is a costly and error-prone process, which is exacerbated as utterance target representations become richer (more nested). The impact of data scarcity has been quantified in recent years with the introduction of the TOPv2 benchmark (Chen et al., 2020) that provides low-resource scenarios for task-oriented parsing.
Popular approaches to overcome data scarcity include synthetic data augmentation (Feng et al.,
2021; Jia and Liang, 2016; Schick and Schütze, 2021), transfer learning (Ruder et al., 2019; Fan et al., 2017), and meta-learning (Gu et al., 2018; Huang et al., 2018; Wang et al., 2020). In this paper, we explore if we can model richer token representations for mentions by leveraging external knowledge, as mentions are fundamental to generating the correct parse. The backbone motivation lies in the observation that several everyday NLP applications involve real-life entities referenced in knowledge bases (for e.g. street names, sports events, or public figures). This information can be utilized for enhancing downstream NLP tasks. For example the request "play the *green line*" could refer to either a movie name or a song name, modeling this mention appropriately for the decoder could improve performance while generating a parse. This is particularly appealing in the low-data regime, for which rare entities are unlikely to be represented in the training data at all. Additionally, building entity embeddings through entity-focused modeling
∗ The work was done while at Amazon Alexa AI.
objectives has shown promising results in entity based NLP tasks such as named entity recognition
(Yamada et al., 2020) and entity linking (Wu et al.,
2020).
While there has been prior work to leverage knowledge for generation tasks (Guu et al., 2020; Izacard et al., 2022; Cao et al., 2020) this largely focused on unstructured text generation tasks such as Question-Answering or Entity Linking. To the best of our knowledge, we are the first to investigate its use in seq2seq models for task-oriented semantic parsing, a complex and structured text generation task.
We present an empirical analysis of using knowledge to improve accuracy of semantic parsing models, with a special focus on low-latency models such as small-decoder seq2seq models and nonauto-regressive models like RINE (Mansimov and Zhang, 2022). Our contributions are as follow:
- We benchmark three popular KnowledgeEnhanced encoders inside seq2seq models and show this way of leveraging knowledge does not consistently improve accuracy in the lowdata regimes for task-oriented semantic parsing generation. However when reformulated as a classification task we see promising results with knowledge-enhanced encoders.
- We propose a joint training objective combining semantic parsing and mention detection as a simple and effective approach to leverage external knowledge and improve accuracy. We find up to 74.48% relative gains over baselines for low-data settings and entity-rich domains.
- We quantify the benefits of source training for regular, knowledge-enhanced and low-latency models, in gradually increasing low-data scenarios.
## 2 **Related Work**
Task-oriented Semantic Parsing Semantic parsing refers to the task of mapping natural language queries into machine-executable representations.
Voice assistants typically transform a voice recording into text, that is further mapped to a backend exploitable representation containing the semantics of the request: the user intent, the invoked entities, relations between those entities, etc. Task-oriented parsing was popularized with the introduction of the TOP dataset (Gupta et al., 2018), and is usually treated as a seq2seq task where utterance tokens are copied into a semantic tree constructed auto-regressively (Rongali et al., 2020; Arkoudas et al., 2022). However such models are not always applicable in production environments with strict memory and latency constraints. This limitation is commonly addressed by reducing model sizes
(Jiao et al., 2019; Kasai et al., 2020) and leveraging non-auto-regressive modeling (Gu et al., 2017; Zhu et al., 2020; Mansimov and Zhang, 2022).
Knowledge-Enhanced LMs Retrieval-based seq2seq models such as REALM (Guu et al., 2020)
and ATLAS (Izacard et al., 2022) leverage factual knowledge from a corpus or knowledge-graph during training and inference, hence incur a considerable latency cost, despite attempts to make the retrieval more efficient (Wu et al., 2022). Given our low-latency setup, we focus on parametric knowledge that is learnt during the pre-training or fine-tuning process of large language models
(LLMs), resulting in embeddings that do not require explicit knowledge retrieval at inference.
Knowledge-enhanced pretraining focuses on modeling entities: WKLM (Xiong et al., 2019)
learns to determine if an entity was replaced with another entity of the same type in addition to Masked Language Modeling (MLM) and shows gains on downstream knowledge-intensive tasks such as Question-Answering (QA) and Relation Extraction (RE). LUKE (Yamada et al., 2020) explicitly models entity-embeddings through entityembedding prediction during MLM and entityentity self-attention layers during fine-tuning, with gains on Named Entity Recognition (NER), QA
and RE. KBIR (Kulkarni et al., 2022) learns to reconstruct keyphrases in a combination and extension of WKLM and SpanBERT (Joshi et al.,
2020), improving keyphrase extraction/generation tasks. Lastly, BLINK (Wu et al., 2020) learns entitydisambiguation by aligning entity surface forms to their descriptions resulting in rich entity embeddings. Work in the area of parametric knowledgeenhanced seq2seq models is limited to KeyBART
(Kulkarni et al., 2022) for Keyphrase Generation and GENRE (Cao et al., 2020) for Entity Disambiguation.
## 3 **Methods**
We explore two complementary methods for leveraging knowledge: (1) fine-tuning knowledgeenhanced encoders for task-oriented semantic pars-
![2_image_0.png](2_image_0.png)
ing inside seq2seq models, and (2) multi-tasking the parse generation with a mention detection task.
Task formulation We follow the task formulation of the Seq2Seq-PTR model as a sequence-tosequence generation setup (Rongali et al., 2020).
The source sequence is an utterance and the target sequence is a linearized representation of the semantic parse. The target sequence is modified to contain only intent and slot labels or pointers to tokens in the utterance. Following (Aghajanyan et al., 2020) and subsequent work we use the *decoupled* format that limits prediction to tokens that are leaves of slots1as it yielded better downstream performance in previous work. We illustrate the format used with an example from the TOPv2 dataset below:
Source: water parks in minneapolis Target: [IN:GET_LOCATION
[SL:CATEGORY_LOCATION @ptr0 @ptr1 ]
[SL:LOCATION_MODIFIER @ptr3]]
Each @ptritoken here points to the i th token in the source sequence. Here @ptr3 corresponds to the word *minneapolis*.
Proposed Architecture Based on the observation that many slot-values present in our task are actual real-life entities, we hypothesize that learning more effective representations of these slot-values may result in generating more accurate semantic parses as mentions play a critical role in understanding the utterance. We use knowledge-enhanced pretrained encoders (as described in Section 2) inside the Seq2Seq-PTR architecture used in Rongali et al.
(2020), extended to multitask training of parse generation and training of the encoder to perform token classification (mention detection), as it aligns with classification-based pre-training of the encoder. We anticipate that the multitask training will allow the knowledge-enhanced encoder representations to be attended and leveraged more effectively by the decoder generating the parse. Further, by modeling mentions inherently present in the annotated data, this serves well for low-resource use cases since we maximize the potential to learn from the data available.
Figure 1 illustrates our proposed architecture, whereby for a given input utterance [x1*, .., x*n]
we obtain encoder representation [e1*, ..., e*n], from which we jointly learn two tasks: a) Mention Detection and b) Parse Generation.
Mention Detection We frame this as a token classification task to identify spans corresponding to mentions using the BIO tagging schema. Given the input sequence containing two mention spans
[x0, x1] and [x3], the corresponding target labels are [B-MEN, I-MEN, O, B-MEN], where B represents the beginning of the span, I represents an intermediate label within the mention span and O
represents a non-mention span token. We only use this coarse-grained single entity-type label (MEN)
as this is not used for inference but rather only to guide learning better encoder representations to be used by the decoder. We use a cross-entropy loss to learn these model parameters:
$$L_{m}=-\sum_{c=1}^{3}y_{o,c}\log(p_{o,c})$$
Parse Generation Given the first t−1 generated tokens, the decoder generates the token at step t as follows: the decoder first produces a hidden state dtthrough a multi-layer, multi-head self-attention
(MHA) on the encoder hidden states and the decoder states so far, in line with the transformer decoder from Vaswani et al. (2017). The hidden state dtis fed into a dense layer to produce scores over the target vocabulary and weights are learnt using a reconstruction loss Lr.
As the loss scales are similar, we use an equally weighted joint loss combining the losses from both the task to update the model parameters.
## Lθ = Lr + Lm 4 **Experimental Setup**
Dataset We use a crowdsourced dataset called TOPv2 (Chen et al., 2020) for this empirical analysis. The dataset maps user queries to hierarchical representation as exemplified in Figure 1. The dataset contains 8 domains, such as Reminder (used to set alarms, reminders) and Navigation (used to get driving directions, traffic information). Some domains are more complex than others, by having larger catalogs and overall more nested semantics.
TOPv2 is a relevant testbed for virtual assistant understanding models in low-data settings, as it comes with different data regimes called Samples Per Intent and Slot (SPIS), for example 10 SPIS which means that each intent and slot label is present in only 10 different annotations.
Mention Distribution We use the FLAIR (Akbik et al., 2019, 2018) NER model2to tag entities and then leverage BLINK3(Wu et al., 2020) to link entities to get their canonical surface form when available. Entity-type information is only used to facilitate linking. Table 1 shows the entity distribution across the various domains of the TOPv2 dataset. This leads us to pick the following domains for our analysis:
- **Event**, which has the highest percentage of utterances that contain entities, serving as an ideal candidate to test our hypothesis.
2https://huggingface.co/flair/
ner-english-large 3https://github.com/facebookresearch/BLINK
- **Navigation**, which has the second highest entity presence and happens to be the domain with the most complex semantics (deepest trees, large catalogs).
- **Reminder**, which has the second least number of entities per utterance. We consider this domain to evaluate the impact of our proposed method for entity-scarce domains4.
Because FLAIR NER tagger is limited to identify only three types of entities: Organizations
(ORG), Persons (PER) and Locations (LOC), we extend our entity set by using slot-values present in the TOPv2 annotations. We manually select slots labels that are close to real-life entity types, but which slot values might not be recognized by the NER tagger. We describe the slots used for each domain in Appendix A.2.
The updated mention distribution is illustrated in Table 2. We see that trends between domains stay relatively the same, however there are significantly more utterances now containing entities. Event and Navigation almost double the number of average entities present in their utterances: from 1.04 to 1.76 for Event, and 1.31 to 1.86 for Navigation.
For Reminder it remains more or less the same as before (1.03 vs 1.07). Even by adding those slots there isn't a lot of salient information to be captured in the form of entities in Reminder.
Our experiments show that using a combination of the entities tagged by FLAIR NER + BLINK
and those tagged by the slot-matching mechanism described in A.2, was more effective than using either of these methods independently. We consider the spans of the tagged entities as labels. In the case both systems flag overlapping spans of text, longer spans override the shorter spans in case of nested entities as shown in A.3.
Source Training A common scenario for deployed production systems that serve N domains is to scale to a new N+1 th domain. We assume the existing N domains have longer established, larger datasets that we can use as training data to bootstrap the new domain, on which we want to fine-tune and perform evaluation.
Models Given our resource-constrained setting, all models we evaluate are *base* variants of the 4The number of entities is small but not zero, as having zero would not be different from simple (non-multitask) training.
Domain Alarm Event Messaging Music Navigation Reminder Timer Weather
Train Test Train Test Train Test Train Test Train Test Train Test Train Test Train Test
Avg Entities (All Utt.) 0.00 0.00 *0.37 0.37* 0.16 0.16 0.06 0.05 **0.37 0.38** 0.04 0.03 0.00 0.00 0.21 0.20
![4_image_1.png](4_image_1.png)
Avg Entities (Utt. w/ entity) 1.00 1.00 *1.04 1.03* 1.08 1.09 1.01 1.01 **1.31 1.31** 1.03 1.03 1.00 1.00 1.05 1.04 % utterances w/ entities 0% 0% **36% 36%** 15% 14% 6% 5% *28% 29%* 4% 3% 0% 0% 20% 19%
Total Utterances 20,430 7,123 9,170 2,654 10,018 3,048 11,563 4,184 20,998 6,075 17,840 5,767 11,524 4,252 23,054 5,682
Table 1: Entity distributions (FLAIR NER and BLINK Entity Disambiguation) across domains in the TOPv2 dataset.
Table 2: Updated mention distributions after manually adding some of domain's slot labels to valid entity types.
publicly available models, unless specified otherwise. We work with both seq2seq pre-trained transformer models and pre-trained transformer encoders stitched with a transformer decoder as done in Rongali et al. (2020). We primarily experiment with:
| Domain | Event | Navigation | Reminder | | | |
|-------------------------------|---------|--------------|------------|-------|------|------|
| Train | Test | Train | Test | Train | Test | |
| Avg Entities (All Utt.) | 1.46 | 1.50 | 1.23 | 1.23 | 0.72 | 0.70 |
| Avg Entities (Utt. w/ entity) | 1.76 | 1.80 | 1.86 | 1.88 | 1.07 | 1.06 |
| % utterances w/ entities | 83% | 83% 66% | 66% 67% | 66% | | |
- BART: We use the pre-trained encoderdecoder BART-base5(Lewis et al., 2020) as our baseline for the sequence generation task.
- RoBERTa2BART: We use the RoBERTabase6(Liu et al., 2019) as the encoder and randomly initialize a six layer decoder in the same configuration as the BART-base decoder.
This largely serves as a baseline to LUKE as a parametric non-knowledge-enhanced encoder i.e. a vanilla encoder.
- LUKE2BART: We use the LUKE-base7as the encoder and randomly initialize a six layer decoder in the same configuration as the BARTbase decoder. LUKE8serves as our parametric knowledge-enhanced encoder in evaluations.
Lightweight Architecture Variants As we explore the computation constrained setting with limited latency budget, we also implemented our models using a Single Layer Decoder (SLD) while maintaining the same size encoder. We do this as the largest portion of the latency footprint comes 5https://huggingface.co/facebook/bart-base 6https://huggingface.co/roberta-base 7https://huggingface.co/studio-ousia/
luke-base 8It is directly comparable to ROBERTA in architecture and size since we use only the token embeddings, and not the entity-entity self-attention layers. For results including these too see Section 6.
from the passes through the decoder, since autoregressive decoding requires token representation
![4_image_0.png](4_image_0.png)
to travel all their way up to the decoder as many times as there are tokens to generate. As such we propose BART2SLD, RoBERTa2SLD, and LUKE2SLD variants with a randomly initialized single layer decoder. Another angle to latency reduction is to use non-auto-regressive modeling, such as RINE (Mansimov and Zhang, 2022), a RoBERTa-based approach that achieve state-ofthe-art accuracy on low and high-resource TOP dataset while being 2-3.5 times faster than autoregressive counterparts. In this work we experiment with *rine-roberta* (the original RINE model), and rine-luke, where we instead initialize the encoder model weights with the LUKE-base parameters.
Implementation Details We use HuggingFace Transformers (Wolf et al., 2020) for seq2seq modeling architecture to ensure reproducibility. We do not tokenize intent and slot tags, but instead learn embeddings from scratch. For all our experiments we use 8 V100 NVIDIA GPUs, with batch sizes of 32 per GPU with a gradient accumulation of 2 with FP16 enabled. Source training uses a learning rate of 1e−5 over 100 epochs and fine-tuning uses a learning rate of 8e−5 over 50 epochs. Both use the Adam optimizer (Kingma and Ba, 2015). We use beam search decoding with beam size 3, and a maximum generation length of 128.
Evaluation We report Exact Match (EM) accuracy score metrics in line with previous literature
(Chen et al., 2020; Aghajanyan et al., 2020; Rongali et al., 2020). Exact match accuracy is the most important metric to report as it strictly penalizes any incorrectly generated intermediate tokens as the end-performance of a semantic parsing system would result in a failure even for partially correct answers.
## 5 **Results**
All our results are *source trained + fine-tuned*, unless specified otherwise. We perform 3 runs across each experiment setting and report average scores and standard deviations. Our findings are as follows:
Knowledge-enhanced encoders don't improve generative semantic parsing Table 3 shows results for the six layer (full) decoder setting and Table 4 shows results for a single layer decoder.
In both the Multitasking and Non-Multitask setting, we see that the best performing model across data-regimes and domains is not consistently the knowledge-enhanced encoder LUKE. In the full decoder setting, LUKE-encoder based models perform on par but no better than the vanilla RoBERTaencoder based models. We also note that both these model underperform BART, but that the gap bridges as we add more training samples. In the light-decoder setting, we also see similar trends, however an interesting finding is that BART tends to underperform when compared to RoBERTa and LUKE, even in the full data setting. This could be attributed to the smaller encoder size for BART.
The above findings are contrary to expected performance improvements typically seen using knowledge-enhanced encoders for other entityrelated tasks such as NER, RE and QA. We believe the reason for this is that the aforementioned tasks are all classification-based tasks that are able to leverage the entity representations in making decisions on class-types, but in contrast Task-Oriented Semantic Parsing is a complex generation task.
Even though entities play a critical role, the entity representations are not able to effectively guide the from-scratch decoder. This problem is alleviated to a certain extent through the Multitask training that we hypothesize is able to jointly learn representations of entities that will guide the decoder, but these jointly learnt representations do not necessarily benefit from the knowledge-enhanced encoder. Further, the application of Source Training potentially wipes out any gains the knowledgeenhanced encoder had over the vanilla counterparts as they have seen sufficient data to negate the gains through knowledge-enhancements as discussed in Section 6.
However knowledge-enhanced encoders can bring gains when reformulating parsing as a classification task as shown in Table 5 with the RINE approach that inserts utterance tokens in a semantic tree by recursively predicting triplets
(*label, start position, end position*) until it predicts termination. We do not penalize misplaced non-semantic tokens in metric calculation. Recasting the generation task to a classification task serves to be more in-line with how LUKE was pre-trained. Further, we also do not require any form of source training in this setting. We observe that *rine-luke* outperforms *rine-roberta* in most scenarios for the two entity-rich domains, but not on the entity-poor domain Reminder.
Multitasking with mention detection is an efficient way to leverage knowledge and improves performance across the board on the two TOPv2 domains with strong entity presence (Navigation and Event), especially in the lightweight decoder setting (up to 74.48%, Table 4), but also nonnegligible in the full decoder setting (up to 8.60%,
Table 3). When trained in domains with a weak entity presence (Reminder) multitasking serves as noise in the loss and results in a worse performing model for both full (-31.14%) and lightweight decoder (-82.83%). We also observe minor regression on 10 SPIS in Event but not in other data regimes for the domain, leading us to believe this may be an aberration. We find that while for certain settings such as Navigation+Lightweight decoder trained w/ MT knowledge-enhanced encoders outperform their vanilla counterparts, this behavior is not consistent across domains and decoder settings. Hence while the gains through multitasking remain consistent throughout, KE encoders do not play a large role in these gains. However, we also find that in the full decoder setting in the Navigation domain, LUKE seems to benefit the most from the Multitasking across all data regimes albeit performing slightly worse than RoBERTa still. Finally we also observe that as more data is added to the training set, the effectiveness of the Multitask learning reduces drastically. We believe this helps demonstrates that Multitask learning is most effective in the lower-data regime by leveraging knowledge available in the data.
Source-training is essential as shown in Table 8 in which KE models on their own are not sufficient to reach reasonable accuracy, as is the case for BART and was reported in Chen et al. (2020).
We show that source-training improves accuracy by up to 86.36% in full data regimes, with larger percentage gains for LUKE and RoBERTa when compared to BART, further demonstrating that Source Training is required to tune the encoders to the generation task as knowledge-enhanced pre-training
Data Regime 10 SPIS 25SPIS 50SPIS Full Data Training w/o MT w/ MT rel improv w/o MT w/ MT rel improv w/o MT w/ MT rel improv w/o MT w/ MT rel improv
Navigation
bart 50.28 ± *3.33* 51.86 ± **1.11** 3.14% 56.58 ± *3.12* 58.07 ± **0.25** 2.63% 60.8 ± 2.82 61.9 ± **0.56** 1.81% 83.7 ± 0.43 83.69 ± 0.1 -0.01% roberta2bart 44.48 ± 2.46 46.77 ± 1.95 5.15% 53.21 ± 1.39 54.75 ± 1.35 2.89% 61.04 ± 0.96 61.8 ± *1.68* 1.25% 83.95 ± 0.47 84.62 ± **0.16 0.80%** luke2bart 43.35 ± 1.83 47.08 ± 0.81 **8.60%** 53.11 ± 0.58 56.16 ± 1.61 **5.74%** 58.33 ± 2.57 60.81 ± 2.8 **4.25%** 83.39 ± 1 83.96 ± *0.28* 0.68%
Event
bart 63.85 ± *1.17* 61.77 ± 0.14 -3.26% 67.39 ± 1.51 67.81 ± 0.66 0.62% 71.31 ± 0.51 72.14 ± *0.58* **1.16%** 83.71 ± 0.62 83.32 ± 0.44 -0.47% roberta2bart 65.12 ± **2.68** 61.29 ± 2.45 -5.88% 67.13 ± 0.85 68.74 ± **0.47 2.40%** 71.06 ± 0.68 71.29 ± 0.38 0.32% 84.29 ± *0.39* 83.9 ± 0.46 -0.46%
luke2bart 63.07 ± 3.83 61.09 ± 3.3 **-3.14%** 67.6 ± 0.97 68.05 ± 0.3 0.67% 72.24 ± **0.89** 71.57 ± 0.78 -0.93% 84.05 ± 0.38 84.39 ± **0.27 0.40%**
Reminder
bart 52.29 ± 1.4 39.97 ± 1.61 **-23.56%** 62.23 ± 0.99 47.24 ± 2.74 -24.09% 68.04 ± 1.38 59.83 ± 1.08 **-12.07%** 82.88 ± *0.38* 82.59 ± 0.25 -0.35%
roberta2bart 54.5 ± *1.35* 37.53 ± 1.74 -31.14% 65.16 ± *1.11* 50.04 ± 1.73 -23.20% 69.28 ± *1.36* 59.71 ± 2.67 -13.81% 82.69 ± 0.11 82.64 ± 0.27 -0.06%
luke2bart 54.69 ± **1.22** 40.33 ± 1.53 -26.26% 66.12 ± **1.43** 52.52 ± 1 -20.57% 70.39 ± **1.19** 61.55 ± 0.67 -12.56% 82.35 ± 0.11 82.94 ± **0.18 0.72%**
Table 3: The impact of Multitask (MT) training on Exact Match (EM) performance across models and domains of the TOPv2 dataset in a Full Decoder setting. **Bold** is best performing and *Italic* is second best.
Data Regime 10 SPIS 25SPIS 50SPIS Full Data Training w/o MT w/ MT rel improv w/o MT w/ MT rel improv w/o MT w/ MT rel improv w/o MT w/ MT rel improv
Navigation
bart2SLD 5.59 ± 1.74 5.88 ± 1.62 5.19% 16.88 ± 1.46 19.03 ± 0.75 12.74% 28.2 ± 5.57 27.54 ± 0.94 -2.34% 78.69 ± 0.35 78.13 ± 0.71 -0.71%
roberta2SLD 5.02 ± 1.64 8.63 ± *3.16* **71.91%** 14.03 ± 2.27 24.48 ± **2.66 74.48%** 27.58 ± 1.09 35.82 ± *5.63* **29.88%** 80.44 ± 0.27 80.9 ± *0.25* 0.57% luke2SLD 6.76 ± 1.11 9.13 ± **3.48** 35.06% 18.47 ± 0.76 20.81 ± *3.55* 12.67% 30.74 ± 3.33 39.42 ± 5.5 28.24% 80.61 ± 0.5 82.29 ± **0.63 2.08%**
Event
bart2SLD 24.84 ± **4.49** 17.65 ± *1.37* -28.95% 42.17 ± **1.92** 37.72 ± *4.34* -10.55% 56.92 ± *1.68* 54.39 ± 1.07 -4.44% 79.88 ± 0.65 80.13 ± 0.25 0.31%
roberta2SLD 12.93 ± 0.79 14.9 ± 0.33 **15.24%** 24.04 ± 0.9 30.09 ± 7.32 25.17% 41.26 ± 4.9 57.13 ± **2.86 38.46%** 81.15 ± *0.28* 81.96 ± **0.64** 1.00% luke2SLD 12.95 ± 5.88 13.62 ± 4.54 5.17% 22.74 ± 10.19 31.68 ± 5.74 39.31% 45.09 ± 9.12 54.28 ± 2.73 20.38% 80.04 ± 1.88 80.99 ± 0.28 **1.19%**
Reminder
bart2SLD 27.54 ± 1.96 7.56 ± 3.64 **-72.55%** 43.72 ± 2.03 22.11 ± 3.57 **-49.43%** 57.27 ± 0.81 37.71 ± 1.39 **-34.15%** 76.31 ± 0.72 76.15 ± 0.71 -0.21%
roberta2SLD 35.82 ± *2.74* 6.15 ± 2.51 -82.83% 51.31 ± *4.46* 24.46 ± 5.65 -52.33% 62.77 ± *2.25* 39 ± 1.36 -37.87% 79.85 ± 0.55 80.07 ± **0.64 0.28%**
luke2SLD 39.42 ± **3.96** 8.6 ± 2.73 -78.18% 54.03 ± **1.08** 23.11 ± 2.55 -57.23% 64.1 ± **1.58** 38.96 ± 5.04 -39.22% 80.03 ± 0.3 80.04 ± *0.36* 0.01%
Table 4: The impact of Multitask (MT) training on Exact Match (EM) performance across models and domains of the TOPv2 dataset in the Light Decoder setting. **Bold** is best performing and *Italic* is second best.
Table 5: RINE model EM using RoBERTa-base encoder
(rine-roberta) and LUKE-base encoder (rine-luke).
is typically classification-based. We also find that source-training drastically improves performance especially in low-data regimes with gains of up to 1262.20%. However, as more training data is made available, the impact of Source Training also drops quickly. In the absence of further pretraining of KE models, source training is a required step, and can actually be viewed as pretraining step. We also explored if using a pre-trained decoder from BARTbase helps in improving performance but found no significant gains hence skipped the results for brevity.
Table 6: Exact Match (EM) performance improvements and degradations in an effort to further augment the knowledge-encoder LUKE on the Navigation domain of TOPv2.
## 6 **Case Study On Knowledge-Enhanced** Encoders
To better understand the lack of performance boost by KE encoders we propose a deeper dive on using LUKE as well as two alternative KE encoders.
Further enhancements to LUKE only result in limited gains For our previous experiments we restrict to using only LUKE's token embeddings to make a fair comparison with RoBERTa. However the original LUKE encoder is armed with many more parameters, including the entity-entity self-attention that allows us to leverage richer entity embeddings. We explore using the entity em-
| Data Regime | 10 SPIS | 25 SPIS |
|-------------------------------|--------------|--------------|
| Navigation luke2bart | 43.35 ± 1.83 | 53.11 ± 0.58 |
| luke2bart + linked entities | 45.85 ± 2.35 | 52.91 ± 2.16 |
| luke2bart + unlinked entities | 44.91 ± 1.68 | 51.75 ± 1.51 |
| luke2bart + unlinked mentions | 42.49 ± 3.52 | 51.72 ± 0.71 |
| luke2bart + MHA | 40.14 ± 1.54 | 50.04 ± 2.79 |
| Data Regime | 10 SPIS | 25 SPIS | 50 SPIS | Full Data |
|-------------------------|-----------------------------------------------------|-----------|-----------|-------------|
| Navigation rine-roberta | 37.63 ± 2.21 55.33 ± 0.44 61.15 ± 1.11 80.01 ± 0.13 | | | |
| rine-luke | 37.22 ± 0.82 56.88 ± 1.91 62.85 ± 1.12 80.02 ± 0.36 | | | |
| Event rine-roberta | 26.91 ± 2.46 43.50 ± 0.41 65.12 ± 2.48 79.98 ± 4.87 | | | |
| rine-luke | 30.40 ± 3.42 46.82 ± 4.94 64.98 ± 1.59 82.97 ± 0.10 | | | |
| Reminder rine-roberta | 34.47 ± 2.90 54.26 ± 1.38 64.63 ± 1.23 83.45 ± 0.61 | | | |
| rine-luke | 34.79 ± 3.19 52.54 ± 2.78 64.23 ± 1.12 83.20 ± 0.23 | | | |
| Data Regime | 10 SPIS | 25SPIS | | | | |
|-------------------------|--------------|--------------|------------|--------------|--------------|------------|
| Training | w/o MT | w/ MT | rel improv | w/o MT | w/ MT | rel improv |
| Navigation roberta2bart | 44.48 ± 2.46 | 46.77 ± 1.95 | 5.15% | 53.21 ± 1.39 | 54.75 ± 1.35 | 2.89% |
| luke2bart | 43.35 ± 1.83 | 47.08 ± 0.81 | 8.60% | 53.11 ± 0.58 | 56.16 ± 1.61 | 5.74% |
| kbir2bart* | 41.42 ± 0.91 | 43.21 ± 1.28 | 4.32% | 51.29 ± 0.54 | 52.42 ± 0.63 | 2.20% |
| blink2bart* | 33.08 ± 3.77 | 40.75 ± 0.84 | 23.19% | 45.57 ± 2.28 | 50.16 ± 0.88 | 10.07% |
Table 7: Exact Match (EM) performance by leveraging other knowledge-enhanced encoders on the Navigation domain of TOPv2. *Only large variants of models are available publicly.
| Data Regime | 10 SPIS | 25SPIS | 50SPIS | | | | | | |
|-----------------|-----------|----------|------------|--------|-------|------------|--------|-------|------------|
| Training | w/o ST | w/ ST | rel improv | w/o ST | w/ ST | rel improv | w/o ST | w/ ST | rel improv |
| Navigation bart | 10.65 | 50.28 | 372.11% | 40.25 | 56.58 | 40.57% | 50.67 | 60.8 | 19.99% |
| roberta2bart | 4.25 | 44.48 | 946.59% | 24.3 | 53.21 | 118.97% | 39.05 | 61.04 | 56.31% |
| luke2bart | 6.12 | 43.35 | 608.33% | 24.15 | 53.11 | 119.92% | 37.55 | 58.33 | 55.34% |
| Events bart | 7.27 | 63.85 | 778.27% | 25.77 | 67.39 | 161.51% | 50.9 | 71.31 | 40.10% |
| roberta2bart | 4.86 | 65.12 | 1239.92% | 10.32 | 67.13 | 550.48% | 38.13 | 71.06 | 86.36% |
| luke2bart | 4.63 | 63.07 | 1262.20% | 13.53 | 67.6 | 399.63% | 39.68 | 72.24 | 82.06% |
Table 8: The impact of Source Training (ST) on Exact Match (EM) performance across models and domains of the TOPv2 dataset beddings in various forms and methods as we report in Table 6. *luke2bart+linked entities* finds the corresponding entity representation from LUKE's entity vocab and concatenates the embedding to the token representation. We also explore the approach *luke2bart+unlinked entities* that does not rely on finding a match in LUKE's entity vocabulary but rather generates the entity embedding based only on the given surface form. While the two aformentioned approaches are run only on entities tagged by FLAIR NER and linked with BLINK, we also try *luke2bart+multitask entities*,
where the setup is similar to luke2bart+unlinked entities but leverages a larger entity set, which is actually the entity set used for the Multitasking, and uses entity embeddings for each surface form. We find that *luke2bart+linked entities* is the most effective methodology for 10 SPIS (+2.5 EM),
however gains are neutralized as data is added (-
0.2 EM). *luke2bart+unlinked entities* serves as a slightly more resource efficient way of improving performance as it skips the need to link entities before using them (+1.56 EM). Most interestingly, in contrast to the multitask learning setup we find that only concatentating representations of the slotvalues in *luke2bart+unlinked mentions* actually hurts model performance (-0.86 EM). We believe the reason for this is that without the jointly learnt embeddings a higher number of concatenations to token representations introduces more noise than useful information, especially in low-data settings where there is insufficient data to learn across many parameters. Lastly, along the same lines of having too many parameters to learn from too few data, we made the additional finding that in the pointer generator network used by the decoder, using Dot Product Attention (DPA) is more effective than Multi-Head Attention (MHA) as it contains fewer parameters to learn.
## Other Ke Encoders Than Luke Lead To Similar
conclusions We explore using other knowledgeenhanced encoders: KBIR and BLINK. KBIR is potentially better suited as it is pre-trained to exploit keyphrases, which are closer to slot-values than entities. However Table 7 shows that KBIR
performance is worse than its LUKE and RoBERTa counterparts (-3.87 EM). Using BLINK as the pretrained encoder also results in sub-par performance
(-6.33 EM). This further strengthens our claim that the knowledge-enhanced encoders do not automatically enhance model performance. However, we see that Multitasking still continues to largely benefit both these encoders too, with BLINK making the largest gains of up to 23.19%.
Any potential KE encoder gains are diluted by Source Training We further investigated if KE
encoders could have had a larger impact with less source training, for e.g. over fewer training epochs.
We plot training curves for all our settings as seen in Figure 2. Our main observation here is that in the multitask setting LUKE outperforms RoBERTa in the single layer decoder setups early in training.
However, as we train over more steps, the performance from both models converge. Further, in all other settings LUKE shows no discernible edge over RoBERTa during Source Training.
## 7 **Conclusion & Future Work**
We presented an empirical analysis of how we can leverage external knowledge for task-oriented semantic parsing in the low-resource and lowcompute settings, by conducting a rigorous set of experiments. We demonstrated that simply using a knowledge-enhanced encoder is not sufficient to improve performance over baselines for the complex task of sequence generation, but shows promising result when the task is reformulated as a classification task. We presented a multitask learning framework that leverages external knowledge and requires little to no extra data annotation, and demonstrated its effectiveness in the low-data and low-compute settings. Future work could probe the type of knowledge learned by this method, and attempt to apply it to other entity-rich tasks, across model architectures. It could also explore an indepth error analysis of where knowledge-enhanced encoders fail in order to address these shortcomings.
Further we could extend this work for retrievalbased seq2seq models to improve task-oriented semantic parsing.
## Limitations
We concede that there are differences in the number of parameters between the BART models when compared to the RoBERTa and LUKE counterparts.
However, as per our result discussions and observations, the gains are orthogonal to the encoder used and the differences in the base models are not as significant when comparing the larger counterparts. We note that we also explored seq2seq
![8_image_0.png](8_image_0.png)
pre-trained knowledge-enhanced models like KeyBART and GENRE, however both resulted in underwhelming performance compared to BART. Further exploration is required in improving performance for such models. We also note that while we demonstrate gains by switching to a classificationbased approach in RINE, such models are limited in other generation task capabilities such as translation or summarization. We will release the data and code used for this work, but emphasize that some processing was done over the raw TOPv2 dataset, namely reconstructing source utterances directly from the provided target instead of using the provided source, as we encountered mismatches when constructing pointers. The source was then lowercased.
## Ethics Statement
We use publicly available data sets in our experiments with permissive licenses for research experiments. We do not release new data or annotations as part of this work.
## Acknowledgements
We would like to thank Ryan Gabbard, Amir Saffari, Kai-Wei Chang, Haidar Khan, Thomas Gueudre and Chandana Prakash for insightful discussions and feedback during the development of this work.
## References
Armen Aghajanyan, Jean Maillard, Akshat Shrivastava, Keith Diedrick, Mike Haeger, Haoran Li, Yashar Mehdad, Ves Stoyanov, Anuj Kumar, Mike Lewis, et al. 2020. Conversational semantic parsing. *arXiv* preprint arXiv:2009.13655.
Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. 2019.
FLAIR: An easy-to-use framework for state-of-theart NLP. In *NAACL 2019, 2019 Annual Conference* of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 54–59.
Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018.
Contextual string embeddings for sequence labeling.
In *COLING 2018, 27th International Conference on* Computational Linguistics, pages 1638–1649.
Konstantine Arkoudas, Nicolas Guenon des Mesnards, Melanie Rubino, Sandesh Swamy, Saarthak Khanna, Weiqi Sun, and Khan Haidar. 2022. Pizza: A new benchmark for complex end-to-end task-oriented parsing. *arXiv preprint arXiv:2212.00265*.
Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. 2020. Autoregressive entity retrieval.
CoRR, abs/2010.00904.
Xilun Chen, Asish Ghoshal, Yashar Mehdad, Luke Zettlemoyer, and Sonal Gupta. 2020. Low-resource domain adaptation for compositional task-oriented semantic parsing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5090–5100, Online. Association for Computational Linguistics.
Xing Fan, Emilio Monti, Lambert Mathias, and Markus Dreyer. 2017. Transfer learning for neural semantic parsing. *arXiv preprint arXiv:1706.04326*.
Steven Y Feng, Varun Gangal, Jason Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura, and Eduard Hovy. 2021. A survey of data augmentation approaches for nlp. *arXiv preprint arXiv:2105.03075*.
Jiatao Gu, James Bradbury, Caiming Xiong, Victor OK Li, and Richard Socher. 2017. Nonautoregressive neural machine translation. arXiv preprint arXiv:1711.02281.
Jiatao Gu, Yong Wang, Yun Chen, Kyunghyun Cho, and Victor OK Li. 2018. Meta-learning for lowresource neural machine translation. arXiv preprint arXiv:1808.08437.
Sonal Gupta, Rushin Shah, Mrinal Mohit, Anuj Kumar, and Mike Lewis. 2018. Semantic parsing for task oriented dialog using hierarchical representations. arXiv preprint arXiv:1810.07942.
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. REALM: retrievalaugmented language model pre-training. *CoRR*,
abs/2002.08909.
Po-Sen Huang, Chenglong Wang, Rishabh Singh, Wentau Yih, and Xiaodong He. 2018. Natural language to structured query generation via meta-learning. arXiv preprint arXiv:1803.02400.
Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane DwivediYu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022. Atlas: Few-shot learning with retrieval augmented language models.
Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. *arXiv preprint* arXiv:1606.03622.
Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2019.
Tinybert: Distilling bert for natural language understanding. *arXiv preprint arXiv:1909.10351*.
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. *Transactions of the Association for* Computational Linguistics, 8:64–77.
Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, and Noah A Smith. 2020. Deep encoder, shallow decoder: Reevaluating non-autoregressive machine translation. *arXiv preprint arXiv:2006.10369*.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Mayank Kulkarni, Debanjan Mahata, Ravneet Arora, and Rajarshi Bhowmik. 2022. Learning rich representation of keyphrases from text. In *Findings of the* Association for Computational Linguistics: NAACL
2022, pages 891–906, Seattle, United States. Association for Computational Linguistics.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Bing Liu and Ian Lane. 2016. Attention-based recurrent neural network models for joint intent detection and slot filling. *arXiv preprint arXiv:1609.01454*.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692.
Elman Mansimov and Yi Zhang. 2022. Semantic parsing in task-oriented dialog with recursive insertionbased encoder. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 36, pages 11067–11075.
Grégoire Mesnil, Xiaodong He, Li Deng, and Yoshua Bengio. 2013. Investigation of recurrent-neuralnetwork architectures and learning methods for spoken language understanding. In *Interspeech*, pages 3771–3775.
Subendhu Rongali, Luca Soldaini, Emilio Monti, and Wael Hamza. 2020. Don't parse, generate! a sequence to sequence architecture for task-oriented semantic parsing. In *Proceedings of The Web Conference 2020*, pages 2962–2968.
Sebastian Ruder, Matthew E Peters, Swabha Swayamdipta, and Thomas Wolf. 2019. Transfer learning in natural language processing. In *Proceedings of the 2019 conference of the North American* chapter of the association for computational linguistics: Tutorials, pages 15–18.
Timo Schick and Hinrich Schütze. 2021. Generating datasets with pretrained language models. *arXiv* preprint arXiv:2104.07540.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *CoRR*, abs/1706.03762.
Bailin Wang, Mirella Lapata, and Ivan Titov. 2020.
Meta-learning for domain generalization in semantic parsing. *arXiv preprint arXiv:2010.11988*.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Ledell Wu, Fabio Petroni, Martin Josifoski, Sebastian Riedel, and Luke Zettlemoyer. 2020. Scalable zeroshot entity linking with dense entity retrieval. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 6397–6407, Online. Association for Computational Linguistics.
Yuxiang Wu, Yu Zhao, Baotian Hu, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2022.
An efficient memory-augmented transformer for knowledge-intensive nlp tasks.
Wenhan Xiong, Jingfei Du, William Yang Wang, and Veselin Stoyanov. 2019. Pretrained encyclopedia:
Weakly supervised knowledge-pretrained language model. *CoRR*, abs/1912.09637.
Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, and Yuji Matsumoto. 2020. LUKE: Deep contextualized entity representations with entityaware self-attention. In *Proceedings of the 2020* Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6442–6454, Online. Association for Computational Linguistics.
Qile Zhu, Haidar Khan, Saleh Soltan, Stephen Rawls, and Wael Hamza. 2020. Don't parse, insert: Multilingual semantic parsing with insertion based decoding.
arXiv preprint arXiv:2010.03714.
## A **Appendix** A.1 **Rine Implementation Details**
RINE uses an encoder, in this case RoBERTa-base, to encode the input sequence into hidden vectors, then uses a sequence classification head to predict the output label. It uses attention probabilities from the first and second attention head of the last attention layer to predict the begin and end positions, respectively. Finally, it trains the model by optimizing the combined three objectives, label loss, start position loss and end position loss.
The training data for RINE is different from seq2seq models. Unlike seq2seq models, RINE predicts a label to insert into the input sequence.
Hence, to train the model we need to create a dataset with partial parses, where each training example corresponds to inserting one more label into a partial linearized parse, creating a new nonterminal semantic node in the parse tree. Similar to RINE paper, we follow a top-down generation ordering to create pairs of partially constructed trees.
## A.2 **Slot Matching Schema**
Table 9 shows the slots we have used from each domain in TOPv2 while generating all for the slotvalue augmentation of the FLAIR and BLINK recognized entities. Note we need these slot label schemas for all domain as Source Training is conducted across all domain except one (the target domain) and thus we require this information during Source Training. We had two authors define this schema based on inter-annotator agreement and data analysis.
## A.3 **Multitask Entity Labeling Example**
For the utterance, "How long is the drive to 401 North Highway". In the case where FLAIR NER
identifies "401 North" as a *Location (LOC)* entitytype, whereas our slot-matching schema identifies
"401 North Highway" as it corresponds to the *Destination* Slot. Since these are overlapping spans from two systems we consider the longer span, which in this case leads to "401 North Highway" span tagged as an entity and "401 North" discarded.
| Domain | Entity Slots |
|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------|
| ALARM | N/A |
| EVENT | LOCATION, ORGANIZER_EVENT, CATEGORY_EVENT, NAME_EVENT, CATEGORY_LOCATION, ATTENDEE_EVENT, POINT_ON_MAP |
| MESSAGING CATEGORY_LOCATION, CATEGORY_EVENT, RECIPIENT, RESOURCE, LOCATION, CONTACT, SENDER MUSIC MUSIC_PLAYLIST_TITLE, MUSIC_PROVIDER_NAME, MUSIC_TRACK_TITLE, MUSIC_ARTIST_NAME, MUSIC_ALBUM_TITLE NAVIGATION LOCATION, DESTINATION, SOURCE, POINT_ON_MAP, CATEGORY_LOCATION, MUTUAL_LOCATION, LOCATION_WORK, LOCATION_CURRENT, NAME_EVENT, PATH, PATH_AVOID REMINDER PERSON_REMINDED, ORGANIZER_EVENT, CATEGORY_EVENT, ATTENDEE_EVENT, RECIPIENT, ATTENDEE, CONTACT, SENDER TIMER N/A WEATHER LOCATION, CONTACT | |
Table 9: Slots schema matching mechanism to detect mentions in all the TOPv2 Domains.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations
✓ A2. Did you discuss any potential risks of your work?
Ethics
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4
✓ B1. Did you cite the creators of artifacts you used?
Section 2 and Section 4
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Ethics
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Ethics
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Ethics
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Table 1
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Table 1 and Table 2
## C ✓ **Did You Run Computational Experiments?** Section 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 4 - Implementation Details The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4 - Implementation Details
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4 - Implementation Details D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
zhang-etal-2023-templm | {T}emp{LM}: Distilling Language Models into Template-Based Generators | https://aclanthology.org/2023.findings-acl.124 | While pretrained language models (PLMs) have greatly improved text generation, they have also been known to produce unfaithful or inappropriate content. In contrast, classic template-based systems provide strong guarantees of faithfulness at the cost of fluency. We propose TempLM, which achieves the best of both worlds by distilling a PLM into a template-based generator. On the E2E and SynthBio data-to-text datasets, we show that TempLM is more faithful than the original PLM and is more fluent than prior template systems. Notably, on an out-of-domain evaluation, TempLM reduces a finetuned BART model{'}s unfaithfulness rate from 83{\%} to 0{\%}. In a human study, we find that TempLM{'}s templates substantially improve upon human-written ones in BERTScore. | # Templm: Distilling Language Models Into Template-Based Generators
Tianyi Zhang, Mina Lee∗, Lisa Li∗, Ende Shen∗**, Tatsunori B. Hashimoto**
Computer Science Department, Stanford University
{tz58, minalee, xlisali, endeshen, thashim}@stanford.edu
## Abstract
While pretrained language models (PLMs)
have greatly improved text generation, they have also been known to produce unfaithful or inappropriate content. In contrast, classic template-based systems provide strong guarantees of faithfulness at the cost of fluency. We propose TempLM, which achieves the best of both worlds by distilling a PLM into a templatebased generator. On the E2E and SynthBio data-to-text datasets, we show that TempLM is more faithful than the original PLM and is more fluent than prior template systems. Notably, on an out-of-domain evaluation, TempLM reduces a finetuned BART model's unfaithfulness rate from 83% to 0%. In a human study, we find that TempLM's templates substantially improve upon human-written ones in BERTScore.
## 1 Introduction
Pretrained language models (PLMs; Brown et al.,
2020; Lewis et al., 2020) can generate fluent text and are data-efficient when being transferred to downstream tasks (Chen et al., 2020; Schick and Schütze, 2021). However, PLMs have been known to produce unfaithful outputs (Maynez et al., 2020)
and inappropriate content (Gehman et al., 2020)
that can lead to disastrous outcomes in real-world deployments (Wired, 2021). These errors can be worsened when models are queried with out-ofdomain (OOD) input. Figure 1 shows that querying a finetuned PLM with a novel entity (e.g. Starbucks) not in the training data can lead to surprising failures even though the PLM achieves high indomain performance. This poses a great challenge in deploying PLMs in real-world applications.
In stark contrast, classic template-based systems (Reiter and Dale, 1997; Barzilay and Lee, 2003; Angeli et al., 2010) employ templates consisting of words and nonterminal fields, which are robust to novel entities by design. Moreover, templates are directly readable by humans, and human
In domain : PLM generates high-quality output
![0_image_0.png](0_image_0.png)
Input data Output Text
![0_image_1.png](0_image_1.png)
Out of domain : PLM produces unfaithful output
Figure 1: A high-performance PLM finetuned on the E2E dataset generates unfaithful outputs when given out-of-domain inputs. We show later that BART produces such errors 83% of the time while TempLM never suffers from such failures.
inspection can provide direct guarantees of faithfulness. However, templates can be too rigid and produce disfluent text with unexpected inputs. In this work, we seek to borrow the merits of classic template-based techniques to improve faithfulness and interpretability, while retaining the PLM's flexibility and data efficiency.
We propose TempLM, a novel framework that distills a PLM into a template-based system for data-to-text tasks. At training time, TempLM extracts templates that maximally recover the induced probability distribution of the PLM, similar to model distillation (Hinton et al., 2015).
At inference time, TempLM uses the PLM to select appropriate data (content selection) and templates (surface realization).
While distilling a PLM into a template-based generator brings benefits, it also raises new challenges. Extracting templates that match a PLM's probability distribution is a challenging combinatorial optimization problem with no clear solution. Our approach relies on two new ideas. First, because our goal is to recover the PLM's induced probability distribution, TempLM initializes its search procedure by *delexicalizing* PLM's generation outputs, *i.e.* abstracting the value in the output with data fields. For example, we can delexicalize
"Aromi is a Chinese restaurant" into "[name] is a
[food] restaurant." Second, TempLM leverages the PLM's generation ability to refine templates, using a novel *consensus beam search* algorithm.
Unlike prior works (Wiseman et al., 2018), our approach can leverage any PLM to generate templates, allowing us to take advantage of improvements in the data efficiency and fluency of PLMs.
We evaluate TempLM on the E2E (Novikova et al., 2017) and the SynthBio datasets (Yuan et al.,
2021). We observe that TempLM is the most faithful generation method (with zero faithfulness errors) on the E2E in-domain test set. Furthermore, TempLM fixes the unreliable OOD behavior of PLMs, reducing the unfaithful output rate from 83% to 0%. In addition, we show that TempLM
achieves higher metric scores than classic text generation techniques and a previous hybrid neuraltemplate method (5 BLEU scores higher than Wiseman et al. (2018) even when trained with 42 times less data). We further conduct a human study where we ask annotators to write templates for SynthBio with a time constraint. We observe that TempLM
produces more fluent templates than both the average template writer and an ensemble aggregating all the template writers.
## 2 Related Works
PLMs for language generation. PLMs (Radford et al., 2019; Brown et al., 2020; Lewis et al., 2020) are pretrained over large scale text corpora and have significantly improved generation fluency and data efficiency. However, PLMs can still produce unreliable outputs, including hallucination (Maynez et al., 2020), inconsistency (Elazar et al., 2021),
toxicity (Gehman et al., 2020), or privacy violations (Carlini et al., 2021). TempLM addresses these shortcomings by distilling a PLM into a less expressive but more trustworthy template-based system, while retaining fluency and data efficiency.
Classic template-based methods. Classic template methods often delexicalize the training set data, *i.e.* they abstract the values in examples from the training data with the nonterminal data fields (Ratnaparkhi, 2002; Oh and Rudnicky, 2000; Rudnicky et al., 1999; Angeli et al., 2010). For example, "The restaurant name is Aromi" can be delexicalized into "The restaurant name is
[name]." However, delexicalization can be challenging for human-written text. When describing that the customer rating is "3 out of 5," human writers may paraphrase it into "3 stars" or "average."
Delexicalization has difficulties capturing this paraphrasing problem and often leaves lexicalized values in templates, which makes the templates less generalizable. In contrast, TempLM first finetunes a PLM on the data-to-text task and then exploits the PLM's ability in smoothing the text distribution to tackle the paraphrasing problem. This technique enables TempLM to generate more fluent outputs than classic template-based systems.
Hybrid neural generation methods. There have been many works that explore different ways to leverage intermediate representations/operations to guide neural generation, including designing an explicit planning module (Puduppully et al., 2019),
editing exemplar training examples (Wiseman et al., 2021), and inducing latent variables (Wiseman et al., 2018; Li and Rush, 2020; Ye et al.,
2020). Much like classic template-based methods, these systems attempt to learn structured representation from diverse human-written text, which is challenging and often requires heuristics for additional supervision. We differ from prior methods in two important aspects: first, TempLM's templates consist of terminal words and nonterminal fields, which make the templates robust and interpretable. Second, TempLM can leverage any PLM to generate templates, allowing us to take advantage of improved fluency and data efficiency brought by PLMs.
## 3 Templm: Template-Based Generators 3.1 Problem Statement
We are interested in data-to-text tasks (Figure 3), where we are given input data d, consisting of *field* and *value* pairs where a field may correspond to multiple values. For example, d could be {name: [Aromi, aromi],
article: [a, an]}, where name is a data field corresponding to multiple values "Aromi" and
"aromi". Note that we differ from common datato-text setups in allowing multiple data values and augmenting d with different capitalization and function words to accommodate for template systems.
Our task is to describe d by some text x generated by p(x|d). To this end, we want to learn a model pθ(x|d) using training examples (*x, d*). In the PLM approach, pθ is implemented by finetuning a PLM on (*x, d*), using standard log loss.
![2_image_0.png](2_image_0.png)
![2_image_1.png](2_image_1.png)
He died in a car accident on May 13, 1999 in Düren. Schneider was married to Regina Schneider, and the couple had no children.
Figure 3: Example of the SynthBio data-to-text task.
We are given Wikipedia-style data d about a person and are tasked with generating the biography x.
In template-based generation, we want to obtain a template set T consisting of templates t and ensure that for new input data d, we can generate a high-quality output x. We define a template t as a sequence of *terminal* tokens and *nonterminal* fields that can be replaced by their values in d. For example, a template "The restaurant name is [name]"
can be filled in as "The restaurant name is Aromi".
We represent the action of filling in a template t with data d as x = F(*t, d*).
A set of templates T captures the data distribution well if at least one template from t is highquality for every input d. We formalize this goal by stating that for a given input d, we are interested in maximizing maxt∈T log p(F(*t, d*)|d). Because we want templates to be inspectable by humans, we want to limit the size of T by a budget B, |T| ≤ B.
Putting these constraints together, we have the following optimization problem:
$$\operatorname*{argmax}_{T,|T|\leq B}\mathbb{E}_{d}[\operatorname*{max}_{t\in T}\;\log p(F(t,d)|d)].$$
What are the implications of Equation (1)? Equation (1) suggests that we would prefer **generalizable templates** such that a single t can be flexibly filled in so that log p(F(*t, d*)|d) is high for many different d. In practice, this means that our objective prefers templates with few or no *lexicalized* values. Compare the two templates, "The restaurant name is Aromi" versus "The restaurant name is [name]". Equation (1) would prefer the latter template because the first one does not work well when d describes a different restaurant name.
Although Equation (1) nicely captures our intuition of a generalizable template, it presents several optimization challenges. Equation (1) is a sizeconstrained combinatorial problem that does not have a clear solution. Analyzing the structure of Equation (1), we can decompose it into two separate maximization problems. First, we have the template extraction problem of identifying the best template set argmaxT,|T|≤B. Second, given a template set T, we have the **template inference**
problem of identifying the best template maxt∈T .
In the next two sections, we discuss how to leverage PLMs to solve these two problems respectively.
## 3.2 Template Extraction
The inherent challenge of template extraction is that human-written text in the form of x ∼ p(x|d)
may not follow a template structure. This is especially true when humans paraphrase the same data value differently, but it could also occur as humanwritten texts have complex syntactic structures that are not covered by templates. This linguistic diversity makes delexicalization, and more generally learning templates from x, extremely challenging.
$$(1)$$
Our objective in Equation (1) addresses this key problem. Maximizing log p(F(*t, d*)|d) is equivalent to asking for a template t to match at least one high probability sequence under p, rather than matching all high probability sequences, as is typical in delexicalization or latent-variable based template models. While this approach resolves the paraphrasing problem, it relies upon the true datagenerating probability p(F(*t, d*)|d) which we cannot evaluate. Therefore, we propose to approximate p with a PLM pθ. This amounts to treating pθ as the ground truth optimization target, similar to model distillation (Hinton et al., 2015).
While targeting pθ makes the optimization problem easier, Equation (1) is still intractable because of its difficult combinatorial structure. We design a series of approximations to circumvent the optimization difficulty (Figure 2).
Clustering. Suppose we can obtain the optimal template set T∗ = {t∗1
, . . . , t∗
i
, . . . , t∗B}. Then we can identify a cluster function C∗ where C∗(d) = i returns the index of the optimal template t∗
i for example d. With C∗, we can decompose Equation (1)
into B subproblems that are easier to solve,
$$\operatorname*{argmax}_{t_{i}}\quad\underset{d\text{s.t.}C^{\star}(d)=i}{\mathbb{E}}[\log p_{\theta}(F(t_{i},d)|d)].\quad\text{(2)}$$
While obtaining C∗is impossible, we can design approximate clusters C based on the presence of different fields, as is standard in other data-to-text methods (Wiseman et al., 2021).
Delexicalizing PLM outputs. Equipped with approximate clusters C, how can we find templates that work for all examples in the same cluster?
Because we are optimizing for pθ, one natural starting point is to delexicalize the model beam search output xθ. We denote t delex θ(d) as the template we obtain from delexicalizing the PLM output xθ of the input d and denote T
delex θ(d) as the corresponding template set.
Delexicalizing xθ also allows us to be more data efficient and robust. This is because obtaining T
delex θ(d) only requires unlabeled inputs d as opposed to requiring full supervision (*x, d*). Obtaining unlabeled data for out-of-domain inputs is substantially easier, and this allows us to exploit data beyond the training set. In practice, we perform data recombination (Jia and Liang, 2016) to not only increase the quantity of d but also explore more field and value compositions.
Template validation via PLM probabilities.
While T
delex θ(d) provides a good initial template
Algorithm 1 **Consensus Beam Search**
k: beam size, M: maximum length
V: terminal tokens, VT : nonterminal fields
N: number of inputs
t
′: partial template where ungeneralizable spans are removed
![3_image_0.png](3_image_0.png)
Algorithm 1 : We search for a common constituent y that can be infilled to all partial descriptions x′i
. In contrast to conventional beam search, we aggregate the log probability scores across different inputs at each step (Line 6 to Line 14). To generate nonterminal fields
(*e.g.* [name]), we account for how they will be filled in with different input d′i in Line 11.
set, some of these templates may contain a substantial number of lexicalized data values. To remove these less generalizable templates and fulfill the template budget constraint B, we want to filter the template set T
delex θ(d). We leverage the PLM's probability estimates to evaluate the template *generalizability*, defined as a template's average log probability over the entire cluster. For a template generated by delexicalizing d, this objective can be written as
X d′ s.t. C(d′)=C(d) log pθ(F(t delex θ(d), d′)|d ′). (3)
where d′are examples sampled from the same data cluster, C(d′) = C(d). Equation (3) assigns a scalar value to each t delex θ(d) that we use to filter out any ungeneralizable templates. In practice, we retain the top-K best templates in each cluster to form the template set.
Template Refinement via Consensus Beam Search. If a template contains only a few lexicalized values, we can further identify these spans using a token-level version of Equation (3) and then replace ungeneralizable spans by executing a search algorithm with Equation (3) as the objective.
To identify the ungeneralizable spans, we begin by evaluating the token-level equivalent to Equation (3) (see Appendix A.1 for details). We then aggregate these token-level scores into a constituentlevel score using a constituency parser, and mark any constituent whose score is lower than a threshold as ungeneralizable. To salvage these ungeneralizable spans, we leverage a PLM to optimize for Equation (3) directly.
We remove the ungeneralizable spans to form partial template x′and learn an infilling model p infill θ(x|x′, d) to replace the ungeneralizable spans.
We implement p infill θby finetuning a different PLM
and present the details in Appendix B.3.
There are two challenges we face in optimizing Equation (3). First, the infilling model p infill θ is learned to generate text, not templates. Second, Equation (3) is an unusual objective in text generation that is a mixture-of-experts of many language models where each model conditions on some input d′. We propose two modifications to the standard beam search algorithm to address these challenges
(Algorithm 1). First, we empower the infilling model p infill θ with the ability to generate nonterminal data fields and define their scores based on how they will be filled in (Line 11). Second, we search for a common output that is the "consensus" of many inputs d′ by aggregating the log probability scores across inputs at each decoding step (Line 6 to Line 14). Empirically, we find that template refinement can correct for errors in the earlier steps by removing lexicalized values or incorrect fields in the template. We present a qualitative study of template refinement in Appendix B.3.
Human Inspection and Validation. Once templates are refined, we save them as an internal part of TempLM and use them for template inference at test time. To obtain an even stronger faithfulness guarantee, we can have human inspectors validate each template. TempLM offers two advantages for such human-in-the-loop inspection.
First, templates in TempLM are readable by humans. Second, TempLM by design has limited freedom during inference: an output can only be generated from filling in a template with input data.
As long as none of the templates contains hallucination or inconsistency, TempLM will be guaranteed to return a faithful output. The combination of interpretability and restricted output space enables a natural interface for human-in-the-loop cooperation, where a human inspector can sanitize all the templates before deploying TempLM into realworld applications.
## 3.3 Templm Template Inference
Given the template set T that we extracted, we now need to solve the problem of identifying the best template maxt∈T for a new input d. In TempLM,
we leverage PLMs as a core primitive in both the content selection and surface realization steps.
Content Selection requires us to substitute a nonterminal field with the most appropriate value among the multiple values that a field corresponds to. We perform this step using a left-to-right autoregressive PLM. At each decoding step, we directly copy from t when encountering a terminal word; otherwise, we select the most probable data value to replace a field. PLMs are typically trained with byte-pair encoding (Sennrich et al., 2016), which might break up data values into multiple tokens.
Performing an exact search involves computing the probability of each multi-token value by additional roll-outs, which slows down inference. We circumvent this problem by performing a greedy search on the first token, which leads to faster or on-par inference time with standard PLM inference.
Surface Realization requires us to select the most appropriate output after templates are filled in. We perform this step by computing F(*t, d*) for all templates in the same cluster C(d) and returning the one with the highest pθ(F(*t, d*)|d).
## 4 Experiments
We evaluate TempLM's ability to generate faithful and fluent text in three settings: an in-domain evaluation on standard data-to-text benchmarks, an out-of-domain evaluation that stress tests the ability to generalize to novel inputs, and a human study comparing TempLM's template extraction ability to that of human template writers.
## 4.1 Experiment Setup
Datasets. We consider two data-to-text datasets:
E2E (Novikova et al., 2017) and SynthBio (Yuan et al., 2021). The E2E dataset contains data entries about restaurants and asks for text descriptions of restaurant data. Originally, the E2E dataset contained 42K training samples with eight distinct fields and 109 field combinations. To better evaluate data efficiency and faithfulness, we downsample the training set to ten samples per field combination. Results on the full E2E dataset are similar and are shown in Appendix B.3. We evaluate on the official validation and test sets.
SynthBio asks systems to write biographies based on Wikipedia-style data tables and was originally proposed as an evaluation set for WikiBio (Lebret et al., 2016). Because WikiBio is a noisy dataset created by automatic retrieval and contains pervasive hallucinations, we decided to use SynthBio instead, by splitting it into training, validation, and test sets, and evaluate on the test set. We summarize the dataset statistics in Table 5.
Evaluation Metrics. We evaluate the fluency of the generated outputs by reference-based evaluation. For E2E, we use the official toolkit and evaluate in terms of BLEU (Papineni et al., 2002),
NIST (Belz and Reiter, 2006), ROUGE-L (Lin and Rey, 2004), CIDEr (Vedantam et al., 2015),
and METEOR (Banerjee and Lavie, 2005). For SynthBio, we evaluate by BLEU, ROUGE-L, and BERTScore (Zhang et al., 2020).
On the E2E dataset, we also evaluate the faithfulness of a system output. We define an output description to be faithful if it does not contradict the input data or hallucinate information not present in the input. To automatically evaluate this, we manually inspected system output descriptions in the validation set and collected common paraphrases of each possible data value. For example, a customer rating of "3 out of 5", may appear as "3 stars", "average", etc. This allows us to develop a matching-based metric: we count precision error Eprecision when a piece of system output contains any paraphrase that matches with a value not in the input (hallucination) or a value different from the one provided in the input (inconsistency).
Note that Eprecision is a conservative metric.
When we encounter novel phrasings that do not match any entry in our phrasing collection, we do not count them toward Eprecision. We present more implementation details in Appendix B.2. For template-based methods, we reuse the same routine to measure the percentage of templates that contain lexicalized values (%. Lex. Temp), which measures the generalizability of the templates. We calculate an analogous recall-oriented metric Erecall and provide the results in Appendix B.3. We focus on Eprecision instead of Erecall, as E2E does not require systems to verbalize every value in d.
Implementing **TempLM.** We implement pθ(x|d)
and the infilling model pθ(x|x′, d) by finetuning BARTBASE (Lewis et al., 2020). On E2E, we assign training samples that have the same combination of fields into the same cluster, which results in 109 clusters. We use data recombination (Jia and Liang, 2016) to combinatorially create 50 samples for each cluster and thereby increase the training data size by five times for template extraction. We define the target number of templates per cluster for TempLM to be five, which results in around 500 templates after deduplication. On SynthBio, we cluster data by the "occupation" field, which results in eight clusters, and we set the TempLM's budget to be ten templates per cluster. We do not perform any data augmentation for SynthBio. More training details are described in Appendix B.2.
Baselines. We compare to three classes of baselines. To compare to existing PLMs, we evaluate a finetuned BARTBASE model and a KGPT model (Chen et al., 2020), which improves a LM by knowledge-grounded pretraining.
For classic template systems that delexicalize training samples, we compare to TempClassic, which delexicalizes the training data but uses our PLM based inference procedure. We also compare to the SUB baseline (Wiseman et al., 2018), which replaces the PLMs based inference in TempClassic with a rule-based procedure.
For recent hybrid neural-template methods, we compare to the NTemp method (Wiseman et al.,
2018). As we were unable to obtain good performance by NTemp on the downsampled training set, we evaluate the model trained on the full E2E
training set.
Finally, we performed ablation studies by removing the template refinement (- Refinement)
and template validation (- Validation) components from TempLM.
## 4.2 In-Domain Experiment
Table 1 shows that on E2E and SynthBio, TempLM
is more faithful than BART while achieving higher metric scores than other template-based methods.1 TempLM **is faithful.** TempLM is the only method that achieves *zero* Eprecision across validation and test sets. This improvement over BART suggests TempLM's usefulness in practice. For real-world deployments, we can further leverage human in1We present other metric scores and validation set results in Appendix B.3.
| Eprecision ↓ | BLEU↑ | ROUGE-L↑ | |
|--------------------|--------------|------------|------------|
| BART | 6.0 ± 2.9 | 66.2 ± 0.5 | 68.4 ± 0.7 |
| TempLM | 0.0 ± 0.0 | 61.5 ± 1.0 | 64.5 ± 0.8 |
| KGPT | 8 | 58.41 | 63.93 |
| Neighbor Splicing∗ | 543 | 24.12 | 37.46 |
| NTemp† | 7 | 55.17 | 65.70 |
| TempClassic | 46.7 ± 25.4 | 52.1 ± 2.0 | 62.2 ± 2.3 |
| SUB | 110.7 ± 36.2 | 45.3 ± 1.9 | 55.6 ± 2.4 |
| BLEU↑ | BERTScore F1↑ | |
|-------------|-----------------|------------|
| BART | 40.8 ± 0.2 | 55.2 ± 0.1 |
| TempLM | 40.3 ± 0.3 | 54.3 ± 0.1 |
| TempClassic | 36.6 ± 0.2 | 48.8 ± 0.1 |
| SUB | 14.1 ± 0.1 | 18.9 ± 0.1 |
| E2E | SynthBio | | | | | |
|--------------|----------------|------------|------------|---------------|------------|------|
| Eprecision ↓ | %. Lex. Temp ↓ | BLEU↑ | #. Temp ↓ | BLEU↑ | #. Temp ↓ | |
| TempLM | 0.0 ± 0.0 | 5.2 ± 1.2 | 61.5 ± 1.0 | 471.7 ± 62.9 | 40.3 ± 0.3 | 80 |
| - Refinement | 0.0 ± 0.0 | 12.1 ± 1.3 | 61.4 ± 0.9 | 534.3 ± 8.5 | 35.2 ± 0.9 | 80 |
| - Validation | 2.7 ± 2.2 | 21.4 ± 2.6 | 64.0 ± 1.0 | 2047.3 ± 43.7 | 36.4 ± 0.1 | 1511 |
| TempClassic | 46.7 ± 25.4 | 37.4 ± 0.5 | 52.1 ± 2.0 | 978.3 ± 1.2 | 36.6 ± 0.2 | 1511 |
Eprecision ↓ %. Lex. Temp ↓ BLEU↑ #. Temp ↓ BLEU↑ #. Temp ↓
TempLM 0.0 ± 0.0 5.2 ± 1.2 61.5 ± 1.0 471.7 ± 62.9 40.3 ± 0.3 80
- Refinement 0.0 ± 0.0 12.1 ± 1.3 61.4 ± 0.9 534.3 ± 8.5 35.2 ± 0.9 80
- Validation 2.7 ± 2.2 21.4 ± 2.6 64.0 ± 1.0 2047.3 ± 43.7 36.4 ± 0.1 1511 TempClassic 46.7 ± 25.4 37.4 ± 0.5 52.1 ± 2.0 978.3 ± 1.2 36.6 ± 0.2 1511
Table 2: Ablation results averaged over three random seeds on different template-based systems. We bold the best
numbers in each column and show standard errors with error bars. TempLM extracts most generalizable templates and achieves good performance with a small number of templates.
spection to sanitize TempLM's template set, which allows us to remove any lexicalized values in the templates and obtain a strict guarantee for TempLM's faithfulness. In contrast, TempClassic produces almost eight times more precision errors than BART (46 vs. 6), which shows the difficulty of inducing templates over human-written text.
TempLM **is fluent and data-efficient.** We observe that on E2E, TempLM achieves higher metric scores than other baselines except BART, and on SynthBio, TempLM even performs similarly to BART despite using the less expressive template representation. This demonstrates that TempLM
achieves better fluency than previous template methods and is competitive with neural methods.
In addition, TempLM retains the data efficiency of PLMs. In particular, TempLM achieves a significant 5 BLEU score improvement over NTemp, which is trained with much more data (1090 vs.
42K training samples). In contrast, the state-ofthe-art method Neighbor Splicing cannot do well when trained with only 1090 data points.
TempLM **enables trade-offs between fluency,**
robustness, and interpretability. We designed TempLM to have a small number of templates to make TempLM more conducive to human inspection. TempLM successfully achieves this, using less than 500 templates for E2E and only 80 templates for SynthBio. Comparing TempLM without Refinement and TempLM without Validation, we find that template validation reduces the number of templates and substantially increases reliability (halving the percentage of templates containing lexicalized values), but may incur a minor performance drop in fluency.
We find that the template structure is simpler on E2E, and refinement does not add substantial benefit. However, on Synthbio refinement is critical to reversing the performance drop and results in a 4 BLEU score gain. Upon inspection, we find that template refinement can accurately remove ungeneralizable spans in the longer and more complicated templates, which is necessary for SynthBio.
Overall, we find that TempLM ensures faithfulness, retains the PLM's fluency and data efficiency, and balances between performance and interpretability. In the following sections, we go beyond automatic in-domain evaluation. We first stress test systems with out-of-domain inputs and perform a human study to showcase the difficulty of template extraction.
## 4.3 Out-Of-Domain Experiment
Models deployed in real-world applications need to be robust to test distributions different from the training distribution. To test for out-of-domain
| Unfaithful Output Rate (%) | |
|------------------------------|------|
| BART | 83.3 |
| KGPT | 16.6 |
| Neighbor Splicing | 100 |
| TempLM | 0 |
| BERTScore F1 | | |
|----------------|------------|------------|
| Human | 51.3 ± 2.3 | |
| Human | 54.0 | |
| Writer | Ensemble | |
| Cluster | BART | 58.5 ± 0.2 |
| TempLM | 58.8 ± 1.0 | |
| Human | 42.2 ± 4.4 | |
| Human | 48.5 | |
| Spy | Ensemble | |
| Cluster | BART | 55.3 ± 0.1 |
| TempLM | 50.8 ± 0.9 | |
(OOD) generalization, we simulate such a setting on E2E by testing models with entities that are not seen during training.
We create our OOD evaluation by taking fields in E2E (area, eatType, food, name, near) and filling in common entities scraped from the internet to create 54 novel examples. For instance, we create examples like {area: Central Park, name:
McDonald's, ...}. We inspect the system outputs manually to check the correctness and present the results in Table 3. We observe that outputs from other systems produce are frequently unfaithful, often confusing entities from different types. In the previous example, BART mistakenly outputs "Central park is a restaurant ...", confusing area with name. In contrast, TempLM is robust to novel inputs and does not produce any unfaithful outputs. We provide the list of novel entities used in creating OOD input and more qualitative examples in Appendix B.4.
## 4.4 Human Study
To demonstrate the difficulty of generating templates, we conduct a human study on two clusters of the SynthBio dataset. We recruited ten volunteers from our institution to be our template writers and assigned five writers to work on each cluster.
Each template writer was given thirty minutes to write templates, and they wrote eleven templates on average. We presented them the same data that TempLM operated on: roughly 200 training examples per cluster, including the input data d and associated text x. We include our human study instruction and interface in Appendix B.5.
To evaluate human performance, we used the human-written templates in our LM-based inference pipeline and measured automatic metric scores. Table 4 shows the BERTScore F1 for both the average template writer as well as an ensemble of five template writers. We report other metric scores in Appendix B.5. We observe that the templates extracted by TempLM lead to better performance than the human-written ones, indicating the intrinsic difficulty of template writing. Based on observing template writers during the writing process, we found that a common strategy is to first go through a subset of the training examples and then find canonical examples to delexicalize. However, we identified a few shortcomings. First, our writers typically only read a few examples (approximately 5 to 20) before they exhaust their cognitive load.
As a result, some writers fail to write templates that capture the less common examples. Second, our volunteers may fail to pick the more canonical examples and choose to delexicalize examples that are not the most generalizable. Although welltrained template writers with domain knowledge might have written better templates, the difficulty in identifying such distributional characteristics remains true for any sizable data.
## 5 Conclusion And Future Work
We propose TempLM, a novel framework for distilling PLMs into template-based systems.
TempLM is designed to achieve better robustness and interpretability while inheriting the fluency and data efficiency of PLMs. Our evaluations show that TempLM can completely eliminate the unfaithful outputs produced by a finetuned BART model for out-of-domain inputs. On in-domain evaluation, TempLM is able to produce more fluent outputs compared to classic template systems, prior neuralhybrid template methods, and even human template writers. In the future, we look forward to extending the TempLM framework to learn compositional templates and grammars, as well as improving its coverage to diverse outputs, potentially via paraphrases of its input data.
## Limitations
Our system distills PLMs into a less expressive but trustworthy set of templates. In developing this method, we explicitly trade off linguistic diversity for faithfulness guarantees. While this approach works well on academic benchmarks, in more complicated real world settings sacrificing linguistic diversity may impact different groups to a different extent. This raises the question of fairness and we hope to investigate such problems on more realistic datasets in future work.
## References
Gabor Angeli, Percy Liang, and Dan Klein. 2010. A
simple domain-independent probabilistic approach to generation. In *Proceedings of the 2010 Conference* on Empirical Methods in Natural Language Processing, pages 502–512, Cambridge, MA. Association for Computational Linguistics.
S. Banerjee and A. Lavie. 2005. METEOR: An automatic metric for mt evaluation with improved correlation with human judgments. In *Association for* Computational Linguistics (ACL).
Regina Barzilay and Lillian Lee. 2003. Learning to paraphrase: An unsupervised approach using multiplesequence alignment. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 16–23.
Anja Belz and Ehud Reiter. 2006. Comparing automatic and human evaluation of NLG systems. In 11th Conference of the European Chapter of the Association for Computational Linguistics.
T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. 2020. Language models are few-shot learners. *arXiv preprint arXiv:2005.14165*.
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, and Colin Raffel. 2021. Extracting training data from large language models. In 30th USENIX Security Symposium (USENIX Security 21), pages 2633–2650.
Wenhu Chen, Yu Su, Xifeng Yan, and William Yang Wang. 2020. KGPT: Knowledge-grounded pretraining for data-to-text generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages
8635–8648. Association for Computational Linguistics (ACL).
Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. 2021. Measuring and improving consistency in pretrained language models. *Transactions of the Association for Computational Linguistics*, 9:1012–1031.
Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. 2020. RealToxicityPrompts: Evaluating neural toxic degeneration in language models. pages 3356–3369.
Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean.
2015. Distilling the knowledge in a neural network.
ArXiv, abs/1503.02531.
Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Association for Computational Linguistics (ACL), pages 12–22. Association for Computational Linguistics.
Nikita Kitaev and Dan Klein. 2018. Constituency parsing with a self-attentive encoder. In Association for Computational Linguistics (ACL), pages 2676–2686.
Rémi Lebret, David Grangier, and Michael Auli. 2016.
Neural text generation from structured data with application to the biography domain. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1203–1213.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart:
Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Association for Computational Linguistics*
(ACL).
Xiang Lisa Li and Alexander Rush. 2020. Posterior control of blackbox generation. In Association for Computational Linguistics (ACL), pages 2731–2743.
C. Lin and M. Rey. 2004. Looking for a few good metrics: ROUGE and its evaluation. In *NTCIR Workshop*.
Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. In *Association for* Computational Linguistics (ACL), pages 1906–1919.
Jekaterina Novikova, Ondˇrej Dušek, and Verena Rieser.
2017. The E2E dataset: New challenges for end-toend generation. In *Special Interest Group on Discourse and Dialogue (SIGDIAL)*, pages 201–206.
Alice H. Oh and Alexander I. Rudnicky. 2000. Stochastic language generation for spoken dialogue systems.
In *ANLP-NAACL 2000 Workshop: Conversational* Systems.
K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2002.
BLEU: A method for automatic evaluation of machine translation. In Association for Computational Linguistics (ACL).
Ratish Puduppully, Li Dong, and Mirella Lapata. 2019.
Data-to-text generation with content selection and planning. *AAAI Conference on Artificial Intelligence*.
A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. 2019. Language models are unsupervised multitask learners. *OpenAI Blog*, 1(8).
A. Ratnaparkhi. 2002. Trainable approaches to surface natural language generation and their application to conversational dialog systems. Computer Speech &
Language., 16:435–455.
Ehud Reiter and Robert Dale. 1997. Building applied natural language generation systems. *Natural Language Engineering*, page 57–87.
Alexander I. Rudnicky, Eric H. Thayer, Paul C. Constantinides, Chris Tchou, Rande Shern, Kevin A. Lenzo, W. Xu, and Alice H. Oh. 1999. Creating natural dialogs in the carnegie mellon communicator system.
In *EUROSPEECH*.
Timo Schick and Hinrich Schütze. 2021. Few-shot text generation with natural language instructions. In Empirical Methods in Natural Language Processing.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016. Neural machine translation of rare words with subword units. In Association for Computational Linguistics (ACL), pages 1715–1725.
R. Vedantam, C. L. Zitnick, and D. Parikh. 2015. CIDEr:
Consensus-based image description evaluation. In Computer Vision and Pattern Recognition (CVPR),
pages 4566–4575.
Wired. 2021. It began as an ai-fueled dungeon game.
it got much darker. Https://www.wired.com/story/aifueled-dungeon-game-got-much-darker/.
Sam Wiseman, Arturs Backurs, and Karl Stratos. 2021.
Data-to-text generation by splicing together nearest neighbors. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 4283–4299.
Sam Wiseman, Stuart Shieber, and Alexander Rush.
2018. Learning neural templates for text generation.
In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 3174–3187, Brussels, Belgium. Association for Computational Linguistics.
Rong Ye, Wenxian Shi, Hao Zhou, Zhongyu Wei, and Lei Li. 2020. Variational template machine for datato-text generation. In International Conference on Learning Representations.
Ann Yuan, Daphne Ippolito, Vitaly Nikolaev, Chris Callison-Burch, Andy Coenen, and Sebastian Gehrmann. 2021. Synthbio: A case study in faster curation of text datasets. In *Thirty-fifth Conference* on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q.
Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations.
## A Additional Details On Template Refinement A.1 Token-Level Generalizability Measure
Our goal is to identify a set of generalizable templates given a budget B such that a single t can be flexibly filled in so that log pθ(F(*t, d*)|d) is high for many different examples d. Equation (3) does this exactly:
we fill in a single template t with many other examples d from the same cluster and measure the sum of their log probabilities. We want to generalize Equation (3) to a token-level generalizability measure, which tells us which tokens within a template t will receive a high probability after the template is filled in with new data. Our idea is to align tokens in the template with tokens in the output and aggregate the corresponding token probabilities across many different outputs.
Let us use j as the token index and denote xj as the jth token in an output text x and tj as the jth token in a template t. We use x:j to represent the prefix up to the jth token in x and analogously defined t:j .
We leverage an alignment function A(*t, d, j*), where F(t, d)A(*t,d,j*) gives the token that corresponds to tj after t is filled in. The alignment A handles the discrepancy in length that is caused by the template fill-in process because the fill-in function F substitutes nonterminal fields with various length data given in d.
With the help of A, we can define the token-level generalizability for a token tj as,
$$d^{\prime}\operatorname{s.t.}{\frac{\sum_{\mathbf{\ell}}}{C(d^{\prime}){=}C(d)}}{\big[}\log p_{\theta}(F(t_{\theta}^{\operatorname*{de}^{\mathrm{1}\,\mathrm{ex}}}(d)_{A(t,d,j)},d^{\prime})|F(t^{\operatorname*{de}^{\mathrm{1}\,\mathrm{ex}}},d^{\prime})_{\theta}(d)_{:A(t,d,j)}{\big]}.$$
$\zeta_{\phi}$
delex, d′)θ(d):A(*t,d,j*)]. (4)
Equation (4) provides a token-level measure, which we can easily turn into a span-level measure by calculating the joint token-level probability. We use this idea to calculate the generalizability of nonterminal fields that correspond to values of multiple tokens. Equation (4) gives us an useful tool for telling which tokens are ungeneralizable and we can then leverage the generation ability to replace these tokens by directly optimizing Equation (4).
Now that we formalize token-level generalizability with Equation (4), our plan is to iteratively remove ungeneralizable spans and use an infilling model to generate new template spans. We can decompose this procedure into two subproblems: removing ungeneralizable spans and generating new template spans. We discuss them in the next two sections, respectively.
## A.2 Removing Ungeneralizable Spans
The key problem we want to solve in span removal is to group multiple ungeneralizable tokens together and remove them at the same time. This is because if we remove ungeneralizable tokens one at a time, we would still condition on other ungeneralizable tokens, which deteriorates performance in practice. We leverage constituency parsing (Kitaev and Klein, 2018) to solve this problem. For each constituent in the parse tree, we calculate Equation (4) for each token in the constituent and compute the average. We set a threshold and remove all constituents whose generalizability measure is worse than this threshold.
## A.3 Generating Template With Consensus Beam Search
We refer to Section 3.2 for the description of our template generation process. In Algorithm 1, we rely on the subroutine di.get(·), which gives us the best data value among the multiple options in d for a nonterminal field. Implementing this subroutine exactly requires us to evaluate all data values at each decoding step, which is computationally expensive. In practice, we perform a greedy selection based on the first token in each data value.
## B Additional Details On Experiments B.1 Dataset Details
We include the dataset statistics of SynthBio and subsampled E2E datasets in Table 5.
## B.2 Model Training Details
Left-to-right Autoregressive LM. We finetune a BARTBASE model to implement pθ(x|d). On the downsampled E2E dataset, we train for 10 epochs for a batch size of 16 and a learning rate of 3 × 10−5.
1980
| # Train | Average Length | # Fields | |
|-----------|------------------|------------|----|
| E2E | 1090 | 19.8 | 8 |
| SynthBio | 2896 | 93.1 | 78 |
![11_image_0.png](11_image_0.png)
![11_image_1.png](11_image_1.png)
| Table 5: Statistics of SynthBio and the downsampled E2E dataset. Data Field Data Value article a, an be is, are, was, were one, two, three, four, five, number six, seven, eight, nine, ten pronoun_a he, she, they pronounce_b him, her, them pronounce_c his, her, their relation son, daughter |
|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
We train with half precision using the huggingface implementation. On SynthBio, we train for 5 epochs for a batch size of 8 and a learning rate of 3 × 10−5. We train with half precision using the huggingface implementation.
Infilling LM. We train our infilling models by masking a random 0 to 10 word span and predicting the masked out span. We finetune a BARTBASE model to implement pθ(x|x′, d). On the downsampled E2E
dataset, we train for 50 epochs for a batch size of 16 and a learning rate of 3 × 10−5. We train with half precision using the huggingface implementation. On SynthBio, we train for 20 epochs for a batch size of 16 and a learning rate of 3 × 10−5. We train with half precision using the huggingface implementation.
TempLM. On E2E, we cluster based on field combination. In total, we have 109 clusters and in each cluster, we have 10 training samples. We perform data recombination to create 50 examples for each cluster. Our template validation selects the top 5 templates and performs template refinement on these templates. Our template refinement process uses −2 log probability as a threshold for removing ungeneralizable spans.
## B.3 In-Domain Evaluation
Additional Details for Experiment Setup. On E2E, the familyFriendly field is a binary field with values being either "yes" or "no". To accommodate template-based generation, we replace "yes" with "family friendly" and "family-friendly" and replace "no" with "not family friendly" and "not family-friendly". We augment E2E input d with article words [article: [a, an]].
On SynthBio, we augment inputs with values listed in Table 6. For article, be, and number, we include them as multiple value options in the input. For pronouns and relations, we assign the correct value based on the gender field in the input. We parse all dates into day, month, and year and create separate fields to support different data formats in the templates.
Implementation of Faithfulness Evaluation. We present the phrasing collection we used for matching output in Table 7 and Table 8. We use this phrasing collection to perform a matching based faithfulness evaluation. We consider a phrase in an output to have a precision error if it matches with a field and value pair that is not present in the input data. We consider an output as having recall error Erecall if we cannot identify any phrase in the output that corresponds to some field and value pair in the input data
![12_image_0.png](12_image_0.png)
Because our phrasing collection is imperfect and alternative phrasing may exist, we expect Eprecision to be an underestimate and Erecall to be an overestimate of actual errors.
Additional Results for Section 4.2.
We present a full set of metrics scores for subsampled E2E and SynthBio in Table 9 and Table 10. We make similar observations as in Section 4.2 : first, TempLM is the most faithful system on E2E, never producing any precision error; second, TempLM is more fluent than other template systems, achieves better scores with the most of the metrics (BLEU, NIST, CIDEr), and on-par scores with METEOR and ROUGE-L.
We carry out the same experiment on E2E with models trained on the full dataset and present the results in Table 11. We observe that similar to TempLM is the only model that never produces unfaithful on both the test set and the validation set. BART becomes more faithful with more training data. Similar to the experiments on the subsampled training set, TempLM achieves better fluency than NTemp and SUB. One different observation from Table 11 is that TempClassic achieves much better fluency and faithfulness.
This is because by leveraging the full training data, TempClassic obtains a large number of templates
(39964). While using a large number of templates is helpful, it makes PLM-based inference infeasibly slow, requiring hours of computation to perform inference on the test and validation sets. Having many templates also makes the template set less interpretable by human inspectors. Therefore, we consider TempClassic an impractical baseline.
Qualitative Examples of Template Refinement. To better explain the inner workings of TempLM, we visualize one example of refinement in Figure 4. We color each word according to its generalizability, measured by a token-level generalizability (see Appendix A.1). From Figure 4, we first observe that our generalizability measure is reliable, successfully distinguishing the lexicalized value "south korea" and disfluent span "married" from the rest of the template. Second, we observe that the refinement step correctly fixes both errors by replacing "south korea" with more generalizable, nonterminal fields and inserting "was" to fix the grammatical error. Figure 4 demonstrates the effectiveness of template refinement and helps explain why refinement leads to a substantial performance gain on SynthBio in Table 2.
From Figure 4, we also observe that the words after "and" often appear less generalizable. This is because there are many alternative "branches" that could continue the prefix in these positions and each alternative option will receive a lower probability under a left-to-right PLM p 0 ( x | d ). We find that the infilling PLM p θ ( x ′ , d ) is robust to these false positives and typically will leave these spans unchanged.
This illustrates the benefits of combining a left-to-right and an infilling PLMs in template refinement.
## Out-Of-Domain Evaluation B.4
Table 12 displays the list of entities we used for creating the 54 OOD examples we used in our evaluation.
Table 13 shows example outputs from the BART model finetuned on the downsampled E2E data with OOD input. We find that BART often confuses the entity in the area field with name or ignores the input value and hallucinates "city centre."
## B.5 Human Study
We present a full list of metric scores that we used to evaluate our human study in Table 14. We have similar observations as in Section 4.4 that TempLM extracts more fluent templates than our template writers. We append our instructions for template writers and screenshots of our interface to the end of this document.
| field | value | phrasing Fast food |
|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| food | Fast food | fast food |
| is family friendly is kid friendly is children friendly is family-friendly is child friendly is a family friendly is a kid friendly is a children friendly is a family-friendly is a child friendly for a family friendly for a kid friendly for a children friendly for a family-friendly for a child friendly | | |
| familyFriendly | yes | not family friendly not kid friendly not children friendly not family-friendly not child friendly non family-friendly non-family-friendly non family friendly non-family friendly non children friendly non child friendly |
| familyFriendly | no | 1 out of 5 |
| low customer rating one star 1 star | | |
| customer rating | 1 out of 5 | 3 out of 5 |
| customer rating is average average customer rating three star moderate customer rating 3 star | | |
| customer rating | 3 out of 5 | 5 out of 5 |
| high customer rating five star 5 star | | |
| customer rating | 5 out of 5 | |
| field | value | phrasing 5 out of 5 |
|---------------------------------------------------------------------------------------------------------------|---------------|--------------------------------------------------------------------------|
| high customer rating five star 5 star | | |
| customer rating | high | 3 out of 5 |
| customer rating is average average customer rating three star 3 star | | |
| customer rating | average | 1 out of 5 |
| low customer rating one star 1 star | | |
| customer rating | low | less than £20 cheap low price range low-priced low priced |
| priceRange | less than £20 | £20-25 |
| moderate price range average price range moderately priced moderate prices average priced | | |
| priceRange | £20-25 | more than £30 high price range high priced expensive price range is high |
| priceRange | more than £30 | low price range |
| priceRange | low | low-priced cheap |
| priceRange | cheap | low price range low priced |
| moderate price range moderately priced price range is moderate moderate prices average prices | | |
| priceRange | moderate | high price range high priced expensive price range is high |
| priceRange | high | |
| Table 8: A collection of common paraphrases of given input data. We use this phrasing collection to perform a | | |
Table 8: A collection of common paraphrases of given input data. We use this phrasing collection to perform a matching-based faithfulness evaluation for E2E. The first half of this table is in Table 7.
Split Methods BLEU↑ NIST↑ METEOR↑ ROUGE-L↑ CIDEr↑ Eprecision ↓ Erecall ↓
Test
BART 66.2 ± 0.5 8.5 ± 0.0 43.1 ± 0.2 68.4 ± 0.7 2.2 ± 0.0 6.0 ± 2.9 376.3 ± 48.1
TempLM 61.5 ± 1.0 8.0 ± 0.1 41.0 ± 0.8 64.5 ± 0.8 2.1 ± 0.1 0.0 ± 0.0 471.7 ± 62.9
NTemp† 55.17 7.14 41.91 65.70 1.70 7 539
TempClassic 52.1 ± 2.0 7.3 ± 0.1 41.7 ± 1.0 62.2 ± 2.3 1.9 ± 0.1 46.7 ± 25.4 451.7 ± 36.9
SUB 45.3 ± 1.9 6.9 ± 0.2 40.0 ± 0.2 55.6 ± 2.4 1.4 ± 0.1 110.7 ± 36.2 421.0 ± 12.7
Valid.
BART 70.8 ± 0.7 8.3 ± 0.1 47.0 ± 0.1 72.8 ± 0.2 2.4 ± 0.0 5.0 ± 1.5 182.0 ± 11.8
TempLM 64.8 ± 0.6 8.0 ± 0.0 43.1 ± 0.4 67.8 ± 0.2 2.2 ± 0.0 0.0 ± 0.0 308.7 ± 4.3
NTemp† 64.53 7.66 42.46 68.60 1.82 7 539
TempClassic 52.2 ± 0.6 7.2 ± 0.0 40.9 ± 0.2 60.7 ± 0.9 1.7 ± 0.0 92.7 ± 6.1 401.0 ± 13.2
SUB 43.0 ± 0.4 6.6 ± 0.1 39.4 ± 0.2 55.0 ± 0.4 1.3 ± 0.0 85.3 ± 16.9 409.7 ± 13.7
Table 9: Evaluation of systems trained on the subsampled E2E datasets.
Table 10: Automatic evaluation results on the SynthBio test and validation sets.
Table 11: Evaluation of systems trained on the full E2E training set.
Table 12: List of novel entities used for creating OOD examples.
| BLEU | BERTScore F1 | ROUGE-L | | |
|-------------|----------------|------------|------------|------------|
| BART | 40.8 ± 0.2 | 55.2 ± 0.1 | 48.4 ± 0.2 | |
| TempLM | 40.3 ± 0.3 | 54.3 ± 0.1 | 48.3 ± 0.1 | |
| TempClassic | 36.6 ± 0.2 | 48.8 ± 0.1 | 43.1 ± 0.1 | |
| SUB | 14.1 ± 0.1 | 18.9 ± 0.1 | 26.4 ± 0.1 | |
| Test | BART | 41.7 ± 0.3 | 55.6 ± 0.1 | 48.8 ± 0.1 |
| TempLM | 41.3 ± 0.2 | 55.2 ± 0.2 | 49.1 ± 0.2 | |
| TempClassic | 35.1 ± 0.2 | 47.7 ± 0.1 | 42.0 ± 0.1 | |
| SUB | 14.0 ± 0.1 | 19.0 ± 0.1 | 26.4 ± 0.0 | |
| Valid | | | | |
| Split | Methods | BLEU↑ | NIST↑ | METEOR↑ | ROUGE-L↑ | CIDEr↑ | Eprecision ↓ | Erecall ↓ | #. Templates |
|-------------|------------|------------|------------|------------|------------|-------------|----------------|--------------|----------------|
| BART | 67.1 ± 0.2 | 8.7 ± 0.0 | 45.2 ± 0.0 | 69.5 ± 0.1 | 2.3 ± 0.0 | 0.0 ± 0.0 | 110.7 ± 5.2 | N/A | |
| Test | TempLM | 57.4 ± 0.6 | 7.6 ± 0.0 | 41.0 ± 0.3 | 65.8 ± 0.3 | 2.0 ± 0.0 | 0.0 ± 0.0 | 506.7 ± 15.6 | 509 |
| NTemp† | 55.17 | 7.14 | 41.91 | 65.70 | 1.70 | 7 | 539 | N/A | |
| TempClassic | 58.2 ± 0.0 | 7.5 ± 0.0 | 43.7 ± 0.0 | 67.6 ± 0.0 | 2.2 ± 0.0 | 0.0 ± 0.0 | 516.0 ± 1.0 | 39964 | |
| SUB | 36.8 ± 0.2 | 5.9 ± 0.0 | 39.5 ± 0.1 | 51.2 ± 0.2 | 0.81 ± 1.6 | 183.7 ± 3.2 | 416.3 ± 1.5 | 39964 | |
| BART | 69.8 ± 0.1 | 8.4 ± 0.0 | 47.6 ± 0.1 | 74.3 ± 0.1 | 2.5 ± 0.0 | 0.3 ± 0.3 | 256.3 ± 5.8 | N/A | |
| Valid. | TempLM | 65.5 ± 0.1 | 7.8 ± 0.0 | 45.7 ± 0.1 | 71.9 ± 0.2 | 2.4 ± 0.0 | 0.0 ± 0.0 | 365.7 ± 9.4 | 509 |
| NTemp† | 64.53 | 7.66 | 42.46 | 68.60 | 1.82 | 7 | 539 | N/A | |
| TempClassic | 64.6 ± 0.1 | 7.8 ± 0.0 | 46.0 ± 0.0 | 71.3 ± 0.0 | 2.4 ± 0.0 | 4.0 ± 0.0 | 425.7 ± 0.9 | 39964 | |
| SUB | 35.9 ± 0.1 | 5.6 ± 0.0 | 38.8 ± 0.1 | 51.7 ± 0.1 | 0.73 ± 0.4 | 136.0 ± 3.8 | 374.0 ± 1.7 | 39964 | |
| Area | Times Square, Central Park, Union Station |
|------------|---------------------------------------------|
| Food | Thai, Russian, German |
| Restaurant | McDonald's, Subway, Starbucks, KFC |
| Near | McDonald's, Subway, Starbucks, KFC |
| {area: Times Square, eatType: restaurant, food: Thai, | |
|---------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------|
| Input data | name: McDonald's, near: Subway} |
| BART output | Near Subway in the city center is a restaurant that serves Thai food called the Mcdonald's. {area: Central Park, eatType: restaurant, food: Thai, |
| Input data | name: McDonald's, near: Subway} |
| BART output | Central park is a restaurant that serves thai food. it is located near subway. {area: Times Square, eatType: restaurant, food: Russian, |
| Input data | name: McDonald's, near: KFC} |
| BART output | Near KFC in the city center is a restaurant called the Times Square. it serves Russian food. {area: Union Station, eatType: restaurant, food: German, |
| Input data | name: Subway, near: Starbucks} |
| BART output | In the center of the city near Starbucks is a restaurant called Subway. |
Table 13: Examples OOD output from the BART model finetuned on the downsampled E2E dataset. We color unfaithful information red.
| BLEU | BERTScore F1 | ROUGE-1 | ROUGE-2 | ROUGE-L | | |
|----------------|----------------|------------|------------|------------|------------|------------|
| Human | 37.3 ± 1.5 | 51.3 ± 2.3 | 64.5 ± 1.1 | 41.1 ± 1.6 | 44.9 ± 1.7 | |
| Human Ensemble | 39.1 | 54.0 | 63.7 | 44.1 | 47.3 | |
| BART | 44.0 ± 0.2 | 58.5 ± 0.2 | 70.6 ± 0.3 | 45.8 ± 0.3 | 50.9 ± 0.2 | |
| TempLM | 44.3 ± 1.3 | 58.8 ± 1.0 | 68.6 ± 1.1 | 46.8 ± 1.3 | 51.8 ± 0.7 | |
| Writer Cluster | Human | 24.9 ± 2.0 | 42.2 ± 4.4 | 54.8 ± 2.0 | 34.8 ± 0.6 | 40.5 ± 1.2 |
| Human Ensemble | 32.1 | 48.5 | 57.2 | 37.2 | 40.7 | |
| BART | 40.5 ± 0.4 | 55.4 ± 0.1 | 68.2 ± 0.4 | 42.7 ± 0.3 | 46.5 ± 0.1 | |
| TempLM | 34.4 ± 2.4 | 50.8 ± 0.9 | 61.4 ± 0.9 | 39.8 ± 1.2 | 44.1 ± 0.4 | |
| Spy | | | | | | |
| Cluster | | | | | | |
# Designing Templates For Data To Text **Conversion**
## Goal: Write (Ideally Ten Or More) **Templates** That Generate Realistic Biography Time: 30 Minutes 1. What Is This **Task?**
Your goal is to write a set of *templates* that can be used to automatically convert data into text. For example, consider this *data* which have three field and value pairs:
| Field | Value |
|-------------|--------------|
| name | Ramazan Inal |
| nationality | Turkish |
| occupation | writer |
In order to automatically generate this *text* from the data:
Ramazan Inal is a Turkish writer.
we can create this template:
[name] is a [nationality] [occupation].
and our system will deterministically replace each field with the value specified in the data.
[name] → Ramazan Inal
[nationality] → Turkish
[occupation] → writer
[name] is a [nationality] [occupation]. → Ramazan Inal is a Turkish writer.
Because we want to make templates *flexible* so that they can account for potential grammatical changes necessary for different values (e.g. "a Turkish writer" vs. "an English writer"), we added these additional fields and possible values to all input data:
| Field | Value |
|---------|---------|
| be | One of the following: is, are, was, were |
|---------|---------------------------------------------------------------------------------|
| article | One of the following: a, an |
| number | One of the following: One, two, three, four, five, six, seven, eight, nine, ten |
Therefore, the final template with these additional fields and values will be:
[name] [be] [article] [nationality] [occupation].
[name] → Ramazan Inal
[be] → is
[article] → a
[nationality] → Turkish
[occupation] → writer
[name] [be] [article] [nationality] [occupation]. → Ramazan Inal is a Turkish writer.
Note that sometimes, not all fields are *used* to generate the text. In the previous example, the number field is not used anywhere in the text, hence no need to be specified in the template.
## 2. What Is The **Goal?**
Given hundreds of pairs of such data and desired texts, your goal is to write ten or more templates that can best represent the given data and text *pairs* as well as can be *used* to generate realistic biography for new *data*.
For example, the previous template can be used with new data to generate biography as follows:
Template:
[name] [be] [article] [nationality] [occupation].
New data:
| Field | Value |
|---------|-------------|
| name | Joseph Duch |
| gender | non-binary |
|-------------|---------------------------------------------------------------------------------|
| nationality | Andorran |
| occupation | writer |
| be | One of the following: is, are, was, were |
| article | One of the following: a, an |
| number | One of the following: One, two, three, four, five, six, seven, eight, nine, ten |
## 3. How Do I Do This **Task?**
1. Click one of the links to start: [writer][spy]
![20_image_0.png](20_image_0.png)
a. Please do not refresh your window! The timer will be reset and you will start over.
b. We suggest that you maximize the window and zoom out so that you can browse the data easily.
multiple data and desired texts at the same time. Please enclose the field names with
![21_image_0.png](21_image_0.png)
brackets (e.g. [name]). Valid field names will be colored in **orange**.
a. Each time you write a template, click the "add a template" button in the right
![21_image_1.png](21_image_1.png)
![21_image_2.png](21_image_2.png)
![21_image_3.png](21_image_3.png)
panel, copy and paste your template, and click the "save" button.
c. If necessary, you can delete templates by clicking the close button next to each
![22_image_0.png](22_image_0.png)
template in the list.
4. On the bottom of the screen, you will see a counter for the number of templates and a timer.
5. When you are done, click the finish button next to the timer to save your templates. Share the verification code you got with Mina and share the templates you wrote with Tianyi.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
after conclusion before references
✓ A2. Did you discuss any potential risks of your work?
Left blank.
✗ A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Left Blank.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
section 4.1 specified the kind of model used.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? section 4.1 discussed experimental setup and section B.2 provides hyperparameter details
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Table 1 and Table 2 provide error bars
✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not in the paper but clear from code release (will be made avaiable after anonymous. period)
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left Blank.
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
instruction appended to page 19 onward
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
section 4.4
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
see instruction appended
✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Left blank.
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? see section 4.4 |
vasylenko-etal-2023-incorporating | Incorporating Graph Information in Transformer-based {AMR} Parsing | https://aclanthology.org/2023.findings-acl.125 | Abstract Meaning Representation (AMR) is a Semantic Parsing formalism that aims at providing a semantic graph abstraction representing a given text. Current approaches are based on autoregressive language models such as BART or T5, fine-tuned through Teacher Forcing to obtain a linearized version of the AMR graph from a sentence. In this paper, we present LeakDistill, a model and method that explores a modification to the Transformer architecture, using structural adapters to explicitly incorporate graph information into the learned representations and improve AMR parsing performance. Our experiments show how, by employing word-to-node alignment to embed graph structural information into the encoder at training time, we can obtain state-of-the-art AMR parsing through self-knowledge distillation, even without the use of additional data. We release the code at [\url{http://www.github.com/sapienzanlp/LeakDistill}](\url{http://www.github.com/sapienzanlp/LeakDistill}). | # Incorporating Graph Information In Transformer-Based Amr Parsing
Pavlo Vasylenko1 **Pere-Lluís Huguet Cabot**1,2∗
Abelardo Carlos Martínez Lorenzo1,2∗ **Roberto Navigli**1 1 Sapienza NLP Group, Sapienza University of Rome 2 Babelscape, Rome [email protected]
{martinez, huguetcabot}@babelscape.com [email protected]
## Abstract
![0_Image_0.Png](0_Image_0.Png)
Abstract Meaning Representation (AMR) is a Semantic Parsing formalism that aims at providing a semantic graph abstraction representing a given text. Current approaches are based on autoregressive language models such as BART
or T5, fine-tuned through Teacher Forcing to obtain a linearized version of the AMR graph from a sentence. In this paper, we present LeakDistill, a model and method that explores a modification to the Transformer architecture, using structural adapters to explicitly incorporate graph information into the learned representations and improve AMR parsing performance. Our experiments show how, by employing word-to-node alignment to embed graph structural information into the encoder at training time, we can obtain state-of-the-art AMR parsing through self-knowledge distillation, even without the use of additional data.
We release the code at http://www.github.com/
sapienzanlp/LeakDistill.
## 1 Introduction
Creating a machine-interpretable representation of meaning lies at the core of Natural Language Understanding and has been framed as the Semantic Parsing task. Multiple formalisms have been proposed over the years, e.g., Prague Czech-English Dependency Treebank (Hajic et al. ˇ , 2012), Universal Conceptual Cognitive Annotation (Abend and Rappoport, 2013), BabelNet Meaning Representation (Navigli et al., 2022; Martínez Lorenzo et al., 2022); however, Abstract Meaning Representation (Banarescu et al., 2013, AMR) has received more attention thanks to the large corpus available and a well-defined structure. AMR captures text semantics in the form of a directed acyclic graph
(DAG), with nodes representing concepts and edges representing semantic relationships between them
(see Figure 1). Currently, AMR is widely employed
∗ Equal contributions.
Figure 1: Top: sentence. Middle: AMR graph. Bottom:
Linearized graph. Alignment is represented by colours.
in a plethora of NLP domains, such as Information Extraction (Rao et al., 2017), Text Summarization (Hardy and Vlachos, 2018; Liao et al., 2018),
Question Answering (Lim et al., 2020; Bonial et al.,
2020b; Kapanipathi et al., 2021), Human-Robot Interaction (Bonial et al., 2020a), and Machine Translation (Song et al., 2019), among others.
Until a short while ago, autoregressive models proved to be the best approach for semantic parsing because of their outstanding performance without relying on sophisticated ad-hoc architectures (Bevilacqua et al., 2021). Then, more recently, several approaches have emerged to increase performance by including structural information in the model (Chen et al., 2022), adding extra Semantic Role Labeling tasks (Bai et al., 2022) or by ensembling strategies (Lam et al., 2021; Lee et al.,
2022).
In this paper, following the effort of strengthening the model's learning phase by incorporating meaningful structural information, we investigate the use of structural adapters (Ribeiro et al., 2021a)
that are basically Graph Neural Networks (GNNs)
embedded in the encoder of a Transformer EncoderDecoder architecture. The structural information is derived from intrinsic concept-node alignments from which we build a word-based graph with a structure similar to the original AMR. Leveraging such a graph implies partial data leakage: the graph structure is revealed to a model during training.
To overcome the lack of the leaked information at inference time, we explore Knowledge Distillation (KD), a technique that transfers knowledge from a teacher model to a student model (Hinton et al., 2015). The word-based graph is employed with the structural adapters to obtain soft targets
(the teacher path), which are then used for selfdistillation, transferring the knowledge to the student, which only has access to the text.
Our main contributions are: i) exploring how to add structural information to the AMR parsing model using structural adapters and self-knowledge distillation, ii) state-of-the-art results in AMR parsing for AMR 2.0 and AMR 3.0 datasets, and iii)
competitive base models for AMR parsing.
## 2 Related Work
Over the years, multiple trends have appeared to parse AMR graphs: using statistical methods (Flanigan et al., 2014, 2016; Wang et al.,
2015), neural-transition based parsers (Ballesteros and Al-Onaizan, 2017; Liu et al., 2018; Fernandez Astudillo et al., 2020; Zhou et al., 2021) or bidirectional Transformers (Lyu and Titov, 2018; Zhang et al., 2019; Cai and Lam, 2020) based on BERT (Devlin et al., 2019).
Recently, autoregressive models based on BART (Lewis et al., 2020) have emerged as a dominant approach for AMR parsing, since they obtained state-of-the-art performance without complex pipelines. One notable example is SPRING (Bevilacqua et al., 2021), which frames AMR parsing as a neural machine translation task, where text is translated into a linearized version of the graph. Subsequently, several works extended SPRING using a variety of different strategies.
Procopio et al. (2021) leverages multitask learning to improve cross-lingual AMR parsing results.
ATP (Chen et al., 2022) expands the dataset with extra auxiliary tasks such as Semantic Role Labeling and Dependency Parsing, with pseudo-AMR graphs constructed based on a particular task. AMRBART (Bai et al., 2022) uses a pre-training strategy based on Masked Language Modeling where both text and graph need to be denoised, using 200k graphs generated by SPRING. However, despite their efforts to enhance SPRING's performance, all these systems rely on additional external data. Although Ancestor (Yu and Gildea, 2022), which modifies ancestor information during decoding, and BiBL (Cheng et al., 2022), that adds a secondary graph masking task while training, do not rely on extra data, their performance improvements remain relatively limited. Our proposed model effectively bridges the gap in performance between "with" and "without" extra data by integrating explicit structural information during the training phase.
## 3 Word-Aligned Graph
Our goal is to incorporate graph-structured information into the encoder of a Transformer-based parser. However, the model only has access to the input sentence at that stage, with no hidden representation of AMR-specific nodes and relations.
Thus, we simplify the AMR structure to a wordbased graph by exploiting a pre-existing alignment between spans in text and semantic units in the corresponding AMR graph (see Figure 1).
First, starting with the source AMR graph, we replace the labels of the AMR nodes and relations with the words of the corresponding sentence as provided by the alignment (Figure 2, left). Next, we convert each edge into a node and connect it to its original endpoints (see Figure 2, center). Moreover, following what Ribeiro et al. (2021b) did for AMR graphs, we split each multi-token node (e.g.,
freedom in Figure 2) into a parent node represented by the first token and children nodes connected to it which contain the remaining tokens. We name the resulting graph representation the Word-Aligned Graph (WAG).
We will leverage WAGs to enrich the encoder's hidden representations of words with the AMR
graph's structural information. Unfortunately, a problem arises with non-aligned nodes (e.g., the
:location relation in Figure 2), since they will not have associated hidden states. Therefore, we have two alternatives: i) remove nodes for which we do not have hidden states (*Contracted WAG*), or ii) create new hidden states for them (*Full WAG*).
Contracted WAG As a first option, we remove non-aligned nodes from the graph. However, deleting the nodes from the original graph would pro-
![2_image_0.png](2_image_0.png)
![2_image_1.png](2_image_1.png)
duce a disconnected graph. To obtain a connected structure similar to the original graph, we contract nodes rather than removing them. A contracted WAG (*CWAG*) is a graph in which non-aligned nodes are merged with their closest parent node along with all their relations. Figure 2 (right) depicts a CWAG.
Full WAG Alternatively, we preserve the nodes without alignment (e.g., the node "location" in Figure 2 (center)). This type of graph is referred to as a Full WAG (FWAG), Figure 2 (center) shows an example of FWAG.
## 4 Structural Adapters For Amr Parsing
In this section, we describe the main components of our structure-enhanced approach to AMR parsing.
## 4.1 Parsing With Bart
AMR parsing can be defined as a sequence-tosequence (seq2seq) problem where the input x =
(x1*, ..., x*n) is a sequence of n words (or subwords)
and the output g = (e1*, ..., e*m) is a linearized graph with m elements. Our goal is to learn a function that models the conditional probability:
$$p(g|x)=\prod_{t=1}^{m}p(e_{t}|e_{<t},x),\qquad\qquad(1)$$
where e<t are the tokens of the linearized graph g before step t.
Suppose we have a dataset D of size |D| which consists of pairs (x i, gi), with each g i having length mi. Our objective is then to minimize a negative log-likelihood loss function:
$$\begin{split}L_{nll}^{D}&=L_{nll}(D)=-\sum_{i=1}^{|D|}\log p(g^{i}|x^{i})=\\ &=-\sum_{i=1}^{|D|}\sum_{t=1}^{m^{i}}\log p(e_{t}^{i}|e_{<t}^{i},x^{i})\end{split}\tag{2}$$
We use BART as our seq2seq model implementing the above formulation and, following Blloshmi et al. (2021, SPRING), add special tokens corresponding to i) AMR-related tokens, ii) variable names <R0>, <R1>, ... <Rn>, and iii) other tokens needed for the graph linearizations. Then, we fine-tune BART with the input x and the target g.
## 4.2 Structural Adapters
To incorporate AMR structural information into the encoder, we embed the WAGs - obtained from AMR graphs as illustrated in Section 3 - into adapters that encode the graph structure imposed by them. Structural adapters, as introduced by Ribeiro et al. (2021b), are a modification of the Transformer architecture that improves pre-trained language models for modeling graph information. They consist of a Graph Convolutional (GraphConv) layer and a feed-forward layer, which are connected
![3_image_0.png](3_image_0.png)
through a residual connection. Moreover, we remove layer normalization and set GELU as an activation function (see Figure 3).
Structural adapters are inserted after each encoder's layer (see Figure 4). For each hidden representation h l v ∈ R
bfrom the encoder layer l and the set of edges E in the WAG, we define the GraphConv operation as:
$$\mathrm{GraphConv}_{l}(\mathbf{h}_{v}^{l},{\mathcal{E}})=\sum_{u\in{\mathcal{N}}(v)}{\frac{1}{\sqrt{d_{u}d_{v}}}}\mathbf{W}_{g}^{l}\mathbf{h}_{u}^{l}\ \ (3)$$
where N (v) is the set of node v's adjacent nodes in the WAG (including v itself), dv is the degree of v, and Wlg ∈ R
b×bis a parameter matrix. Then, the updated hidden states z lv are computed as:
$$\begin{array}{l}{{\mathbf{g}_{v}^{l}=\mathrm{GraphConv}_{l}(\mathbf{h}_{v}^{l},{\mathcal{E}})}}\\ {{\mathbf{z}_{v}^{l}=\mathbf{W}_{a}^{l}\sigma(\mathbf{g}_{v}^{l})+\mathbf{h}_{v}^{l},}}\end{array}\qquad\qquad(4)$$
where σ is the GELU activation function and Wla ∈
R
b×bis the feed-forward layer parameter matrix.
## 5 Our Models 5.1 Graph Leakage Model
We bring together the two main components described in Section 4 by incorporating structural adapters in each layer of the encoder of a BARTbased AMR parsing model (see Figure 4 (left) and Algorithm 1). Here, a WAG, together with the hidden representations of tokens in the sentence, are input to the adapters. Since WAGs are constructed using gold AMR graphs, this constitutes a form of information leakage. We name this model the Graph Leakage Model (GLM), with the idea that it will serve as a study of the impact on performance when including WAGs (be they contracted or full, cf. Section 3).
To use FWAGs as input to the adapter, we need representations for non-aligned nodes that do not have an associated hidden state. Therefore, for nodes with labels corresponding to AMR special tokens (e.g., :location) we use their embedding.
For other nodes, we tokenize the label and take the average embedding. Furthermore, these representations are concatenated after the hidden states in the first adapter layer. After each adapter block, we split representations into two groups: i) the updated hidden states for the original input tokens, which serve as inputs of the subsequent Transformer layer, ii) the updated hidden states for the non-aligned nodes, which are concatenated again in the next adapter block (see Algorithm 1).
Then, for both CWAG and FWAG, the input to each adapter layer l consists of a matrix of hidden
## Algorithm 1 Modified Bart Encoder
Input: E - set of WAG edges, S
0- states for
non-aligned nodes, H0- initial hidden states of
the input sequence
**The input sequence for $l\in\{1,...,12\}$ do** $H^{l}\leftarrow$BARTLayer${}_{l}(H^{l-1})$ **if** Leak Mode then **if** Full WAG then $G^{l}\leftarrow$Concat$(H^{l},S^{l-1})$ **else** $G^{l}\gets H^{l}$ **end if** $\widetilde{G}^{l}\leftarrow$StructAdapt${}_{l}(G^{l},\mathcal{E})$ **if** Full WAG then $[\widetilde{H}^{l};S^{l}]\leftarrow$Split$(\widetilde{G}^{l})$ **else** $\widetilde{H}^{l}\leftarrow\widetilde{G}^{l}$ **end if** $\widetilde{H}^{l}\leftarrowH^{l}$ **end if** $H^{l}\leftarrow\widetilde{H}^{l}$ **end for**
states Hland a set of edges E. Note that the set of edges E does not change through layers. Finally, the loss function for GLM is:
Lleak = Lnll(D˜) = − X |D˜| i=1 log q(g i|x i, wi), (5)
where D˜ is the updated dataset consisting of pairs
((x i, wi), gi), q is the probability for GLM, w iis the WAG.
## 5.2 Knowledge Distillation
GLM leverages the alignment information to improve the model's understanding of the graph structure and enhance its (the model's) performance in AMR parsing. Unfortunately, as discussed in the previous section, this constitutes a form of leakage at inference time. Therefore, following the idea of Knowledge Distillation (Hinton et al., 2015, KD),
we set the fine-tuned GLM as a teacher model, which receives both the sentence and WAG as inputs, and our plain BART parser as the student (see Section 4.1). Then, the knowledge acquired by the teacher model is transferred to the student model, which only has access to the sentence. This enables the utilization of WAGs during training while avoiding their use during inference. Hence, our objective is to achieve the following:
$$p(g|x)=q(g|x,w)$$
$$\mathbf{\ddot{}}$$
where p and q are probabilities of the student and the teacher, respectively, and w is the WAG, used only at training time.
As is common in KD, we employ Kullback–Leibler divergence to match the student and the teacher probabilities:
$$L_{K L}=K L(p,q)=\sum_{k=0}^{C-1}p_{k}\log(\frac{p_{k}}{q_{k}})$$
$$\quad(7)$$
) (7)
where C is the number of classes, i.e. our token vocabulary. Usually, the loss L
D
nll for the original task is added to the total loss, thus becoming:
$$L_{K D}=L_{n l l}^{D}+\alpha L_{K L}=$$
$$L_{K D}=L_{n l l}+\alpha L_{K L}=$$ $$=-\sum_{i=1}^{|D|}\sum_{t=1}^{m^{k}}\sum_{k=0}^{C-1}(\delta_{t}^{i}(k)\log p_{t,k}^{i}-\alpha\,p_{t,k}^{i}\log(\frac{p_{t,k}^{i}}{q_{t,k}^{i}})),$$ $$p_{t,k}^{i}=p(e_{t}^{i}{=}k\,|\,e_{<t}^{i},x^{i}),$$ $$q_{t,k}^{i}=q(e_{t}^{i}{=}k\,|\,e_{<t}^{i},x^{i},w^{i})\tag{8}$$
where δ i t(k) is 1 when k is a target class at step t and 0 otherwise; α is a hyperparameter.
There are only architectural differences between the teacher and the student model at the encoder, since the teacher additionally includes the structural adapters. Therefore, we copy the GLM decoder to the student model and freeze the decoder parameters.
## 5.3 Leakdistill
We anticipate that, in our experimentation, KD will have failed to properly transfer the structural information to the student model. Therefore, we propose a single model approach that can be trained by performing two forward passes at each training step, one with and one without the WAG structural information (see Figure 4 and Algorithm 2). We force the two passes to learn the same distribution by adding a Kullback–Leibler divergence loss to the output logits. As a result, the total loss becomes:
1999
$$\begin{array}{c}{{L_{L e a k D i s t i l}=L_{n l l}^{D}+\beta L_{l e a k}+\alpha L_{K L}=}}\\ {{=-\sum_{i=1}^{|D|}\sum_{t=1}^{m^{k}}\sum_{k=0}^{C-1}(\delta_{t}^{i}(k)\log p_{t,k}^{i}+\beta\,\delta_{t}^{i}(k)\log q_{t,k}^{i}}}\\ {{\qquad\qquad-\alpha\,p_{t,k}^{i}\log(\frac{p_{t,k}^{i}}{q_{t,k}^{i}})),}}\end{array}$$
$\eqref{eq:walpha}$.
where L*leak* is the loss for the first pass (basically, GLM), with leaked information, L
D
nll is the loss for the second pass (basically, BART), which is the original negative log-likelihood loss, and finally LKL is the above-described Kullback–Leibler divergence loss. α and β are hyperparameters to control each loss scale.
The above formulation implements what is called self-knowledge distillation (Hahn and Choi, 2019, SKD). Specifically, in our work we project the knowledge via leveraging data leakage in the first pass rather than computing soft target probabilities. Moreover, we calculate KL divergence for all classes to obtain more knowledge. Finally, based on the intuition that there is not enough information to distill at the beginning of training, we schedule a gradual decrease of L*leak*'s multiplier β.
## 6 Experimental Setup
To demonstrate the benefits of incorporating structural information in AMR parsing, we devise a set of experiments to assess its performance in comparison to state-of-the-art models. Before delving into details, we provide information regarding the datasets (Section 6.1), the metrics (Section 6.2) and the model (Section 6.3) used in our experiments.
## 6.1 Datasets
We test on two AMR benchmark datasets: i) AMR
2.0, which has 36521, 1368, and 1371 sentenceAMR pairs in the training, validation, and test sets, respectively, and ii) AMR 3.0, which contains 55635, 1722, and 1898 sentence-AMR pairs in the training, validation, and test sets, respectively (see Appendix E). Furthermore, we test on The Little Prince (TLP) and the Bio AMR out-of-distribution datasets.
Alignment Our approach relies directly on the structural information extracted from the wordconcept alignment. There are several alignment
| Model | AMR 3.0 |
|----------------|-----------|
| SPRING (ours) | 84.55 |
| Contracted WAG | 86.01 |
| Full WAG | 89.58 |
standards: first, Information Sciences Institute (ISI) provides extended AMR 2.0 and AMR 3.0 datasets with alignments of all the graph semantic units that are directly related to the sentences' spans (Pourdamghani et al., 2014). Second, Linguistically Enriched AMR (Blodgett and Schneider, 2021, LEAMR) achieves full graph-alignment coverage by aligning all the graph semantic units to a corresponding span in the sentence.
Silver Data Following Bevilacqua et al. (2021),
we explore the same strategy to generate a dataset with 140k silver sentence-graph pairs. The silver LEAMR alignments are generated using the approach of Huguet Cabot et al. (2022).
## 6.2 Metrics
We evaluate our models using the SMATCH metric
(see Appendix D for more details). Additionally we also perform evaluation with two additional metrics: S2MATCH (Opitz et al., 2020) and WWLK
(Opitz et al., 2021). For WWLK we use WWLKk3e2n introduced in Opitz et al. (2021).
## 6.3 Models
We use SPRING (Bevilacqua et al., 2021) as our baseline, and an auto-regressive model based on BART (Lewis et al., 2020) for predicting linearized versions of AMR graphs. Our models are built on top of this model, inheriting some hyperparameters
(see Table 9).
In order to address the issue of overfitting, we implement a masking strategy which is used in conjunction with dropout and weight decay. For each batch, input tokens are masked with a varying probability p*mask*, which is uniformly sampled from the specified masking range (see Appendix A for details). The strategy is used for all models including SPRING (ours). In the following paragraphs, we explain the specific setup per each model.
Graph Leakage Model We explore two different settings for GLM: i) Contracted WAG, and ii) Full WAG (see Section 3).
| Model | AMR 3.0 | |
|---------------|-----------------------|-------|
| SPRING (ours) | 84.55 | |
| KD | Full WAG (89.58) | 83.90 |
| D | | |
| Lleak + L nll | 84.47 | |
| LeakDistill | Lleak + LKL | 85.03 |
| (Self-KD) | Lleak + L D nll + LKL | 85.04 |
Knowledge Distillation We test KD on the GLM
with the highest SMATCH among CWAG and FWAG (see Table 1).
LeakDistill As done for GLM, we first examine the difference in performance between Contracted WAG and Full WAG. Then, we test Full WAG with i) β scheduling, ii) the silver data, iii) the combination of the silver data and the β scheduling. In the case of the scheduling of β, we start from β = 90 and decrease it linearly at each iteration for 21k iterations in total until it reaches 10. The hyperparameter α is set to 20. The value of β for the case i) and other hyperparameters are listed in Table 9.
## 7 Results
In this section, we provide our experimental findings. All tables show single-run results.
Graph Leakage Model Table 1 shows results for the Graph Leakage Model. While this setup relies on information being leaked from the final graph structure, it sets an upper bound on how encoding such information can improve performance. Here, we observe an increase of around five SMATCH points when using FWAG, whereas CWAG improvements are much smaller. While the model is certainly taking advantage of the leaked information, this is provided through the hidden states of the encoder. Therefore, we need to explore whether some of this performance gain can be kept implicitly without any information leak. Moreover, it is necessary to investigate the persistence of any performance disparity between CWAG and FWAG. This information is intriguing, as CWAG
and FWAG differ in the context of additional information availability. CWAG only possesses a structure akin to the original graph, while FWAG
not only exhibits a greater degree of structural similarity but also includes the original labels for nonaligned nodes.
KD and LeakDistill Table 2 compares the results between applying KD with GLM as the teacher versus the LeakDistill approach, explained in Section 5.3.We see how KD alone falls short of taking full advantage of the performance gains of GLM. On the other hand, LeakDistill, especially when including the KL loss, leads to about a 0.5 SMATCH
point increase on the development set. Hence, we focus on LeakDistill as our main approach. Table 5 shows a breakdown of the experiments with LeakDistill, such as scheduling the KL loss or adding a silver data pretraining phase. It is evident that the performance difference between CWAG
and FWAG remains, paving the way for more indepth research into the types of information that prove advantageous for LeakDistill. Additionally, the final row of Table 5 presents the outcome when the adaptors are active (the green path). It is noticeable that, despite the green path essentially being the GLM, it fails to match the performance level of 89.58.
Main results Tables 3 and 4 shows results for our proposed model, based on BART-large. Our system performs better than any previous single model parser, and, most notably, does so even without extra data, i.e. silver sentence-graph pairs. For AMR 2.0, we see up to 0.7 SMATCH increase over AMRBART and 0.4 on AMR 3.0. The use of extra data only leads to a small improvement, showing the efficiency of our approach, which is able to outperform previous state-of-the-art systems that relied on up to 200K extra samples. In the breakdown performance, we see how our system performs worse than ATP on Reentrancies, Negation and notably SRL. We believe this is due to the multitask nature of ATP, where SRL is explicitly included as a task.
This opens the door to future work exploring the interaction between our approach and the inclusion of auxiliary tasks.
It is worth noting that our system relies on alignment information which is openly discussed at various stages in the paper. We do not consider this information as extra data since it is generated based on the existing data.
Out-of-distribution evaluation Table 6 shows the Out-of-Distribution of LeakDistill. We see a smaller improvement on TLP, 0.3 over AMRBART. On the harder BioAMR, performance increased by over a point, showing how the model is able to generalize well on different domains.
| Model | Extra Data | Smatch | Unlab. | NoWSD | Conc. | Wiki | NER | Reent. | Neg. | SRL |
|---------------|--------------|-------------|----------|---------|---------|--------|-------|----------|--------|-------|
| SPRING (ours) | ✘ | 84.4 | 87.4 | 84.8 | 90.4 | 84.1 | 90.9 | 71.6 | 73.5 | 80.1 |
| BiBL | ✘ | 84.6 | 87.8 | 85.1 | 90.3 | 83.6 | 92.5 | 74.4 | 73.9 | 83.1 |
| Ancestor | ✘ | 84.8 | 88.1 | 85.3 | 90.5 | 84.1 | 91.8 | 75.1 | 74.0 | 83.4 |
| LeakDistill | ✘ | 85.7s,o | 88.6 | 86.2 | 91.0 | 83.9 | 91.1 | 74.2 | 76.8 | 81.8 |
| SPRING | 200K | 84.3 | 86.7 | 84.8 | 90.8 | 83.1 | 90.5 | 72.4 | 73.6 | 80.5 |
| ATP | 40K | 85.2s | 88.3 | 85.6 | 90.7 | 83.3 | 93.1 | 74.7 | 74.9 | 83.3 |
| AMRBART | 200K | 85.4s | 88.3 | 85.8 | 91.2 | 81.4 | 91.5 | 73.5 | 74.0 | 81.5 |
| LeakDistill | 140K | 86.1s,o,b,a | 88.8 | 86.5 | 91.4 | 83.9 | 91.6 | 75.1 | 76.6 | 82.4 |
| Model | Extra Data | Smatch | Unlab. | NoWSD | Conc. | Wiki | NER | Reent. | Neg. | SRL |
|---------------|--------------|-------------|----------|---------|---------|--------|-------|----------|--------|-------|
| SPRING | ✘ | 83.0 | 85.4 | 83.5 | 89.5 | 81.2 | 87.1 | 71.3 | 71.7 | 79.1 |
| SPRING (ours) | ✘ | 83.8 | 86.7 | 84.3 | 89.9 | 81.5 | 87.2 | 71.4 | 71.5 | 79.8 |
| Ancestor | ✘ | 83.5 | 86.6 | 84.0 | 89.5 | 81.5 | 88.9 | 74.2 | 72.6 | 82.2 |
| BiBL | ✘ | 83.9s | 87.2 | 84.3 | 89.8 | 83.7 | 93.2 | 73.8 | 68.1 | 81.9 |
| LeakDistill | ✘ | 84.5s,o,a | 87.5 | 84.9 | 90.5 | 80.7 | 88.5 | 73.1 | 73.7 | 80.7 |
| ATP | 40K | 83.9s | 87.0 | 84.3 | 89.7 | 81.0 | 88.4 | 73.9 | 73.9 | 82.5 |
| AMRBART | 200K | 84.2s,o,a | 87.1 | 84.6 | 90.2 | 78.9 | 88.5 | 72.4 | 72.1 | 80.3 |
| LeakDistill | 140K | 84.6s,o,b,a | 87.5 | 84.9 | 90.7 | 81.3 | 87.8 | 73.4 | 73.0 | 80.9 |
Table 5: Performance of LeakDistill models on the development set of AMR 3.0.
| Model | AMR 3.0 |
|-----------------------------------------|-----------|
| SPRING (ours) | 84.55 |
| Contracted WAG | 84.90 |
| Full WAG | 85.04 |
| + β scheduling | 85.08 |
| + Silver | 85.34 |
| + Silver + β scheduling | 85.28 |
| The green path (Figure 4) FWAG + Silver | 86.09 |
BART base Our state-of-the-art system relies on BART-large, which has 400M parameters. While it shows very strong performance, it has a big computational footprint, especially at inference time due to its auto-regressive generative nature. This makes the need for lighter, more compute efficient models an important step towards better Semantic Parsers. Table 7 shows the performance of our approach when trained on top of BART-base, which has 140M parameters, achieving 83.5 SMATCH
points on AMR 3.0, 1 point higher than AMRBART and, noticeably, surpassing SPRING-large performance by half a point. We believe it is crucial to have close to state-of-the-art performance base models, closing the gap from 2 points to 1 when compared to their large counterparts.
Other metrics Recent studies have shown that achieving a higher SMATCH score does not necessarily result in better performance of an AMR
parser, as demonstrated by Opitz and Frank (2022).
To address this issue, we use two additional evaluation metrics, namely S2MATCH and WWLKk3e2n (WWLK), which measure graded concept similarity and edge label importance, respectively.
Our experiments reveal that S2MATCH correlates well with SMATCH, as expected for monolingual
Model TLP BioAMR SPRING 81.3 61.6 BiBL 78.6 61.1 ATP 78.9 61.2 AMRBART 82.3 63.4
LeakDistill 82.6 64.5
Table 7: BART-base versions performance.
| Model | AMR 2.0 | AMR 3.0 |
|-------------|-----------|-----------|
| SPRING | 82.8 | - |
| AMRBART | 83.6 | 82.5 |
| LeakDistill | 84.7 | 83.5 |
| Model | SMATCH | S2MATCH | WWLK |
|-------------|----------|-----------|--------|
| SPRING | 83.0 | 84.2 | 84.8 |
| BiBL | 83.9 | 84.6 | 82.3 |
| ATP | 83.9 | 84.7 | 85.7 |
| AMRBART | 84.2 | 85.1 | 83.9 |
| LeakDistill | 84.6 | 85.5 | 85.9 |
parsers. Conversely, WWLK is specifically designed for monolingual AMR parsing and emphasizes edge labels. Interestingly, our findings suggest that ATP performs well, second only to our proposed system, LeakDistill. This may be due to the fact that both systems place greater emphasis on edges, with ATP leveraging semantic role labeling data and LeakDistill utilizing structural information such as edges in the FWAGs. In contrast, AMRBART and BiBL exhibit a significant drop in performance compared to the SPRING baseline, possibly due to their use of masking as an additional signal, as their masking strategies may not be beneficial for edge labels.
## 8 Performance Analysis
Seq2seq parsers show decreased performance for longer sentences since a single error at decoding time in an early step can lead to compound errors and suffer from exposure bias. We explore how this affects our model compared to SPRING, ATP
and AMRBART. Figure 5 shows the performance
![8_image_0.png](8_image_0.png)
![8_image_1.png](8_image_1.png)
SMATCH
on AMR 3.0 test set for buckets of 200 sentences split by the number of words. While performance is similar on shorter sentences, with AMRBART
showing slightly better performance, in longer sentences of over 14 words LeakDistill fares better, especially compared to the baseline, which drops to 80 SMATCH points. This experiment also shows how performance is relatively stable for mediumlength sentences (10-30 words, oscillating around 85 points), while it starts deteriorating for longer ones. The high performance on short sentences is likely due to easy-to-parse structures, such as single date sentences.
## 9 Conclusion
We presented a new approach to training the Transformer architecture where partial information of the target sequence can be learned via self-knowledge distillation: the information can be leaked in the encoder implicitly through Transformer adapters which improve training but are switched off during inference. By employing this approach in AMR
parsing, we achieved state-of-the-art results among non-ensemble methods. Moreover, we produced a lightweight AMR parser that outperforms SPRING
while having four times fewer parameters. We also showed that, for all methods, performance degrades as the number of words increases.
Interestingly, our approach can potentially be used in other tasks, such as Relation Extraction, where alignments between input and target sequence elements exist, or structural information is unavailable at inference time.
## 10 Limitations
Our approach for training the Transformer architecture using self-knowledge distillation is promising, but there are still some limitations that need to be addressed in future work. One limitation is that our approach is only tested on the task of AMR parsing, and more evaluations are needed to see if it generalizes well to other tasks, such as Relation Extraction. Additionally, our approach, as is also the case for other current methods, exhibits performance degradation as the number of words in the sentence increases. This may be an indication of the current methods' limitation or lack of robustness to longer sentences.
Another limitation is the added complexity and extra parameters required by the use of Transformer adapters, which increases the overall complexity of the architecture and training time. Even though our approach still achieves state-of-the-art results and it is as lightweight as previous systems at inference time, this fact should be considered by researchers if they should decide to adopt it for other tasks.
In summary, our approach presents an innovative way to train the Transformer architecture and achieve state-of-the-art results in AMR parsing.
However, more work is needed to further improve the performance of the model and to apply it to other tasks as well.
## 11 Ethical Considerations
In considering the ethical and social implications of our proposed approach to AMR parsing, we acknowledge that there are several important considerations to take into account.
One significant concern is the potential for bias in the training data and models, which can result in unfair or discriminatory outcomes for certain groups of individuals. Additionally, the training and test data may not be representative of the population that the model will be applied to, potentially leading to poor performance in specific domains.
Furthermore, our approach relies on the use of Transformer-based models, which have been shown to perpetuate societal biases present in the data used for training. It is, therefore, crucial to ensure that the data used for training is diverse and unbiased.
Moreover, the use of techniques such as selfknowledge distillation may lead to data leakage, where the model overfits the training data and performs poorly on new data, which could have negative impacts on the predictions.
In conclusion, even if we consider our approach does not have negative implications, it is important to note that bias and fairness are complex issues that require ongoing attention and improvement.
## Acknowledgments
The authors gratefully acknowledge the support of the European Union's Horizon 2020 research project Knowledge Graphs at Scale (KnowGraphs) under the Marie Marie Skłodowska-Curie grant agreement No 860801.
The last author gratefully acknowledges the support of the PNRR MUR project PE0000013-FAIR.
## References
Omri Abend and Ari Rappoport. 2013. Universal Conceptual Cognitive Annotation (UCCA). In *Proceedings of the 51st Annual Meeting of the Association* for Computational Linguistics (Volume 1: Long Papers), pages 228–238, Sofia, Bulgaria. Association for Computational Linguistics.
Xuefeng Bai, Yulong Chen, and Yue Zhang. 2022.
Graph pre-training for AMR parsing and generation.
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6001–6015, Dublin, Ireland.
Association for Computational Linguistics.
Miguel Ballesteros and Yaser Al-Onaizan. 2017. AMR
parsing using stack-LSTMs. In *Proceedings of the* 2017 Conference on Empirical Methods in Natural Language Processing, pages 1269–1275, Copenhagen, Denmark. Association for Computational Linguistics.
Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178–186, Sofia, Bulgaria. Association for Computational Linguistics.
Michele Bevilacqua, Rexhina Blloshmi, and Roberto Navigli. 2021. One SPRING to Rule Them Both:
Symmetric AMR semantic Parsing and Generation without a Complex Pipeline. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14):12564–12573.
Rexhina Blloshmi, Michele Bevilacqua, Edoardo Fabiano, Valentina Caruso, and Roberto Navigli. 2021.
SPRING Goes Online: End-to-End AMR Parsing and Generation. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language*
Processing: System Demonstrations, pages 134–142, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Austin Blodgett and Nathan Schneider. 2021. Probabilistic, structure-aware algorithms for improved variety, accuracy, and coverage of AMR alignments.
In *Proceedings of the 59th Annual Meeting of the* Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3310–3321, Online. Association for Computational Linguistics.
Claire Bonial, Lucia Donatelli, Mitchell Abrams, Stephanie M. Lukin, Stephen Tratz, Matthew Marge, Ron Artstein, David Traum, and Clare Voss. 2020a.
Dialogue-AMR: Abstract Meaning Representation for dialogue. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 684–
695, Marseille, France. European Language Resources Association.
Claire Bonial, Stephanie M. Lukin, David Doughty, Steven Hill, and Clare Voss. 2020b. InfoForager:
Leveraging semantic search with AMR for COVID19 research. In *Proceedings of the Second International Workshop on Designing Meaning Representations*, pages 67–77, Barcelona Spain (online). Association for Computational Linguistics.
Deng Cai and Wai Lam. 2020. AMR parsing via graphsequence iterative inference. In *Proceedings of the* 58th Annual Meeting of the Association for Computational Linguistics, pages 1290–1301, Online. Association for Computational Linguistics.
Shu Cai and Kevin Knight. 2013. Smatch: an evaluation metric for semantic feature structures. In *Proceedings of the 51st Annual Meeting of the Association* for Computational Linguistics (Volume 2: Short Papers), pages 748–752, Sofia, Bulgaria. Association for Computational Linguistics.
Liang Chen, Peiyi Wang, Runxin Xu, Tianyu Liu, Zhifang Sui, and Baobao Chang, editors. 2022. ATP:
AMRize Then Parse! Enhancing AMR Parsing with PseudoAMRs. Association for Computational Linguistics.
Ziming Cheng, Zuchao Li, and Hai Zhao. 2022. BiBL:
AMR parsing and generation with bidirectional Bayesian learning. In *Proceedings of the 29th International Conference on Computational Linguistics*,
pages 5461–5475, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Marco Damonte, Shay B. Cohen, and Giorgio Satta.
2017. An incremental parser for Abstract Meaning Representation. In *Proceedings of the 15th Conference of the European Chapter of the Association* for Computational Linguistics: Volume 1, Long Papers, pages 536–546, Valencia, Spain. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Ramón Fernandez Astudillo, Miguel Ballesteros, Tahira Naseem, Austin Blodgett, and Radu Florian. 2020.
Transition-based parsing with stack-transformers. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1001–1007, Online.
Association for Computational Linguistics.
Jeffrey Flanigan, Chris Dyer, Noah A. Smith, and Jaime Carbonell. 2016. CMU at SemEval-2016 task 8:
Graph-based AMR parsing with infinite ramp loss.
In *Proceedings of the 10th International Workshop on* Semantic Evaluation (SemEval-2016), pages 1202–
1206, San Diego, California. Association for Computational Linguistics.
Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A. Smith. 2014. A discriminative graph-based parser for the Abstract Meaning Representation. In *Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics*
(Volume 1: Long Papers), pages 1426–1436, Baltimore, Maryland. Association for Computational Linguistics.
Sangchul Hahn and Heeyoul Choi. 2019. Selfknowledge distillation in natural language processing.
In *Proceedings of the International Conference on* Recent Advances in Natural Language Processing
(RANLP 2019), pages 423–430, Varna, Bulgaria. INCOMA Ltd.
Jan Hajic, Eva Haji ˇ cová, Jarmila Panevová, Petr Sgall, ˇ
Ondˇrej Bojar, Silvie Cinková, Eva Fucíková, Marie ˇ
Mikulová, Petr Pajas, Jan Popelka, Jiˇrí Semecký, Jana Šindlerová, Jan Štepánek, Josef Toman, Zde ˇ nka ˇ
Urešová, and Zdenek Žabokrtský. 2012. ˇ Announcing Prague Czech-English Dependency Treebank 2.0. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12),
pages 3153–3160, Istanbul, Turkey. European Language Resources Association (ELRA).
Hardy Hardy and Andreas Vlachos. 2018. Guided neural language generation for abstractive summarization using Abstract Meaning Representation. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 768–773, Brussels, Belgium. Association for Computational Linguistics.
Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean.
2015. Distilling the knowledge in a neural network.
ArXiv, abs/1503.02531.
Pere-Lluís Huguet Cabot, Abelardo Carlos Martínez Lorenzo, and Roberto Navigli. 2022.
AMR Alignment: Paying Attention to CrossAttention. *ArXiv*, abs/2206.07587.
Pavan Kapanipathi, Ibrahim Abdelaziz, Srinivas Ravishankar, Salim Roukos, Alexander Gray, Ramón Fernandez Astudillo, Maria Chang, Cristina Cornelio, Saswati Dana, Achille Fokoue, Dinesh Garg, Alfio Gliozzo, Sairam Gurajada, Hima Karanam, Naweed Khan, Dinesh Khandelwal, Young-Suk Lee, Yunyao Li, Francois Luus, Ndivhuwo Makondo, Nandana Mihindukulasooriya, Tahira Naseem, Sumit Neelam, Lucian Popa, Revanth Gangi Reddy, Ryan Riegel, Gaetano Rossiello, Udit Sharma, G P Shrivatsa Bhargav, and Mo Yu. 2021. Leveraging Abstract Meaning Representation for knowledge base question answering. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 3884–3894, Online. Association for Computational Linguistics.
Laura Baranescu Claire Bonial Madalina Bardocz Kira Griffitt Ulf Hermjakob Daniel Marcu Martha Palmer Tim O'Gorman Nathan Schneider Kevin Knight, Bianca Badarau. 2020. Abstract meaning representation (amr) annotation release 3.0.
Hoang Thanh Lam, Gabriele Picco, Yufang Hou, YoungSuk Lee, Lam M. Nguyen, Dzung T. Phan, Vanessa López, and Ramon Fernandez Astudillo. 2021. Ensembling Graph Predictions for AMR Parsing.
Young-Suk Lee, Ramón Astudillo, Hoang Thanh Lam, Tahira Naseem, Radu Florian, and Salim Roukos.
2022. Maximum Bayes Smatch ensemble distillation for AMR parsing. In *Proceedings of the 2022* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5379–5392, Seattle, United States. Association for Computational Linguistics.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Kexin Liao, Logan Lebanoff, and Fei Liu. 2018. Abstract Meaning Representation for multi-document summarization. In *Proceedings of the 27th International Conference on Computational Linguistics*,
pages 1178–1190, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Jungwoo Lim, Dongsuk Oh, Yoonna Jang, Kisu Yang, and Heuiseok Lim. 2020. I know what you asked:
Graph path learning using AMR for commonsense reasoning. In *Proceedings of the 28th International* Conference on Computational Linguistics, pages 2459–2471, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Yijia Liu, Wanxiang Che, Bo Zheng, Bing Qin, and Ting Liu. 2018. An AMR aligner tuned by transitionbased parser. In *Proceedings of the 2018 Conference* on Empirical Methods in Natural Language Processing, pages 2422–2430, Brussels, Belgium. Association for Computational Linguistics.
Chunchuan Lyu and Ivan Titov. 2018. AMR parsing as graph prediction with latent alignment. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 397–407, Melbourne, Australia. Association for Computational Linguistics.
Abelardo Carlos Martínez Lorenzo, Marco Maru, and Roberto Navigli. 2022. Fully-Semantic Parsing and Generation: the BabelNet Meaning Representation.
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1727–1741, Dublin, Ireland.
Association for Computational Linguistics.
Roberto Navigli, Rexhina Blloshmi, and Abelardo Carlos Martinez Lorenzo. 2022. BabelNet Meaning Representation: A Fully Semantic Formalism to Overcome Language Barriers. Proceedings of the AAAI
Conference on Artificial Intelligence, 36.
Juri Opitz, Angel Daza, and Anette Frank. 2021.
Weisfeiler-leman in the bamboo: Novel AMR graph metrics and a benchmark for AMR graph similarity.
Transactions of the Association for Computational Linguistics, 9:1425–1441.
Juri Opitz and Anette Frank. 2022. Better Smatch = better parser? AMR evaluation is not so simple anymore.
In *Proceedings of the 3rd Workshop on Evaluation* and Comparison of NLP Systems, pages 32–43, Online. Association for Computational Linguistics.
Juri Opitz, Letitia Parcalabescu, and Anette Frank. 2020.
AMR similarity metrics from principles. *Transactions of the Association for Computational Linguistics*, 8:522–538.
Nima Pourdamghani, Yang Gao, Ulf Hermjakob, and Kevin Knight. 2014. Aligning English strings with Abstract Meaning Representation graphs. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 425–429, Doha, Qatar. Association for Computational Linguistics.
Luigi Procopio, Rocco Tripodi, and Roberto Navigli.
2021. SGL: Speaking the graph languages of semantic parsing via multilingual translation. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 325–337, Online. Association for Computational Linguistics.
Sudha Rao, Daniel Marcu, Kevin Knight, and Hal Daumé III. 2017. Biomedical event extraction using Abstract Meaning Representation. In *BioNLP 2017*,
pages 126–135, Vancouver, Canada,. Association for Computational Linguistics.
Leonardo F. R. Ribeiro, Yue Zhang, and Iryna Gurevych.
2021a. Structural adapters in pretrained language models for AMR-to-Text generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4269–4282, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Leonardo F. R. Ribeiro, Yue Zhang, and Iryna Gurevych.
2021b. Structural adapters in pretrained language models for amr-to-text generation.
Stefan Riezler and John T. Maxwell. 2005. On some pitfalls in automatic evaluation and significance testing for MT. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 57–64, Ann Arbor, Michigan. Association for Computational Linguistics.
Linfeng Song, Daniel Gildea, Yue Zhang, Zhiguo Wang, and Jinsong Su. 2019. Semantic neural machine translation using AMR. *Transactions of the Association for Computational Linguistics*, 7:19–31.
Chuan Wang, Nianwen Xue, and Sameer Pradhan. 2015.
Boosting transition-based AMR parsing with refined actions and auxiliary analyzers. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 857–862, Beijing, China.
Association for Computational Linguistics.
Ledell Wu, Fabio Petroni, Martin Josifoski, Sebastian Riedel, and Luke Zettlemoyer. 2020. Scalable zeroshot entity linking with dense entity retrieval. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 6397–6407, Online. Association for Computational Linguistics.
Chen Yu and Daniel Gildea. 2022. Sequence-tosequence AMR parsing with ancestor information.
In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 2: Short Papers), pages 571–577, Dublin, Ireland.
Association for Computational Linguistics.
Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019. AMR parsing as sequence-tograph transduction. In *Proceedings of the 57th Annual Meeting of the Association for Computational* Linguistics, pages 80–94, Florence, Italy. Association for Computational Linguistics.
Jiawei Zhou, Tahira Naseem, Ramón Fernandez Astudillo, and Radu Florian. 2021. AMR parsing with action-pointer transformer. In *Proceedings of the* 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5585–5598, Online. Association for Computational Linguistics.
## Appendices A Model Hyperparameters
Table 9 lists hyperparameters and search space for the experiments:
- LR sched. - learning rate scheduler
- KL temp. - Kullback–Leibler divergence temperature
- AMR 3 aligns. - type of alignments for AMR
3.0
- Mask. range - masking range. For each batch, we mask the input tokens with probability p*mask*, the value for which is sampled uniformly from the masking range. For instance, the [0; 0.15] range means p*mask* ∼ U(0, 0.15)
The LeakDistill experiments detailed in Table 2 were performed utilizing the final set of hyperparameters listed in Table 9. However, it should be noted that the experiment that did not involve KL
loss did not necessitate the use of the variable α.
## B Hardware And Size Of The Model
We performed experiments on a single NVIDIA
3090 GPU with 64GB of RAM and Intel® Core™
i9-10900KF CPU. The total number of trainable parameters of LeakDistill is 434,883,596. Training the model on the silver data took 33 hours, whereas further fine-tuning took 16 hours.
## C Blink
All systems from Tables 3 and 4 use BLINK (Wu et al., 2020) for wikification. For this purpose, we used the *blinkify.py* script from the SPRING
repository.
## D Metric
We evaluate AMR parsing using the SMATCH metric (Cai and Knight, 2013) and extra scores of Damonte et al. (2017): i) Unlabel, compute on the predicted graphs after removing all edge labels, ii) No WSD, compute while ignoring Propbank senses (e.g., duck-01 vs duck-02), iii) Wikification, F-score on the wikification (:wiki roles), iv) NER,
F-score on the named entity recognition (:name roles), v) Negations, F-score on the negation detection (:polarity roles), vi) Concepts, F-score on the concept identification task, vii) Reentrancy, computed on reentrant edges only, viii) Semantic Role Labeling (SRL), computed on :ARG-i roles only.
| Group | Parameter | Values |
|--------------------|---------------------|----------------|
| Optimizer | RAdam | |
| Batch size | 500 | |
| Dropout | 0.25 | |
| Attent. dropout | 0 | |
| Grad. accum. | 10 | |
| Weight decay | 0.004 | |
| LR | 0.00005 | |
| Beamsize | 5 | |
| Inherited (SPRING) | LR sched. | const., linear |
| SPRING (ours) | Mask. range | [0; {0, 0.15}] |
| Beamsize | 5, 10 | |
| Encoder layers | 1-12 | |
| Adapter | Activation | GELU |
| Dropout | 0.01, 0.1 | |
| LR | 0.00005, 0.0001 | |
| GLM | LR sched. | const., linear |
| Mask. range | [0; 0.15] | |
| α | 10 | |
| LR | 0.00005, 0.0001 | |
| LR sched. | const., linear | |
| Weight decay | 0.004, 0.0001 | |
| Decoder | train, freeze | |
| Mask. range | [0; 0.15] | |
| KD | LR sched. | const., linear |
| KL temp. | 1, 2 | |
| α | 1, 5, 10, 20 | |
| β | 1, 5, 10, sched. | |
| AMR 3 aligns. | ISI, LeAMR | |
| Mask. range | [0; {0, 0.1, 0.15}] | |
| Beamsize | 5, 10 | |
| LeakDistill | | |
Table 9: Final hyperparameters and search space for the experiments. All groups have the same parameters as original SPRING if they are not overwritten. For instance, SPRING (ours) and for LeakDistill have the same learning rate of 0.00005.
## E Data
The AMR 3.0 (Kevin Knight, 2020) data used in this paper is licensed under the *LDC User Agreement for Non-Members* for LDC subscribers, which can be found here. The *The Little Prince* Corpus can be found here from the Information Science Institute of the University of Southern California.
## F Algorithms
Algorithm 2 shows one training step of the LeakDistill model.
Algorithm 2 One training step of the LeakDistill model Input: X - batch of input sequences and WAGs, Y - batch of target graphs Set Model to Normal Mode L
D
nll, P robs1 ← Model(*X, Y* )
Set Model to Leak Mode Lleak, P robs2 ← Model(*X, Y* )
LKL ← KLDiv (P robs1*, P robs*2)
L ← αLKL + βL*leak* + L
D nll Optimization step of L
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 10
✓ A2. Did you discuss any potential risks of your work?
Section 10
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 6
✓ B1. Did you cite the creators of artifacts you used?
Section 2, 4, 5
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Appendix E
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Self-evident given that AMR is widely used as a dataset for Semantic Parsing systems
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? our work is based on the AMR dataset from the LDC
✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? out of scope of our work
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 6.1
## C ✓ **Did You Run Computational Experiments?** Section 6
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix B
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix A
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 7, we run significance tests to compare the difference systems.
✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)? no usage of such libs D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
yang-etal-2023-rethinking | Rethinking the Word-level Quality Estimation for Machine Translation from Human Judgement | https://aclanthology.org/2023.findings-acl.126 | Word-level Quality Estimation (QE) of Machine Translation (MT) aims to detect potential translation errors in the translated sentence without reference. Typically, conventional works on word-level QE are usually designed to predict the quality of translated words in terms of the post-editing effort, where the word labels in the dataset, i.e., OK or BAD, are automatically generated by comparing words between MT sentences and the post-edited sentences through a Translation Error Rate (TER) toolkit. While the post-editing effort can be used to measure the translation quality to some extent, we find it usually conflicts with human judgment on whether the word is well or poorly translated. To investigate this conflict, we first create a golden benchmark dataset, namely \textit{HJQE} (Human Judgement on Quality Estimation), where the source and MT sentences are identical to the original TER-based dataset and the expert translators directly annotate the poorly translated words on their judgments. Based on our analysis, we further propose two tag-correcting strategies which can make the TER-based artificial QE corpus closer to \textit{HJQE}. We conduct substantial experiments based on the publicly available WMT En-De and En-Zh corpora. The results not only show our proposed dataset is more consistent with human judgment but also confirm the effectiveness of the proposed tag-correcting strategies.For reviewers, the corpora and codes can be found in the attached files. | # Rethinking The Word-Level Quality Estimation For Machine Translation From Human Judgement
Zhen Yang, Fandong Meng, Yuanmeng Yan, and Jie Zhou Pattern Recognition Center, WeChat AI, Tencent Inc, Beijing, China
{zieenyang, fandongmeng, withtomzhou}@tencent.com
## Abstract
Word-level Quality Estimation (QE) of Machine Translation (MT) aims to detect potential translation errors in the translated sentence without reference. Typically, conventional works on word-level QE are usually designed to predict the quality of translated words in terms of the post-editing effort, where the word labels in the dataset, i.e., OK or BAD, are automatically generated by comparing words between MT sentences and the post-edited sentences through a Translation Error Rate (TER)
toolkit. While the post-editing effort can be used to measure the translation quality to some extent, we find it usually conflicts with human judgment on whether the word is well or poorly translated. To investigate this conflict, we first create a golden benchmark dataset, namely *HJQE* (Human Judgement on Quality Estimation), where the source and MT sentences are identical to the original TER-based dataset and the expert translators directly annotate the poorly translated words on their judgments. Based on our analysis, we further propose two tag-correcting strategies which can make the TER-based artificial QE corpus closer to *HJQE*. We conduct substantial experiments based on the publicly available WMT En-De and En-Zh corpora. The results not only show our proposed dataset is more consistent with human judgment but also confirm the effectiveness of the proposed tag-correcting strategies.1
## 1 Introduction
Quality Estimation of Machine Translation aims to automatically estimate the translation quality of the MT systems with no reference available. The sentence-level QE predicts a score indicating the overall translation quality, and the word-level QE
needs to predict the quality of each translated word as OK or BAD. Recently, the word-level QE attracts much attention for its potential ability to directly 1Corpus of *HJQE* can be found at: https://github.
com/ZhenYangIACAS/HJQE
![0_image_0.png](0_image_0.png)
Overall Human Translation Error Rate (HTER score): **0.82**
detect poorly-translated words and alert the user with concrete translation errors. Currently, the collection of the word-level QE datasets mainly relies on the Translation Error Rate (TER) toolkit (Snover et al., 2006). Specifically, given the machine translations and their corresponding post-edits (PE, generated by human translators or target sentences of the parallel corpus as the pseudo-PE), the rulebased TER toolkit is used to generate the wordlevel alignment between the MT and the PE based on the principle of minimal editing (Tuan et al.,
2021; Lee, 2020). All MT words not aligned to PE
are annotated as BAD (shown in Figure 1). Such annotation is also referred to as post-editing effort
(Fomicheva et al., 2020a; Specia et al., 2020).
The post-editing effort measures the translation quality in terms of the efforts the translator needs to spend to transform the MT sentence to the golden reference. However, in our previous experiments and real applications, we find it usually conflicts with human judgments on whether the word is well or poorly translated. Two examples in Figure 2 show the conflicts between the TER-based annotation and human judgment. In figure 2a, the translated words, namely "我", "很", "高兴" and "发 言", are annotated as BAD by TER since they are not exactly in the same order with their counterparts in the PE sentence. However, from human judgment, the reordering of these words does not
![1_image_0.png](1_image_0.png)
hurt the meaning of the translation and even makes the MT sentence polished. And the word "要求" is also regarded as a good translation by human judgment as it is the synonym of the word "邀 请". In figure 2b, the clause "扎波罗齐安海特曼 号" in a very good translation of "The Zaporizhian Hetman " from human judgment. However, it is annotated as BAD by TER since it is not aligned with any words in the PE sentence. In many application scenarios and downstream tasks, it is usually important even necessary to detect whether the word is well or poorly translated from the human judgment (Yang et al., 2021). However, most previous works still use the TER-based dataset for training and evaluation, which makes the models' predictions deviate from human judgment.
In the recent WMT22 word-level QE shared task, several language pairs, such as English-to-German, Chinese-to-English and English-to-Russian, tried to evaluate the model with the corpus based on the annotation of Multilingual Quality Metrics (MQM)
which is introduced from the Metrics shared task.2 However, the conflict between the TER-based annotation and human judgment and its effects are still unclear to the researchers. To investigate this conflict and overcome the limitations stated above, We first collect a high-quality benchmark dataset, named *HJQE*, where the source and MT sentences are directly taken from the original TER-based dataset and the human annotators annotate the text spans that lead to translation errors in MT sentences. With the identical source and MT sentences, it is easier for us to make insight into the underline causes of the conflict. Then, based on our deep analysis, we further propose two tag-correcting strategies, namely tag refinement strategy and tree-based annotation strategy, which make the TER-based annotations more consistent with human judgment.
Our contributions can be summarized as follows:
1) We collect a new dataset called *HJQE* that directly annotates the word-level translation errors on MT sentences. We conduct detailed analyses and demonstrate the differences between *HJQE*
and the TER-based dataset. 2) We propose two automatic tag-correcting strategies which make the TER-based artificial dataset more consistent with human judgment. 3) We conduct experiments on HJQE dataset as well as its TER-based counterpart.
Experimental results of the automatic and human evaluation show that our approach achieves higher consistency with human judgment.
## 2 Data Collection And Analysis 2.1 Data Collection
To make our collected dataset comparable to TER-generated ones, we directly take the source and MT texts from MLQE-PE (Fomicheva et al.,
2020a), the widely used official dataset for WMT20 QE shared tasks. MLQE-PE provides the TERgenerated annotations for English-German (En-De)
and English-Chinese (En-Zh) translation directions.
The source texts are sampled from Wikipedia documents and the translations are obtained from the Transformer-based system (Vaswani et al., 2017).
Our data collection follows the following process. First, we hire a number of translator experts, where 5 translators for En-Zh and 6 for En-De.
| Dataset | Split | English-German | English-Chinese | | | | | | |
|-------------|---------|------------------|-------------------|----------------|--------------|-------------|-----------------|----------------|---------------|
| samples | tokens | MT BAD tags | MT Gap BAD tags | samples | tokens | MT BAD tags | MT Gap BAD tags | | |
| MLQE-PE | train | 7000 | 112342 | 31621 (28.15%) | 5483 (4.59%) | 7000 | 120015 | 65204 (54.33%) | 10206 (8.04%) |
| valid | 1000 | 16160 | 4445 (27.51%) | 716 (4.17%) | 1000 | 17063 | 9022 (52.87%) | 1157 (6.41%) | |
| train | 7000 | 112342 | 10804 (9.62%) | 640 (0.54%) | 7000 | 120015 | 19952 (16.62%) | 348 (0.27%) | |
| valid | 1000 | 16160 | 1375 (8.51%) | 30 (0.17%) | 1000 | 17063 | 2459 (14.41%) | 8 (0.04%) | |
| HJQE (ours) | test | 1000 | 16154 | 993 (6.15%) | 28 (0.16%) | 1000 | 17230 | 2784 (16.16%) | 11 (0.06%) |
![2_image_0.png](2_image_0.png)
They are all graduated students who major in translation and have the professional ability in the corresponding translation direction. For En-Zh, the translations are tokenized as MLQE-PE. To make the annotation process as fair and unbiased as possible, each annotator is provided only the source sentence and its corresponding translation (the human annotators are not allowed to access the PE sentences in MLQE-PE). For each sample, we randomly distribute it to two annotators. After one example has been annotated by two translators, we check whether the annotations are consistent. If they have annotation conflicts, we will re-assign the sample to the other two annotators until we get consistent annotations. For the annotation protocol, we ask human translators to find words, phrases, clauses, or even whole sentences that contain translation errors in MT sentences and annotate them as BAD tags. Here, the translation error means the translation distorts the meaning of the source sentence but excludes minor mismatches such as synonyms and punctuation. Meanwhile, if the translation does not conform to the target language's grammar, they should also find them and annotate them as BAD. The annotation and distribution of samples are automatically conducted through the annotation system. After all the samples are annotated, we ask another translator to check the annotation accuracy by sampling a small proportion (400 samples) of the full dataset and ensure the accuracy is above 98%.
## 2.2 Statistics And Analysis
Overall Statistics. In Table 1, we show detailed statistics of the collected *HJQE*. For comparison, we also present the statistics of MLQE-PE. First, we see that the total number of BAD tags decreases heavily when human's annotations replace the TERbased annotations (from 28.15% to 9.62% for EnDe, and from 54.33% to 16.62% for En-Zh). It indicates that the human annotations tend to annotate OK as long as the translation correctly expresses the meaning of the source sentence, but ignores the secondary issues like synonym substitutions and constituent reordering. Second, we find the number of BAD tags in the gap (indicating a few words are missing between two MT tokens) also greatly decreases. It's because human annotations tend to regard the missing translations (i.e., the BAD
gaps) and the translation errors as a whole but only annotate BAD tags on MT tokens3.
Unity of BAD Spans. To reveal the unity of the human annotations, we group the samples according to the number of BAD spans in every single sample, and show the overall distribution. From Figure 3, we can find that the TER-based annotations follow the Gaussian distribution, where a large proportion of samples contain 2, 3, or even more BAD spans, indicating the TER-based annotations are fragmented. However, our collected annotations on translation errors are more unified, with only a small proportion of samples including more than 2 BAD spans. Besides, we find a large number of samples that are fully annotated as OK in human annotations. However, the number is extremely small for TER-based annotations (78 in English-3As a result, we do not include the sub-task of predicting gap tags in *HJQE*.
![3_image_0.png](3_image_0.png)
German and 5 for English-Chinese). This shows a large proportion of BAD spans in TER-based annotations do not really destroy the semantics of translations and are thus regarded as OK by human annotators.
Based on the above statistics and the examples in Figure 2, we conclude the two main issues that result in the conflicts between the TER-based annotations and human judgment. First, the PE sentences often substitute some words with better synonyms and reorder some constituents for polish purposes.
These operations do not destroy the meaning of the translated sentence, but make some words mistakenly annotated under the exact matching criterion of TER; Second, when a fatal error occurs, the human annotator typically takes the whole sentence or clause as BAD. However, the TER toolkit still tries to find trivial words that align with PE, resulting in fragmented and wrong annotations.
## 2.3 Difference From Mqm
In the recent WMT22 word-level QE shared task, several language pairs began to use MQM-based annotation introduced from the Metrics shared task as the quality estimation (Freitag et al., 2021a,c).
There are two main differences between the proposed *HJQE* and the MQM-based corpus: 1) The MQM-based corpus is mainly collected to evaluate the metrics of MT. To temper the effect of long segments, only five errors per segment are imposed for segments containing more errors. However, as *HJQE* is collected to evaluate the quality of each translated word, we impose all errors in each segment; 2) *HJQE* are collected by taking the identical source and MT sentences to the TER-based benchmark dataset, namely MLQE-PE,
which makes it more straightforward to perform comparison and analysis.
## 3 Approach
This section first introduces the model backbone and the self-supervised pre-training approach based on the large-scale MT parallel corpus. Then, we propose two correcting strategies to make the TERbased artificial tags closer to human judgment.
## 3.1 Model Architecture
Following (Ranasinghe et al., 2020; Lee, 2020; Moura et al., 2020; Ranasinghe et al., 2021),
we select the XLM-RoBERTa (XLM-R) (Conneau et al., 2020) as the backbone of our model. XLM-R is a transformer-based masked language model pre-trained on large-scale multilingual corpus and demonstrates state-ofthe-art performance on multiple cross-lingual downstream tasks. As shown in Figure 4a, we concatenate the source sentence and the MT
sentence together to make an input sample: xi =
<s>w src 1
, . . . , wsrc m </s><s>w mt 1
, . . . , wmt n </s>,
where m is the length of the source sentence (src)
and n is the length of the MT sentence (mt). <s> and </s> are two special tokens to annotate the start and the end of the sentence in XLM-R,
respectively.
For the j-th token w mt jin the MT sentence, we take the corresponding representation from XLMR for binary classification to determine whether wj belongs to good translation (OK) or contains translation error (BAD) and use the binary classification loss to train the model:
$$\begin{array}{c}{{s_{i j}=\sigma(\mathbf{w}^{\mathsf{T}}\mathbf{XLM}\mathbf{\mbox{-}}\mathbf{R}_{j}(\mathbf{x}_{i}))}}\\ {{{\mathcal{L}}_{i j}=-(y\cdot\log s_{i j}+(1-y)\cdot\log(1-s_{i j}))}}\\ {{{\mathrm{(2)}}}}\end{array}$$
where XLM-Rj (xi) ∈ R
d(d is the hidden size of XLM-R) indicates the representation output by
![4_image_0.png](4_image_0.png)
XLM-R corresponding to the token w mt j
, σ is the sigmoid function, w ∈ R
d×1is the linear layer for binary classification and y is the ground truth label.
## 3.2 Self-Supervised Pre-Training Approach
Since constructing the golden corpus is expensive and labor-consuming, automatically building the synthetic corpus based on the MT parallel corpus for pre-training is very promising and has widely been used by conventional works (Tuan et al., 2021; Zheng et al., 2021). As shown in Figure 4b, the conventional approaches first split the parallel corpus into the training and the test set. The NMT
model is trained with the training split and then used to generate translations for all sentences in the test split. Then, a large number of triplets are obtained, each consisting of source, MT, and target sentences. Finally, the target sentence is regarded as the pseudo-PE, and the TER toolkit is used to generate word-level annotations.
## 3.3 Tag-Correcting Strategies
As we discussed above, the conflicts between the TER-based annotation and human judgment limit the performance of the conventional selfsupervised pre-training approach on the proposed HJQE. In this section, we introduce two tag correcting strategies, namely tag refinement and treebased annotation, that target these issues and make the TER-generated synthetic QE annotations more consistent with human judgment.
Tag Refinement Strategy. In response to the first issue (i.e., wrong annotations due to the synonym substitution or constituent reordering), we propose the tag refinement strategy, which corrects the false BAD tags to OK. Specifically, as shown in Figure 5a, we first generate the alignment between the MT sentence and the reference sentence
(i.e., the pseudo-PE) using FastAlign4(Dyer et al.,
2013). Then we extract the phrase-to-phrase alignment by running the phrase extraction algorithm of NLTK5(Bird, 2006). Once the phrase-level alignment is prepared, we substitute each BAD span with the corresponding aligned spans in the pseudo-PE
and use the language model to calculate the change of the perplexity ∆ppl after this substitution. If |∆ppl| < α, where α is a hyper-parameter indicating the threshold, we regard that the substitution has little impact on the semantic and thus correct the BAD tags to OK. Otherwise, we regard the span does contain translation errors and keep the BAD
tags unchanged (Figure 5b).
Tree-based Annotation Strategy. Human direct annotation tends to annotate the *smallest* constituent that causes fatal translation errors as a whole (e.g., the whole words, phrases, clauses, etc.). However, TER-based annotations are often fragmented, with the translation being split into multiple BAD spans. Besides, the BAD spans are often not well-formed in linguistics i.e., the words in the BAD span from different linguistic constituents.
To address this issue, we propose the constituent tree-based annotation strategy. It can be regarded as an enhanced version of the tag refinement strategy that gets rid of the TER-based annotation. As shown in Figure 5c, we first generate the constituent tree for the MT sentences. Each internal node (i.e., the non-leaf node) in the constituent tree represents a well-formed phrase such as a noun phrase (NP), verb phrase (VP), prepositional phrase
(PP), etc. For each node, we substitute it with
| Model | English-German (En-De) | English-Chinese (En-Zh) | | | | | | |
|--------------------------------------------------------------------------|--------------------------|---------------------------|------------|-------|--------|-------|------------|-------|
| MCC | F-OK | F-BAD | F-BAD-Span | MCC | F-OK | F-BAD | F-BAD-Span | |
| Baselines | | | | | | | | |
| FT on HJQE only | 26.29 | 95.08 | 31.09 | 20.97 | 38.56 | 90.76 | 47.56 | 26.66 |
| PT (TER-based) | 9.52 | 34.62 | 13.54 | 3.09 | 15.17 | 36.66 | 31.53 | 2.40 |
| + FT on HJQE | 24.82 | 94.65 | 29.82 | 18.52 | 39.09 | 91.29 | 47.04 | 25.93 |
| Pre-training only with tag correcting strategies (ours) | | | | | | | | |
| PT w/ Tag Refinement | 10.12* | 49.33 | 14.32 | 3.62 | 19.36* | 53.16 | 34.10 | 3.79 |
| PT w/ Tree-based Annotation | 8.94 | 84.50 | 15.84 | 6.94 | 21.53* | 59.21 | 35.54 | 6.32 |
| Pre-training with tag correcting strategies + fine-tuning on HJQE (ours) | | | | | | | | |
| PT w/ Tag Refinement + FT | 27.54* | 94.21 | 35.25 | 21.13 | 40.35* | 90.88 | 49.33 | 25.60 |
| PT w/ Tree-based Annotation + FT | 27.67* | 94.44 | 32.41 | 21.38 | 41.33* | 91.22 | 49.82 | 27.21 |
Table 2: Performance on the test set of *HJQE*. PT indicates pre-training and FT indicates fine-tuning. Results are all reported by ×100. The numbers with * indicate the significant improvement over the corresponding baseline with p
< 0.05 under t-test (Semenick, 1990). The results on the validation sets are presented in Appendix B.
the corresponding aligned phrase in the pseudoPE. Then we still use the change of the perplexity
∆ppl to indicate whether the substitution of this phrase improves the fluency of the whole translation. To only annotate the smallest constituents that exactly contain translation errors, we normalize ∆ppl by the number of words in the phrase and use this value to sort all internal nodes in the constituent tree: ∆pplnorm =∆ppl r−l+1 , where l and r indicate the left and right positions of the phrase, respectively. The words of a constituent node are integrally labeled as BAD only if |∆pplnorm| < β as well as there is no overlap with nodes that are higher ranked. β is a hyper-parameter.
## 4 Experiments
Datasets. To verify the effectiveness of the proposed corpus and approach, we conduct experiments on both *HJQE* and MLQE-PE. Note that MLQE-PE and *HJQE* share the same source and MT sentences, thus they have exactly the same number of samples. We show the detailed statistics in Table 1. For the pre-training, we use the parallel dataset provided in the WMT20 QE shared task to generate the artificial QE dataset.
Baselines. To confirm the effectiveness of our proposed self-supervised pre-training approach with tag-correcting strategies, we mainly select two baselines for comparison. In the one, we do not use the pre-training, but only fine-tune XLM-R on the training set of *HJQE*. In the other, we pre-train the model on the TER-based artificial QE dataset and then fine-tune it on the training set of *HJQE*.
Implementation and Evaluation. The QE
model is implemented based on an open-source framework, OpenKiwi6. We use the large-sized XLM-R model released by the hugging-face.7 We use the KenLM8to train the language model on all target sentences in the parallel corpus. For the tree-based annotation strategy, we obtain the constituent tree through LTP9(Che et al., 2010) for Chinese and through Stanza10 (Qi et al., 2020) for German. We set α to 1.0 and β to -3.0 based on the empirical results on the evaluation sets. 11 Following the WMT20 QE shared task, we use Matthews Correlation Coefficient (MCC) as the main metric and also report the F1 score (F) for OK, BAD and BAD spans. We refer the readers to Appendix A for implementation details.
## 4.1 Main Results
The results are shown in Table 2. We can observe that the TER-based pre-training only brings very limited performance gain or even degrade the performance when compared to the "FT on *HJQE*
only" setting (-1.47 for En-De and +0.53 for EnZh). It suggests that the inconsistency between TER-based and human annotations leads to the limited effect of pre-training. However, when applying the tag-correcting strategies to the pre-training dataset, the improvement is much more significant
(+2.85 for En-De and +2.24 for En-Zh), indicating that the tag correcting strategies mitigate such inconsistency, improving the effect of pre-training.
| Evaluate on → | MLQE-PE | HJQE | | | |
|-----------------------------------------|-----------|--------|-------|-------|-------|
| Fine-tune on ↓ | MCC* | MCC | F-BAD | MCC | F-BAD |
| WMT20's best | 59.28 | - | - | - | - |
| No pre-training (fine-tuning only) | | | | | |
| MLQE-PE | 58.21 | 46.81 | 75.02 | 22.49 | 34.34 |
| HJQE | 49.77 | 23.68 | 36.10 | 45.76 | 53.77 |
| TER-based pre-training | | | | | |
| w/o fine-tune | 56.51 | 33.58 | 73.85 | 11.38 | 27.41 |
| MLQE-PE | 61.85 | 53.25 | 78.69 | 21.93 | 33.75 |
| HJQE | 41.39 | 29.19 | 42.97 | 47.34 | 55.43 |
| Pre-training with tag refinement | | | | | |
| w/o fine-tune | 55.03 | 28.89 | 70.73 | 18.83 | 31.39 |
| MLQE-PE | 61.35 | 48.24 | 77.17 | 21.85 | 33.31 |
| HJQE | 39.56 | 25.06 | 67.40 | 47.61 | 55.22 |
| Pre-training with tree-based annotation | | | | | |
| w/o fine-tune | 55.21 | 26.79 | 68.11 | 20.98 | 32.84 |
| MLQE-PE | 60.92 | 48.58 | 76.18 | 22.34 | 34.13 |
| HJQE | 40.30 | 26.22 | 39.50 | 48.14 | 56.02 |
On the other hand, when only pre-training is applied, the tag-correcting strategies can also improve performance. It shows our approach can also be applied to the unsupervised setting, where no humanannotated dataset is available for fine-tuning.
Tag Refinement v.s. Tree-based Annotation.
When comparing two tag-correcting strategies, we find the tree-based annotation strategy is generally superior to the tag refinement strategy, especially for En-Zh. The MCC improves from 19.36 to 21.53 under the *pre-training only* setting and improves from 40.35 to 41.33 under the *pre-training then* fine-tuning setting. This is probably because the tag refinement strategy still requires the TER-based annotation and fixes based on it, while the treebased annotation strategy actively selects the wellformed constituents to apply phrase substitution and gets rid of the TER-based annotation.
Span-level Metric. Through the span-level metric (F-BAD-Span), we want to measure the unity and consistency of the model's prediction against human judgment. From Table 2, we find our models with tag correcting strategies also show higher F1 score on BAD spans (from 26.66 to 27.21 for En-Zh), while TER-based pre-training even do harm to this metric (from 26.66 to 25.93 for EnZh). This phenomenon also confirms the aforementioned fragmented issue of TER-based annotations, and our tag-correcting strategies, instead, improve the span-level metric by alleviating this issue.
| Scores | En-De | En-Zh | | |
|----------------|---------|---------|------|------|
| TER | Ours | TER | Ours | |
| 1 (terrible) | 3 | 1 | 5 | 0 |
| 2 (bad) | 36 | 16 | 34 | 6 |
| 3 (neutral) | 34 | 20 | 29 | 21 |
| 4 (good) | 26 | 61 | 24 | 59 |
| 5 (excellent) | 1 | 2 | 8 | 14 |
| Average score: | 2.86 | 3.47 | 2.96 | 3.81 |
| % Ours ≥ TER: | 89% | 91% | | |
## 4.2 Analysis
Comparison with MLQE-PE. To demonstrate the difference between the MLQE-PE and our *HJQE*
datasets, and analyze how the pre-training and finetuning influence the results on both datasets, we compare the performance of different models on MLQE-PE and *HJQE* respectively. The results for En-Zh are shown in Table 3. When comparing results in each group, we find that fine-tuning on the training set identical to the evaluation set is necessary for achieving high performance. Otherwise, fine-tuning provides marginal improvement (e.g.,
fine-tuning on MLQE-PE and evaluating on *HJQE*)
or even degrades the performance (e.g., fine-tuning on *HJQE* and evaluating on MLQE-PE). This reveals the difference in data distribution between HJQE and MLQE-PE. Besides, Our best model on MLQE-PE outperforms WMT20's best model
(61.85 v.s. 59.28) using the same MCC* metric, showing that the modeling ability of our model is strong enough even under the TER-based setting.
On the other hand, we compare the performance gain of different pre-training strategies. When evaluating on MLQE-PE, the TER-based pre-training brings higher performance gain (+6.44) than pretraining with two proposed tag correcting strategies (+1.43 and +1.77). While when evaluating on *HJQE*, the case is the opposite, with the TERbased pre-training bringing lower performance gain
(+1.58) than tree-based annotation (+2.38) strategies. In conclusion, the pre-training always brings performance gain, no matter evaluated on MLQEPE or *HJQE*. However, the optimal strategy depends on the consistency between the pre-training dataset and the downstream evaluation task.
Human Evaluation. To evaluate and compare the models pre-trained on TER-based tags and corrected tags more objectively, human evaluation is conducted for both models. For En-Zh and En-De, we randomly select 100 samples from the validation set and use two models to predict word-level tags for them. Then, the human translators (without participating the annotation process) are asked to give a score for each prediction, between 1 and 5, where 1 indicates the predicted tags are fully wrong, and 5 indicates the tags are fully correct. Table 4 shows the results. We can see that the model pretrained on corrected tags (Ours) achieves higher human evaluation scores than that pre-trained on TER-based tags. For about 90% of samples, the prediction of the model pre-trained on the corrected dataset can outperform or tie with the prediction of the model pre-trained on the TER-based dataset.
The results of the human evaluation show that the proposed tag-correcting strategies can make the TER-based annotation closer to human judgment.
The case study is also presented in Appendix C.
Limitation We analyze some samples that are corrected by our tag-correcting strategies and find a few bad cases. The main reasons are: 1) There is noise from the parallel corpus. 2) The alignment generated by FastAlign contains unexpected errors, making some entries in the phrase-level alignments missing or misaligned. 3) The scores given by KenLM, i.e., the perplexity changes, are sometimes not sensitive enough. We propose some possible solutions to the above limitations as our future exploration direction. For the noise in the parallel corpus, we can use parallel corpus filtering methods that filter out samples with low confidence. For the alignment errors, we may use more accurate neural alignment models (Lai et al., 2022).
## 5 Related Work
Early approaches on QE, such as QuEst (Specia et al., 2013) and QuEst++ (Specia et al., 2015),
mainly pay attention to feature engineering. They aggregate various features and feed them to machine learning algorithms. Kim et al. (2017) first propose the neural-based QE approach, called Predictor-Estimator. They first pre-train an RNNbased predictor on the large-scale parallel corpus that predicts the target word given its context and the source sentence. Then, they extract the features from the pre-trained predictor and use them to train the estimator for the QE task. This model achieves the best performance on the WMT17 QE
shard task. After that, many variants of PredictorEstimator are proposed (Fan et al., 2019; Moura et al., 2020; Cui et al., 2021; Esplà-Gomis et al.,
2019). Among them, Bilingual Expert (Fan et al.,
2019) replaces RNN with multi-layer transformers as the architecture of the predictor. It achieves the best performance on WMT18. Kepler et al.
(2019) release an open-source framework for QE,
called OpenKiwi, that implements the most popular QE models. Recently, with the development of pre-trained language models, many works select the cross-lingual language model as the backbone (Ranasinghe et al., 2020; Lee, 2020; Moura et al., 2020; Rubino and Sumita, 2020; Ranasinghe et al., 2021; Zhao et al., 2021). Many works also explore the joint learning or transfer learning of the multilingual QE task (Sun et al., 2020; Ranasinghe et al., 2020, 2021). Meanwhile, Fomicheva et al. (2021) propose a shared task with the newcollected dataset on explainable QE, aiming to provide word-level hints for sentence-level QE score.
Freitag et al. (2021b) also study multidimensional human evaluation for MT and collect a large-scale dataset for evaluating the metrics of MT. Additionally, Fomicheva et al. (2020b); Cambra and Nunziatini (2022) evaluate the translation quality from the features of the NMT systems directly.
The QE model can be applied to the post-editing process. Wang et al. (2020) and Lee et al. (2021)
use the QE model to identify which parts of the MT
sentence need to be corrected. Yang et al. (2021)
needs the QE model to determine error spans before giving translation suggestions.
## 6 Conclusion
In this paper, we focus on the task of word-level QE in machine translation and target the inconsistency issues between TER-based annotation and human judgment. We collect and release a benchmark dataset called *HJQE* which has identical source and MT sentences with the TER-based corpus and reflects the human judgment on the translation errors in MT sentences. Besides, we propose two tagcorrecting strategies, which make the TER-based annotations closer to human judgment and improve the final performance on the proposed benchmark dataset *HJQE*. We conduct thorough experiments and analyses, demonstrating the necessity of our proposed dataset and the effectiveness of our proposed approach. Our future directions include improving the performance of phrase-level alignment.
We hope our work will provide some help for future research on quality estimation.
## References
Steven Bird. 2006. Nltk: the natural language toolkit.
In Proceedings of the COLING/ACL 2006 Interactive Presentation Sessions, pages 69–72.
Jon Cambra and Mara Nunziatini. 2022. All you need is source! a study on source-based quality estimation for neural machine translation. In *Proceedings of the* 15th Biennial Conference of the Association for Machine Translation in the Americas (Volume 2: Users and Providers Track and Government Track), pages 210–220.
Wanxiang Che, Zhenghua Li, and Ting Liu. 2010. Ltp:
A chinese language technology platform. In *Coling* 2010: Demonstrations, pages 13–16.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Édouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440–
8451.
Qu Cui, Shujian Huang, Jiahuan Li, Xiang Geng, Zaixiang Zheng, Guoping Huang, and Jiajun Chen. 2021.
Directqe: Direct pretraining for machine translation quality estimation. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pages 12719–12727.
Chris Dyer, Victor Chahuneau, and Noah A Smith. 2013.
A simple, fast, and effective reparameterization of ibm model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644–648.
Miquel Esplà-Gomis, Felipe Sánchez-Martínez, and Mikel L. Forcada. 2019. Predicting insertion positions in word-level machine translation quality estimation. *Applied Soft Computing*, 76:174–192.
Kai Fan, Jiayi Wang, Bo Li, Fengming Zhou, Boxing Chen, and Luo Si. 2019. "bilingual expert" can find translation errors. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 33, pages 6367–6374.
Marina Fomicheva, Piyawat Lertvittayakumjorn, Wei Zhao, Steffen Eger, and Yang Gao. 2021. The eval4nlp shared task on explainable quality estimation: Overview and results. In Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP
Systems, pages 165–178.
Marina Fomicheva, Shuo Sun, Erick Fonseca, Frédéric Blain, Vishrav Chaudhary, Francisco Guzmán, Nina Lopatina, Lucia Specia, and André FT Martins.
2020a. Mlqe-pe: A multilingual quality estimation and post-editing dataset. *arXiv preprint* arXiv:2010.04480.
Marina Fomicheva, Shuo Sun, Lisa Yankovskaya, Frédéric Blain, Francisco Guzmán, Mark Fishel, Nikolaos Aletras, Vishrav Chaudhary, and Lucia Specia. 2020b. Unsupervised quality estimation for neural machine translation. *Transactions of the Association for Computational Linguistics*, 8:539–555.
Markus Freitag, George Foster, David Grangier, Viresh Ratnakar, Qijun Tan, and Wolfgang Macherey. 2021a. Experts, Errors, and Context: A Large-Scale Study of Human Evaluation for Machine Translation. *Transactions of the Association for Computational Linguistics*, 9:1460–1474.
Markus Freitag, George Foster, David Grangier, Viresh Ratnakar, Qijun Tan, and Wolfgang Macherey. 2021b.
Experts, errors, and context: A large-scale study of human evaluation for machine translation. arXiv preprint arXiv:2104.14478.
Markus Freitag, Ricardo Rei, Nitika Mathur, Chi-kiu Lo, Craig Stewart, George Foster, Alon Lavie, and Ondˇrej Bojar. 2021c. Results of the WMT21 metrics shared task: Evaluating metrics with expert-based human evaluations on TED and news domain. In *Proceedings of the Sixth Conference on Machine Translation*,
pages 733–774, Online. Association for Computational Linguistics.
Fabio Kepler, Jonay Trénous, Marcos Treviso, Miguel Vera, and André FT Martins. 2019. Openkiwi: An open source framework for quality estimation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 117–122.
Hyun Kim, Hun-Young Jung, Hongseok Kwon, JongHyeok Lee, and Seung-Hoon Na. 2017. Predictorestimator: Neural quality estimation based on target word prediction for machine translation. ACM
Transactions on Asian and Low-Resource Language Information Processing (TALLIP), 17(1):1–22.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A
method for stochastic optimization. *arXiv preprint* arXiv:1412.6980.
Siyu Lai, Zhen Yang, Fandong Meng, Yufeng Chen, Jinan Xu, and Jie Zhou. 2022. Cross-align: Modeling deep cross-lingual interactions for word alignment.
arXiv preprint arXiv:2210.04141.
Dongjun Lee. 2020. Two-phase cross-lingual language model fine-tuning for machine translation quality estimation. In Proceedings of the Fifth Conference on Machine Translation, pages 1024–1028.
Dongjun Lee, Junhyeong Ahn, Heesoo Park, and Jaemin Jo. 2021. Intellicat: Intelligent machine translation post-editing with quality estimation and translation suggestion. *arXiv preprint arXiv:2105.12172*.
Joao Moura, Miguel Vera, Daan van Stigt, Fabio Kepler, and André FT Martins. 2020. Ist-unbabel participation in the wmt20 quality estimation shared task.
In *Proceedings of the Fifth Conference on Machine* Translation, pages 1029–1036.
Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D Manning. 2020. Stanza: A python natural language processing toolkit for many human languages. *arXiv preprint arXiv:2003.07082*.
Tharindu Ranasinghe, Constantin Orasan, and Ruslan Mitkov. 2020. Transquest: Translation quality estimation with cross-lingual transformers. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 5070–5081.
Tharindu Ranasinghe, Constantin Orasan, and Ruslan Mitkov. 2021. An exploratory analysis of multilingual word-level quality estimation with cross-lingual transformers. *arXiv preprint arXiv:2106.00143*.
Raphael Rubino and Eiichiro Sumita. 2020. Intermediate self-supervised learning for machine translation quality estimation. In *Proceedings of the 28th International Conference on Computational Linguistics*,
pages 4355–4360.
Doug Semenick. 1990. Tests and measurements: The ttest. *Strength & Conditioning Journal*, 12(1):36–37.
Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In *Proceedings of the 7th Conference of the* Association for Machine Translation in the Americas: Technical Papers, pages 223–231.
Lucia Specia, Frédéric Blain, Marina Fomicheva, Erick Fonseca, Vishrav Chaudhary, Francisco Guzmán, and André F. T. Martins. 2020. Findings of the WMT
2020 shared task on quality estimation. In *Proceedings of the Fifth Conference on Machine Translation*,
pages 743–764, Online. Association for Computational Linguistics.
Lucia Specia, Gustavo Paetzold, and Carolina Scarton.
2015. Multi-level translation quality prediction with quest++. In *Proceedings of ACL-IJCNLP 2015 System Demonstrations*, pages 115–120.
Lucia Specia, Kashif Shah, José GC De Souza, and Trevor Cohn. 2013. Quest-a translation quality estimation framework. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 79–84.
Shuo Sun, Marina Fomicheva, Frédéric Blain, Vishrav Chaudhary, Ahmed El-Kishky, Adithya Renduchintala, Francisco Guzmán, and Lucia Specia. 2020. An exploratory study on multilingual quality estimation.
In *Proceedings of the 1st Conference of the AsiaPacific Chapter of the Association for Computational* Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 366–
377.
Yi-Lin Tuan, Ahmed El-Kishky, Adithya Renduchintala, Vishrav Chaudhary, Francisco Guzmán, and Lucia Specia. 2021. Quality estimation without humanlabeled data. In Proceedings of the 16th Conference
of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 619–625, Online. Association for Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in neural information processing systems*, pages 5998–6008.
Ke Wang, Jiayi Wang, Niyu Ge, Yangbin Shi, Yu Zhao, and Kai Fan. 2020. Computer assisted translation with neural quality estimation and auotmatic postediting. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing:
Findings, pages 2175–2186.
Zhen Yang, Fandong Meng, Yingxue Zhang, Ernan Li, and Jie Zhou. 2021. Wets: A benchmark for translation suggestion. *arXiv preprint arXiv:2110.05151*.
Mingjun Zhao, Haijiang Wu, Di Niu, Zixuan Wang, and Xiaoli Wang. 2021. Verdi: Quality estimation and error detection for bilingual corpora. In *Proceedings* of the Web Conference 2021, pages 3023–3031.
Yuanhang Zheng, Zhixing Tan, Meng Zhang, Mieradilijiang Maimaiti, Huanbo Luan, Maosong Sun, Qun Liu, and Yang Liu. 2021. Self-supervised quality estimation for machine translation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3322–3334, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
## A Implementation Details
In the pre-processing phase, we filter out parallel samples that are too long or too short, and only reserve sentences with 10-100 tokens. We pre-train the model on 8 NVIDIA Tesla V100 (32GB) GPUs for two epochs, with the batch size set to 8 for each GPU. Then we fine-tune the model on a single NVIDIA Tesla V100 (32GB) GPU for up to 10 epochs, with the batch size set to 8 as well. Early stopping is used in the fine-tuning phase, with the patience set to 20. We evaluate the model every 10% steps in one epoch. The pre-training often takes more than 15 hours and the fine-tuning takes 1 or 2 hours. We use Adam (Kingma and Ba, 2014)
to optimize the model with the learning rate set to 5e-6 in both the pre-training and fine-tuning phases.
For all hyper-parameters in our experiments, we manually tune them on the validation set of *HJQE*.
## B Main Results On The Validation Set
In Table 5, we also report the main results on the validation set of *HJQE*.
## C Case Study
In Figure 6, we show some cases from the validation set of the English-Chinese language pair.
From the examples, we can see that the TER-based model (noted as PE Effort Prediction) often annotates wrong BAD spans and is far from human judgment. For the first example, the MT sentence correctly reflects the meaning of the source sentence, and the PE is just a paraphrase of the MT
sentence. Our model correctly annotates all words as OK, while the TER-based one still annotates many BAD words. For the second example, the key issue is the translation of "unifies" in Chinese.
Though "统一" is the direct translation of "unifies" in Chinese, it can not express the meaning of winning two titles in the Chinese context. And our model precisely annotated the "统一 了" in the MT sentence as BAD. For the third example, the MT model fails to translate the "parsley" and the "sumac" to "欧芹" and "盐肤木" in Chinese, since they are very rare words. While the TERbased model mistakenly predicts long BAD spans, our model precisely identifies both mistranslated parts in the MT sentence.
| Model | English-German (En-De) | English-Chinese (En-Zh) | | | | | | |
|---------------------------------------|--------------------------|---------------------------|------------|-------|-------|-------|------------|-------|
| MCC | F-OK | F-BAD | F-BAD-Span | MCC | F-OK | F-BAD | F-BAD-Span | |
| Baselines | | | | | | | | |
| FT on HJQE only | 34.69 | 94.28 | 40.38 | 28.65 | 45.76 | 91.96 | 53.77 | 29.84 |
| PT (TER-based) | 13.13 | 37.30 | 18.80 | 4.72 | 11.38 | 25.91 | 27.41 | 2.16 |
| + FT on HJQE | 35.02 | 94.00 | 40.86 | 26.68 | 47.34 | 91.30 | 55.43 | 28.53 |
| With tag correcting strategies (ours) | | | | | | | | |
| PT w/ Tag Refinement | 13.26 | 52.43 | 19.78 | 6.42 | 18.83 | 53.29 | 31.39 | 3.48 |
| + FT on HJQE | 37.70 | 94.08 | 43.32 | 30.83 | 47.61 | 92.39 | 55.22 | 28.33 |
| PT w/ Tree-based Annotation | 13.92 | 84.79 | 22.75 | 9.64 | 20.98 | 59.32 | 32.84 | 6.53 |
| + FT on HJQE | 37.03 | 94.46 | 42.54 | 31.21 | 48.14 | 91.88 | 56.02 | 28.17 |
| PT w/ Both | 13.12 | 39.68 | 18.94 | 5.26 | 21.39 | 56.76 | 32.74 | 5.72 |
| + FT on HJQE | 38.90 | 94.44 | 44.35 | 32.21 | 48.71 | 90.74 | 56.47 | 25.51 |
Table 5: The word-level QE performance on the validation set of *HJQE* for two language pairs, En-De and En-Zh.
PT indicates pre-training and FT indicates fine-tuning.
| Source: To win, a wrestler must strip their opponent's tuxedo off. MT: 要 想 获胜 , 摔跤 运动员 必须 把 对手 的 礼服 脱下来 . MT Back: To win, the wrestler had to take his opponent's dress off. PE: 要 赢得 胜利 , 摔跤 运动员 必须 脱掉 对手 的 燕尾服 。 PE Back: To win the victory, the wrestler had to remove his opponent's tuxedo. TER-based: 要 想 获胜 , 摔跤 运动员 必须 把 对手 的 礼服 脱下来 . Ours: 要 想 获胜 , 摔跤 运动员 必须 把 对手 的 礼服 脱下来 . Source: April 28 Juan Díaz unifies the WBA and WBO Lightweight titles after defeating Acelino Freitas. MT: 4 月 28 日 , 胡安 · 迪亚斯 在 击败 阿 切利 诺 · 弗雷 塔斯 后 统一 了 WBA 和 WBO 轻量级 冠军 . MT Back: On April 28, Juan Díaz Unified the WBA and WBO lightweight titles after defeating Acelino Freitas. PE: 4 月 28 日 , Juan Díaz 在 击败 Acelino Freitas 之后 , 将 W 世界 拳击 协会 和 世界 拳击 组织 的 轻量级 冠军 揽于 一身 。 PE Back: On April 28, Juan Díaz won both the WBA and WBO lightweight titles after defeating Acelino Freitas. TER-based: 4 月 28 日 , 胡安 · 迪亚斯 在 击败 阿 切利 诺 · 弗雷 塔斯 后 统一 了 WBA 和 WBO 轻量级 冠军 . Ours: 4 月 28 日 , 胡安 · 迪亚斯 在 击败 阿 切利 诺 · 弗雷 塔斯 后 统一 了 WBA 和 WBO 轻量级 冠军 . Source: Fattoush is a combination of toasted bread pieces and parsley with chopped cucumbers, radishes, tomatoes and flavored by sumac. MT: 法杜什是 烤面包片 和 帕斯 莱 与 切碎 的 黄瓜 、 萝卜 、 西红柿 、 和 洋葱 以及 香味 的 消耗品 的 组合 。 MT Back: Fadush is a combination of toast and pasai with chopped cucumbers, radishes, tomatoes and onions and scented consumables. PE: Fattoush 是 烤面包片 和 欧芹 与 切碎 的 黄瓜 , 萝卜 , 西红柿 和 葱 的 组合 , 并 以 盐肤木 调味 。 PE Back: Fattoush is a combination of toast and parsley with chopped cucumbers, radishes, tomatoes and scallions, seasoned with rhus salt. TER-based: 法杜什是 烤面包片 和 帕斯 莱 与 切碎 的 黄瓜 、 萝卜 、 西红柿 、 和 洋葱 以及 香味 的 消耗品 的 组合 。 Ours: 法杜什是 烤面包片 和 帕斯 莱 与 切碎 的 黄瓜 、 萝卜 、 西红柿 、 和 洋葱 以及 香味 的 消耗品 的 组合 。 Figure 6: Examples of word-level QE from the validation set of English-Chinese language pair. |
|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Left blank.
A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Left blank. |
cui-etal-2023-pv2tea | {PV}2{TEA}: Patching Visual Modality to Textual-Established Information Extraction | https://aclanthology.org/2023.findings-acl.127 | Information extraction, e.g., attribute value extraction, has been extensively studied and formulated based only on text. However, many attributes can benefit from image-based extraction, like color, shape, pattern, among others. The visual modality has long been underutilized, mainly due to multimodal annotation difficulty. In this paper, we aim to patch the visual modality to the textual-established attribute in- formation extractor. The cross-modality integration faces several unique challenges: (C1) images and textual descriptions are loosely paired intra-sample and inter-samples; (C2) images usually contain rich backgrounds that can mislead the prediction; (C3) weakly supervised labels from textual-established ex- tractors are biased for multimodal training. We present PV2TEA, an encoder-decoder architecture equipped with three bias reduction schemes: (S1) Augmented label-smoothed contrast to improve the cross-modality alignment for loosely-paired image and text; (S2) Attention-pruning that adaptively distinguishes the visual foreground; (S3) Two-level neighborhood regularization that mitigates the label textual bias via reliability estimation. Empirical results on real-world e-Commerce datasets1 demonstrate up to 11.74{\%} absolute (20.97{\%} relatively) F1 increase over unimodal baselines. | # Pv2Tea: Patching Visual Modality To Textual-Established Information Extraction
Hejie Cui1∗, Rongmei Lin2, Nasser Zalmout2**, Chenwei Zhang**2, Jingbo Shang3, Carl Yang1**, Xian Li**2 1 Emory University, GA, USA
2 Amazon.com Inc, WA, USA
3 University of California, San Diego, CA, USA
{hejie.cui, j.carlyang}@emory.edu, [email protected]
{linrongm, nzalmout, cwzhang, xianlee}@amazon.com
## Abstract
Information extraction, e.g., attribute value extraction, has been extensively studied and formulated based only on text. However, many attributes can benefit from image-based extraction, like color, shape, pattern, among others.
The visual modality has long been underutilized, mainly due to multimodal annotation difficulty. In this paper, we aim to patch the visual modality to the textual-established attribute information extractor. The cross-modality integration faces several unique challenges: (C1)
images and textual descriptions are loosely paired intra-sample and inter-samples; (C2)
images usually contain rich backgrounds that can mislead the prediction; (C3) weakly supervised labels from textual-established extractors are biased for multimodal training.
We present PV2TEA, an encoder-decoder architecture equipped with three bias reduction schemes: (S1) Augmented label-smoothed contrast to improve the cross-modality alignment for loosely-paired image and text; (S2)
Attention-pruning that adaptively distinguishes the visual foreground; (S3) Two-level neighborhood regularization that mitigates the label textual bias via reliability estimation. Empirical results on real-world e-Commerce datasets1 demonstrate up to 11.74% absolute (20.97%
relatively) F1 increase over unimodal baselines.
## 1 Introduction
Information extraction, e.g., attribute value extraction, aims to extract structured knowledge triples, i.e., (*sample_id, attribute, value*), from the unstructured information. As shown in Figure 1, the inputs include text descriptions and images (optional)
along with the queried attribute, and the output is the extracted value. In practice, textual description has played as the main or only input in mainstream
∗Work was done when Hejie was an intern at Amazon.
1The code and the human-annotated datasets with finegrained source modality labels of gold values are available at https://github.com/HennyJie/PV2TEA.
![0_image_0.png](0_image_0.png)
Figure 1: Illustration of multimodal attribute extraction and the challenges in cross-modality integration.
approaches for automatic attribute value extraction
(Zheng et al., 2018; Xu et al., 2019; Wang et al.,
2020; Karamanolakis et al., 2020; Yan et al., 2021; Ding et al., 2022). Such models perform well when the prediction targets are inferrable from the text.
As the datasets evolve, interest in incorporating visual modality naturally arises, especially for image-driven attributes, e.g., Color, Pattern, *Item* Shape. Such extraction tasks rely heavily on visual information to obtain the correct attribute values.
The complementary information contained in the images can improve recall in cases where the target values are not mentioned in the texts. In the meantime, the cross-modality information can help with ambiguous cases and improve precision.
However, extending a single-modality task to multi-modality can be very challenging, especially due to the lack of annotations in the new modality. Performing accurate labeling based on multiple modalities requires the annotator to refer to multiple information resources, leading to a high cost of human labor. Although there are some initial explorations on multimodal attribute value extraction (Zhu et al., 2020; Lin et al., 2021; De la Comble et al., 2022), all of them are fully supervised and overlook the resource-constrained setting of building a multimodal attribute extraction framework based on the previous textual-established models. In this paper, we aim to patch the visual modality to attribute value extraction by leveraging textual-based models for weak supervision, thus reducing the manual labeling effort.
Challenges. Several unique challenges exist in visual modality patching: C1. Images and their textual descriptions are usually *loosely aligned* in two aspects: From the intra-sample aspect, they are usually weakly related considering the rich characteristics, making it difficult to ground the language fragments to the corresponding image regions; From the inter-samples aspect, it is commonly observed that the text description of one sample may also partially match the image of another. As illustrated in Figure 1, the textual description of the *mattress* product is fragmented and can also correspond to other images in the training data. Therefore, traditional training objectives for multimodal learning such as binary matching (Kim et al., 2021) or contrastive loss (Radford et al., 2021) that only treat the text and image of the same sample as positive pairs may not be appropriate. C2. Bias can be brought by the *visual input* from the *noisy contextual background*. The images usually not only contain the interested object itself but also demonstrate a complex background scene. Although the backgrounds are helpful for scene understanding, they may also introduce spurious correlation in a fine-grained task such as attribute value extraction, which leads to imprecise prediction (Xiao et al.,
2021; Kan et al., 2021). C3. Bias also exists in language perspective regarding the *biased weak* labels from textual-based models. As illustrated in Figure 1, the color label of *mattress* is misled by
'*green tea infused*' in the text. These noisy labels can be more catastrophic for a multimodal model due to their incorrect grounding in images. Directly training the model with these biased labels can lead to gaps between the stronger language modality and the weaker vision modality (Yu et al., 2021).
Solutions. We propose PV2TEA, a sequence-tosequence backbone composed of three modules:
visual encoding, cross-modality fusion and grounding, and attribute value generation, each with a bias-reduction scheme dedicated to the above challenges: S1. To better integrate the *loosely-aligned* texts and images, we design an augmented labelsmoothed contrast schema for cross-modality fusion and grounding, which considers both the intrasample weak correlation and the inter-sample potential alignment, encouraging knowledge transfer from the strong textual modality to the weak visual one. S2. During the visual encoding, we equip PV2TEA with an attention-pruning mechanism that adaptively distinguishes the distracting background and *attends to the most relevant regions* given the entire input image, aiming to improve precision in the fine-grained task of attribute extraction. S3. To mitigate the bias from *textual-biased weak labels*, a two-level neighborhood regularization based on visual features and previous predictions, is designed to emphasize trustworthy training samples while mitigating the influence of textual-biased labels. In this way, the model learns to generate more balanced results rather than being dominated by one modality of information. In summary, the main contributions of PV2TEA are three-fold:
- We propose PV2TEA, an encoder-decoder framework effectively patching up visual modality to textual-established attribute value extraction.
- We identify three unique challenges in patching visual modality for information extraction, with solutions for intra-sample and inter-samples loose alignment and bias from *complex visual* background and *textual-biased labels*.
- We release three human-annotated datasets with modality source labels of the gold values to facilitate fine-grained evaluation. Extensive results validate the effectiveness of PV2TEA.
## 2 Preliminaries 2.1 Problem Definition
We consider the task of automatic attribute extraction from multimodal input, i.e., textual descriptions and images. Formally, the input is a query attribute R and a text-image pairs dataset D = {Xn}
N
n=1 = {(In, Tn, cn)}
N
n=1 consisting of N samples (e.g., products), where In represents the profile image of Xn, Tn represents the textual description and cn is the sample category (e.g.,
product type). The model is expected to infer attribute value yn of the query attribute R for sample Xn. We consider the challenging setting with open-vocabulary attributes, where the number of candidate values is extensive and yn can contain either single or multiple values.
## 2.2 **Motivating Analysis On The Textual Bias Of** Attribute Information Extraction
Existing textual-based models or multimodal models directly trained with weak labels suffer from a strong bias toward the texts. As illustrated in Figure 1, the training label for the *color* attribute of the *mattress* is misled by '*green tea infused*'
![2_image_0.png](2_image_0.png)
from the textual profile. Models trained with such textual-shifted labels will result in a learning ability gap between modalities, where the model learns better from the textual than the visual modality.
To quantitatively study the learning bias, we conduct fine-grained source-aware evaluations on a real-world e-Commerce dataset with representative unimodal and multimodal methods, namely OpenTag (Zheng et al., 2018) with the classification setup and PAM (Lin et al., 2021). Specifically, for each sample in the test set, we collect the source of the gold value (i.e., text or image). Experiment results are shown in Figure 2, where label Source:
Text indicates the gold value is present in the text, while label *Source: Image* indicates the gold value is absent from the text and must be inferred from the image. It is shown that both the text-based unimodal extractor and multimodal extractor achieve impressive results when the gold value is contained in the text. However, when the gold value is not contained in the text and must be derived from visual input, the performance of all three metrics drops dramatically, indicating a strong textual bias and dependence of existing models.
## 3 Pv2Tea
We present the backbone architecture and three bias reduction designs of PV2TEA, shown in Figure 3.
The backbone is formulated based on visual question answering (VQA) composed of three modules:
(1) **Visual Encoding.** We adopt the Vision Transformer (ViT) (Dosovitskiy et al., 2021) as the visual encoder. The given image In is divided into patches and featured as a sequence of tokens, with a special token [CLS-I]appended at the head of the sequence, whose representation v cls nstands for the whole input image In.
(2) **Cross-Modality Fusion and Grounding.** Following the VQA paradigm, we define the question prompt as "What is the R *of the* cn?", with a special token [CLS-Q] appended at the beginning.
A unimodal BERT (Devlin et al., 2019) encoder is adopted to produce token-wise textual representation from sample profiles (title, bullets, and descriptions). The visual representations of P image patches vn = [v cls n, v 1n*, . . . ,* v Pn] are concatenated with the textual representation of T tokens tn = [t cls n, t 1n*, . . . ,* t Tn], which is further used to perform cross-modality fusion and grounding with the question prompt through cross-attention. The output qn = [q cls n, q 1n*, . . . ,* q Q
n] is then used as the grounded representation for the answer decoder.
(3) **Attribute Value Generation.** We follow the design from (Li et al., 2022a), where each block of the decoder is composed of a causal self-attention layer, a cross-attention layer, and a feed-forward network. The decoder takes the grounded multimodal representation as input and predicts the attribute value yˆn in a generative manner2.
Training Objectives. The overall training objective of PV2TEA is formulated as L = Lsc + Lct + Lr-mlm, (1)
where the three loss terms, namely augmented label-smoothed contrastive loss Lsc (Section 3.1),
category aware ViT loss Lct (Section 3.2), and neighborhood-regularized mask language modeling loss Lr-mlm (Section 3.3) correspond to each of the three prementioned modules respectively.
## 3.1 Augmented Label-Smoothed Contrast For Multi-Modality Loose Alignment (S1)
Contrastive objectives have been proven effective in multimodal pre-training (Radford et al., 2021)
by minimizing the representation distance between different modalities of the same data point while keeping those of different samples away (Yu et al., 2022). However, for attribute value extraction, the image and textual descriptions are typically *loosely* aligned from two perspectives: (1) *Intra-sample* weak alignment: The text description may not necessarily form a coherent and complete sentence, but a set of semantic fragments describing multiple facets. Thus, grounding the language to corresponding visual regions is difficult. (2) Potential inter-samples alignment: Due to the commonality of samples, the textual description of one sample may also correspond to the image of another. Thus, traditional binary matching and contrastive objectives become suboptimal for these loosely-aligned texts and images.
To handle the looseness of images and texts, we
![3_image_0.png](3_image_0.png)
augment the contrast to include sample comparison outside the batch with two queues storing the most recent M (M ≫ batch size B ) visual and textual representations, inspired by the momentum contrast in MoCo (He et al., 2020) and ALBEF (Li et al., 2021). For the *intra-sample weak alignment* of each given sample Xn, instead of using the onehot pairing label p i2t n
, we smooth the pairing target with the pseudo-similarity q i2t n
,
pe i2t n = (1 − α)p i2t n + αq i2t n , (2)
where α is a hyper-parameter and q i2t nis calculated by softmax over the representation multiplication of the [CLS] tokens, v
′cls nand t
′cls n, from momentum unimodal encoders F′v and F′t,
$$\mathbf{q}_{n}^{\mathrm{id}}=\sigma\left(\mathcal{F}^{\prime}\left(\mathcal{I}_{n}\right)^{\top}\mathcal{F}^{\prime}\left(\mathcal{T}_{n}\right)\right)=\sigma\left(\mathbf{v}_{n}^{\prime\mathrm{cls}^{\top}}\mathbf{t}_{n}^{\prime\mathrm{cls}}\right).\tag{3}$$ For _potential inter-samples pairing relations_, the
visual representation v
′cls nis compared with all textual representations T′in the queue to augment contrastive loss. Formally, the predicted image-totext matching probability of Xn is
$$\mathbf{d}_{n}^{\mathrm{i}2t}=\frac{\exp\left(\mathbf{v}_{n}^{\prime}\mathrm{cls}^{\top}\mathbf{T}_{m}^{\prime}/\tau\right)}{\sum_{m=1}^{M}\exp\left(\mathbf{v}_{n}^{\prime}\mathrm{cls}^{\top}\mathbf{T}_{m}^{\prime}/\tau\right)}.\tag{4}$$ With the smoothed targets from Equation (2), the
image-to-text contrastive loss Li2tis calculated as the cross-entropy between the smoothed targets pe i2t nand contrast-augmented predictions d i2t n
,
$$L_{i2t}=-\frac{1}{N}\left(\sum_{n=1}^{N}\widetilde{\mathbf{p}}_{n}^{i2t}\cdot\log\left(\mathbf{d}_{n}^{i2t}\right)\right),\tag{5}$$ and vice versa for the _text-to-image_ contrastive loss
Lt2i. Finally, the augmented label-smoothed contrastive loss Lsc is the average of these two terms, Lsc = (Li2t + Lt2i) /2. (6)
## 3.2 Visual Attention Pruning (S2)
Images usually contain not only the visual foreground of the concerned category but also rich background contexts. Although previous studies indicate context can serve as an effective cue for visual understanding (Doersch et al., 2015; Zhang et al., 2020; Xiao et al., 2021), it has been found that the output of ViT is often based on supportive signals in the background rather than the actual object (Chefer et al., 2022). Especially in a fine-grained task such as attribute value extraction, the associated backgrounds could distract the visual model and harm the prediction precision. For example, when predicting the color of *birthday* balloons, commonly co-occurring contexts such as *flowers* could mislead the model and result in wrongly predicted values.
To encourage the ViT encoder F focus on taskrelevant foregrounds given the input image In, we add a category-aware attention pruning schema, supervised with category classification,
$$L_{\text{ct}}=-\frac{1}{N}\left(\sum_{n=1}^{N}c_{n}\cdot\log\left(\mathcal{F}(\mathcal{I}_{n})\right)\right).\tag{7}$$ In real-world information extraction tasks, 'cate
In real-world information extraction tasks, 'category' denote classification schemas for organizing and structuring diverse data, exemplified by the broad range of product types in e-commerce, such as electronics, clothing, or books. These categories not only display vast diversity but also have distinct data distributions and properties, adding layers of complexity to the information extraction scenarios.
The learned attention mask M in ViT can gradually resemble the object boundary of the interested category and distinguishes the most important taskrelated regions from backgrounds by assigning different attention weights to the image patches (Selvaraju et al., 2017). The learned M is then applied on the visual representation sequences vn of the whole image, v pt n = vn ⊙ σ(M), (8)
to screen out noisy background and task-irrelevant patches before concatenating with the textual representation tn for further cross-modal grounding.
## 3.3 Two-Level Neighborhood-Regularized Sample Weight Adjustment (S3)
Weak labels from established models can be noisy and biased toward the textual input. Directly training the models with these labels leads to a learning gap across modalities. Prior work on self-training shows that embedding similarity can help to mitigate the label errors issue (Xu et al., 2023; Lang et al., 2022). Inspired by this line of work, we design a two-level neighborhood-regularized sample weight adjustment. In each iteration, sample weight s (Xn) is updated based on its label reliability, which is then applied to the training objective of attribute value generation in the next iteration,
$$\mathcal{L}_{\text{r-mlm}}=-\frac{1}{N}\left(\sum_{n=1}^{N}s\left(\mathcal{X}_{n}\right)\cdot g\left(y_{n},\hat{y}_{n}\right)\right),\tag{9}$$ where $g$ measures the element-wise cross entropy.
between the training label yn and the prediction yˆn.
As illustrated by the right example in Figure 3 3, where green arrows point to samples with the same training label as yn, and red arrows point to either visual or prediction neighbors, a higher consistency between the two sets indicates a higher reliability of yn, formally explained as below:
(1) Visual Neighbor Regularization. The first level of regularization is based on the consistency between the sample set with the same training label yn and visual feature neighbors of Xn. For each sample Xn with visual representation vn, we adopt the K-nearest neighbors (KNN) algorithm to find its neighbor samples in the visual feature space:
Nn = {Xn ∪ Xk ∈ KNN (vn, D*, K)*} , (10)
where KNN (vn, D, K) demotes K samples in D
with visual representation nearest to vn. Simultaneously, we obtain the set of samples in D with the same training label yj as that of the sample Xn, Yn =
Xn ∪ Xj ∈ Dyj=yn
. (11)
The reliability of sample Xn based on the visual 3See Appendix G for additional demo examples.
| Attr | # PT | Value Type | # Valid | # Train & Val | # Test |
|-----------|--------|--------------|-----------|-----------------|----------|
| Item Form | 14 | Single | 142 | 42,911 | 4,165 |
| Color | 255 | Multiple | 24 | 106,176 | 3,777 |
| Pattern | 31 | Single | 30 | 119,622 | 2,093 |
Table 1: Statistics of the attribute extraction datasets.
neighborhood regularization is sv(Xn) = |Nn ∩ Yn| /K. (12)
(2) Prediction Neighbor Regularization. The second level of regularization is based on the consistency between the sample set with the same training label and the prediction neighbors from the previous iteration, which represents the learned multimodal representation. Prediction regularization is further added after E epochs when the model can give relatively confident predictions, ensuring the predicted values are qualified for correcting potential noise. Formally, we obtain the set of samples in D whose predicted attribute value pj from the last iteration is the same as that of the sample Xn, Yˆn =
Xn ∪ Xj ∈ Dyˆj=ˆyn
. (13)
With the truth-value consensus set Yn from Equation (11), the reliability based on previous prediction neighbor regularization of the sample Xn is sp (Xn) =
Yˆn ∩ Yn
/
Yˆn ∪ Yn
. (14)
Overall, s(Xn) is initially regularized with visual neighbors and jointly with prediction neighbors after E epochs when the model predicts credibly,
$\hat{z}_{\mu}$
$$\mathbf{\Phi}(\mathbf{a})\mathbf{\Phi}(\mathbf{b})$$
s (Xn) = sv (Xn) *e < E,*
AVG (sv (Xn), sp (Xn)) e ≥ E.(15)
## 4 Experimental Setup 4.1 Dataset And Implementation Details
We build three multimodal attribute value extraction datasets by collecting profiles (title, bullets, and descriptions) and images from the public amazon.com web pages, where each dataset corresponds to one attribute R. The dataset information is summarized in Table 1, where **Attr** is the attribute R, **\# PT** represents the number of unique categories (i.e., product types), **Value Type** indicates whether yn contain single or multiple values, and **\# Valid** represents the number of valid values. To better reflect real-world scenarios, we use the attribute-value pairs from the product information section on web pages as weak training labels instead of highly processed data. We follow the same filtering strategy from prior text established work (Zalmout and Li, 2022) to denoise training data. For the testing, we manually annotate gold
| Type | Method | Dataset: Item Form | Dataset: Color | Dataset: Pattern | | | | | | |
|---------------|---------------|----------------------|------------------|--------------------|-------|-----------|--------|-------|-------|-------|
| Precision | Recall | F1 | Precision | Recall | F1 | Precision | Recall | F1 | | |
| OpenTagseq | 91.37 | 44.97 | 60.27 | 83.94 | 24.73 | 38.20 | 79.65 | 19.83 | 31.75 | |
| Unimodal | OpenTagcls | 89.40 | 51.67 | 65.49 | 81.13 | 28.61 | 42.30 | 78.10 | 24.41 | 37.19 |
| TEA | 82.71 | 60.98 | 70.20 | 67.58 | 47.80 | 55.99 | 60.87 | 37.40 | 46.33 | |
| ViLBERT | 75.97 | 65.67 | 70.45 | 60.22 | 51.12 | 55.30 | 60.10 | 40.52 | 48.40 | |
| LXMERT | 75.79 | 68.72 | 72.08 | 60.20 | 54.26 | 57.08 | 60.33 | 42.28 | 49.72 | |
| UNITER | 76.75 | 69.10 | 72.72 | 61.30 | 54.69 | 57.81 | 62.45 | 43.38 | 51.20 | |
| BLIP | 78.21 | 69.25 | 73.46 | 62.70 | 58.23 | 60.38 | 58.74 | 44.01 | 50.32 | |
| PAM | 78.83 | 74.35 | 76.52 | 63.34 | 60.43 | 61.85 | 61.80 | 44.29 | 51.60 | |
| Multimodal | PV2TEA w/o S1 | 80.03 | 72.49 | 76.07 | 71.00 | 58.41 | 64.09 | 60.03 | 45.59 | 51.82 |
| PV2TEA w/o S2 | 80.48 | 75.32 | 77.81 | 73.77 | 59.37 | 65.79 | 59.01 | 46.74 | 52.16 | |
| Ours | PV2TEA w/o S3 | 80.87 | 72.71 | 76.57 | 74.29 | 59.04 | 65.79 | 59.92 | 44.92 | 51.35 |
| PV2TEA | 82.46 | 75.40 | 78.77 | 77.44 | 60.19 | 67.73 | 62.10 | 46.84 | 53.40 | |
labels on the benchmark dataset to ensure preciseness. Besides, the label sources are marked down, indicating whether the attribute value is present or absent in the text, to facilitate fine-grained sourceaware evaluation. The human-annotated benchmark datasets will be released to encourage the future development of modality-balanced multimodal extraction models. See Appendix A for the implementation and computation details of PV2TEA.
## 4.2 Evaluation Protocol
We use Precision, Recall, and F1 score based on synonym normalized exact string matching. For single value type, an extracted value yˆn is considered correct when it exactly matches the gold value string yn. For multiple value type where the gold values for the query attribute R can contain multiple answers yn ∈
y 1n*, . . . , y*m n
, the extraction is considered correct when all the gold values are matched in the prediction. Macro-aggregation is performed across attribute values to avoid the influence of class imbalance. All reported results are the average of three runs under the best settings.
## 4.3 Baselines
We compare our proposed model with a series of baselines, spanning unimodal-based methods and multimodal-based ones. For unimodal baselines, OpenTag (Zheng et al., 2018) is considered a strong text-based model for attribute extraction.
OpenTagseq formulates the task as sequence tagging and uses the BiLSTM-CRF architecture with self-attention. OpenTagcls replaces the BiLSTM
encoder with a transformer encoder and tackles the task as classification. TEA is another text-only uni-
| Method | Gold Value Source | Precision | Recall | F1 | |
|------------|---------------------|-------------|----------|-------|-------|
| Text ✓ | 89.78 | 52.13 | 65.96 | | |
| OpenTagcls | Text ✗ | Image ✓ | 78.95 | 31.25 | 44.78 |
| GAP ↓ | 10.83 | 20.88 | 21.18 | | |
| Text ✓ | 79.16 | 74.53 | 76.78 | | |
| Text ✗ | Image ✓ | 66.67 | 58.33 | 62.22 | |
| PAM | GAP ↓ | 12.50 | 16.20 | 14.56 | |
| Text ✓ | 82.64 | 75.71 | 79.02 | | |
| Text ✗ | Image ✓ | 75.00 | 62.50 | 68.18 | |
| PV2TEA | GAP ↓ | 7.64 | 13.21 | 10.84 | |
modal generative model with the same architecture as PV2TEA but without the image patching, which is included to demonstrate the influence of the generation setting. For multimodal baselines, we consider discriminative encoder models, including ViLBERT (Lu et al., 2019), LXMERT (Tan and Bansal, 2019) with dual encoders, and UNITER (Chen et al., 2020) with a joint encoder. We also add generative encoder-decoder models for comparisons.
BLIP (Li et al., 2022a) adopts dual encoders and an image-grounded text decoder. PAM (Lin et al.,
2021) uses a shared encoder and decoder separated by a prefix causal mask.
## 5 Experimental Results 5.1 Overall Comparison
Table 2 shows the performance comparison of different types of extraction methods. It is shown that PV2TEA achieves the best F1 performance, especially compared to unimodal baselines, demonstrating the advantages of patching visual modality to this text-established task. Comparing the unimodal methods with multimodal ones, textualonly models achieve impressive results on precision while greatly suffering from low recall, which indicates potential information loss when the gold value is not contained in the input text. With the generative setting, TEA sort of mitigates the information loss and improves recall over OpenTag under the tagging and classification settings. Besides, adding visual information can further improve recall, especially for the multi-value attribute Color, where multimodal models can even double that of text-only ones. However, the lower precision performance of the multimodal models implies the challenges beneath cross-modality integration.
With the three proposed bias-reduction schemes, PV2TEA improves on all three metrics over multimodal baselines and balances precision and recall to a great extent compared with unimodal models. Besides the full PV2TEA, we also include three variants that remove one proposed schema at a time. It shows that the visual attention pruning module mainly helps with precision while the other two benefit both precision and recall, leading to the best F1 performance when all three schemes are equipped. We include several case studies in Section 5.3 for qualitative observation.
Source-Aware Evaluation. To investigate how the modality learning bias is addressed, we conduct fine-grained source-aware evaluation similarly to Section 2.2, as shown in Table 3 4. The performance gap between when the gold value is present or absent in the text is significantly reduced by PV2TEA
when compared to both unimodal and multimodal representative methods, which suggests a more balanced and generalized capacity of PV2TEA to learn from different modalities. When the gold value is absent in the text, our method outperforms OpenTagcls by more than twice as much on recall, and also outperforms on precision under various scenarios compared to the multimodal PAM.
## 5.2 Ablation Studies
Augmented Label-Smoothed Contrast. We look into the impact of label-smoothed contrast on both single- and multiple-value type datasets 5. Table
Method**Single Value Dataset Multiple Value Dataset**
P R F1 P R F1
w/o Lsc 80.03 72.49 76.07 71.00 58.41 64.09
w/o Smooth 81.42 74.41 77.76 75.06 59.99 66.68
PV2TEA 82.46 75.40 78.77 77.44 60.19 67.73
Table 4: Ablation study on the augmented labelsmoothed contrast for cross-modality alignment (%).
![6_image_0.png](6_image_0.png)
4 shows that removing the contrastive objective leads to a drop in both precision and recall. For the multiple-value dataset, adding the contrastive objective significantly benefits precision, suggesting it encourages cross-modal validation when there are multiple valid answers in the visual input. With label smoothing, the recall can be further improved.
This indicates that the augmented and smoothed contrast can effectively leverage the cross-modality alignment inter-samples, hence improving the coverage rate when making predictions.
In addition, we conduct cross-modality retrieval to study the efficacy of aligning objectives, i.e.,
binary matching and contrastive loss, for crossmodality alignment and the influence of the softness α, as shown in Figure 4. Across different datasets and metrics, the contrastive loss consistently outperforms the binary matching loss.
This consolidates our choice of contrasting objectives and highlights the potential benefits of labelsmoothing and contrast augmentation, given that both are neglected in a binary matching objective.
Retrieval performance under different smoothness values shows a trend of first rising and then falling.
We simply take 0.4 for α in our experiments.
Category Aware Attention Pruning. We study Table 5: Ablation study on the category supervised visual attention pruning (%).
| Method | Single Value Dataset | Multiple Value Dataset | | | | |
|---------------|------------------------|--------------------------|-------|-------|-------|-------|
| P | R | F1 | P | R | F1 | |
| w/o Lct | 80.48 | 75.32 | 77.81 | 73.77 | 59.37 | 65.79 |
| w/o Attn Prun | 80.61 | 75.49 | 77.97 | 74.60 | 59.42 | 66.15 |
| PV2TEA | 82.46 | 75.40 | 78.77 | 77.44 | 60.19 | 67.73 |
![7_image_0.png](7_image_0.png)
Figure 5: Visualization of learned attention mask with category (e.g., product type) aware ViT classification.
| Method | Single Value Dataset | Multiple Value Dataset | | | | |
|-------------|------------------------|--------------------------|-------|-------|-------|-------|
| P | R | F1 | P | R | F1 | |
| w/o NR | 80.87 | 72.71 | 76.57 | 74.29 | 59.04 | 65.79 |
| w/o Vis-NR | 81.87 | 73.54 | 77.48 | 77.07 | 59.99 | 67.47 |
| w/o Pred-NR | 81.81 | 73.18 | 77.25 | 76.71 | 59.44 | 66.98 |
| PV2TEA | 82.46 | 75.40 | 78.77 | 77.44 | 60.19 | 67.73 |
the influence of the category aware attention pruning, as shown in Table 5. The results imply that adding the category classification helps to improve precision performance without harming recall, and the learned attention mask can effectively highlight the foreground regions of the queried sample. Figure 5 presents several visualizations of the learned attention mask.
Neighborhood Regularization. We consider the influence of the two-level neighborhood regularization by removing the visual neighborhood regularization (Vis-NR), prediction neighborhood regularization (Pred-NR), or both (NR) from the full model. Results in Table 6 show all the metrics decrease when both regularizations are removed, indicating the validity of the proposed neighborhood regularized sample weight adjustment in mitigating the influence of hard, noisy samples. Besides, since the second-level prediction-based neighbor regularization is independent of the multimodal extraction framework, it can be incorporated flexibly into other frameworks as well for future usage.
Classification vs. Generation To determine which architecture is better for multimodal attribute value extraction, we compare the generation and classification settings for the module of the attribute
| Setting | D: Item Form | D: Color | D: Pattern | | | | | | |
|----------------|-------------------|-------------------|-------------------|-------|-------|-------|-------|-------|-------|
| P | R | F1 | P | R | F1 | P | R | F1 | |
| Classification | 79.93 | 70.47 | 74.90 | 72.21 | 50.18 | 59.21 | 59.08 | 42.16 | 49.21 |
| Generation | 82.46 75.40 78.77 | 77.44 60.19 67.73 | 62.10 46.84 53.40 | | | | | | |
Table 7: Attribute extraction performance comparison between the settings of classification and generation.
information extractor. The results are demonstrated in Table 7. It is shown that the setting of generation achieves significant advantages over classification. Especially on the recall performance for multi-value type attribute Color, where the gold value can be multiple, the improvement of recall can be up to 20% relatively. This indicates that the generation setting can extract more complete results from the multimodal input, leading to a higher coverage rate. Therefore, we choose the generation setting in the attribute value extraction module in the final architecture design of PV2TEA.
## 5.3 Case Study
![7_image_1.png](7_image_1.png)
![7_image_2.png](7_image_2.png)
Figure 6: Qualitatively case studies.
To qualitatively observe the extraction performance, we attach several case studies in Figure 6.
It shows that even when the attribute value is not contained in the text, PV2TEA can still perform the extraction reliably from images. In multiple value datasets such as Color, PV2TEA can effectively differentiate related regions and extract multiple values with comprehensive coverage.
## 6 Related Work
Attribute Information Extraction. Attribute extraction has been extensively studied in the literature primarily based on textual input. OpenTag (Zheng et al., 2018) formalizes it as a sequence tagging task and proposes a combined model leveraging bi-LSTM-CRF, and attention to perform end-to-end tagging. Xu et al. (2019) scales the sequence-tagging-based model with a global set of BIO tags. AVEQA (Wang et al., 2020) develops a question-answering model by treating each attribute as a question and extracting the best answer span from the text. TXtract (Karamanolakis et al., 2020) uses a hierarchical taxonomy of categories and improves value extraction through multitask learning. AdaTag (Yan et al., 2021) exploits an adaptive CRF-based decoder to handle multiattribute value extractions. Additionally, there have been a few attempts at multimodal attribute value extraction. M-JAVE (Zhu et al., 2020) introduces a gated attention layer to combine information from the image and text. PAM (Lin et al., 2021) proposes a transformer-based sequence-to-sequence generation model for multimodal attribute value extraction. Although the latter two use both visual and textual input, they fail to account for possible modality bias and are fully supervised.
Multi-modality Alignment and Fusion. The goal of multimodal learning is to process and relate information from diverse modalities. CLIP (Radford et al., 2021) makes a gigantic leap forward in bridging embedding spaces of image and text with contrastive language-image pretraining. ALBEF (Li et al., 2021) applies a contrastive loss to align the image and text representation before merging with cross-modal attention, which fits looselyaligned sample image and text. Using noisy picture alt-text data, ALIGN (Jia et al., 2021) jointly learns representations applicable to either visiononly or vision-language tasks. The novel VisionLanguage Pre-training (VLP) framework established by BLIP (Li et al., 2022a) is flexibly applied to both vision-language understanding and generation tasks. GLIP (Li et al., 2022b) offers a grounded language-image paradigm for learning semantically rich visual representations. FLAVA (Singh et al.,
2022) creates a foundational alignment that simultaneously addresses vision, language, and their interconnected multimodality. Flamingo (Alayrac et al., 2022) equips the model with in-context fewshot learning capabilities. SimVLM (Wang et al.,
2022b) is trained end-to-end with a single prefix language modeling and investigates large-scale weak supervision. Multi-way Transformers are introduced in BEIT-3 (Wang et al., 2022a) for generic modeling and modality-specific encoding.
## 7 Conclusion
In this work, we propose PV2TEA, a bias-mitigated visual modality patching-up model for multimodal information extraction. Specifically, we take attribution value extraction as an example for illustration. Results on our released sourceaware benchmarks demonstrate remarkable improvements: the augmented label-smoothed contrast promotes a more accurate and complete alignment for loosely related images and texts; the visual attention pruning improves precision by masking out task-irrelevant regions; and the neighborhoodregularized sample weight adjustment reduces textual bias by lowering the influence of noisy samples.
We anticipate the investigated challenges and proposed solutions will inspire future scenarios where the task is first established on the text and then expanded to multiple modalities.
## Limitations
There are several limitations that can be considered for future improvements: (1) In multimodal alignment and fusion, we only consider a single image for each sample, whereas multiple images can be available. A more flexible visual encoding architecture that can digest an indefinite number of input images can improve the visual information coverage; 2) The empirical results in this work focus on three attribute extraction datasets (i.e., item form, color, and pattern) that can clearly benefit from visual perspectives, while there are also various attribute types that rely more on the textual input.
Different traits of attributes may influence the preferred modalities during the modeling, which is out of scope for this work but serves as a natural extension of this study; 3) Currently there is no specific design to improve the efficiency based on the visual question answering architecture. It can be not scalable as the number of attributes increases.
There could be a dual-use regarding the attentionpruning mechanism, which can be a potential risk of this work that could arise and harm the result.
The attention-pruning mechanism encourages the model to focus on the task-relevant foreground on the given image selected with category supervision, which can improve the prediction precision given the input image is visually rich and contains noisy context background. While for some types of images, such as infographics, there may be helpful text information on the images or intentionally attached by providers. These additional texts may be overlooked by the attention-pruning mechanism, resulting in potential information losses. A possible mitigation strategy is to add an OCR component along with the visual encoder to extract potential text information from given images.
## Ethics Statement
We believe this work has a broader impact outside the task and datasets in the discussion. The studied textual bias problem in our motivating analysis and the potential of training a multimodal model with weakly-supervised labels from textestablished models are not restricted to a specific task. Also, it becomes common in the NLP domain that some tasks first established based on pure text input are expected to further include the consideration multimodal input. The discussion in this work can be generalized to a lot of other application scenarios. The proposed solutions for multimodal integration and modality bias mitigation are independent of model architecture, which we expect can be applied to other downstream tasks or inspire designs with similar needs.
Regarding the human annotation involved in this work, we create three benchmark datasets that are manually labeled by human laborers to facilitate the source-aware evaluation. The annotation includes both gold attribute value as well as label sources, i.e., image or text. The profiles and images are all collected based on the publicly accessible Amazon shopping website. We depend on internal qualityassured annotators with balanced demographic and geographic characteristics, who consent and are paid adequately based in the US. The data collection protocol is approved by the ethics review board.
We attach detailed human annotation instructions and usage explanations provided to the annotators in Appendix F for reference.
## Acknowledgements
We would like to thank Binxuan Huang and Yan Liang for their insightful advice and thank anonymous reviewers for their feedback. This work was partially supported by Amazon.com Services LLC,
internal funds by the Computer Science Department of Emory University, and the University Research Committee of Emory University.
## References
Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, et al. 2022. Flamingo: a visual language model for few-shot learning.
Hila Chefer, Idan Schwartz, and Lior Wolf. 2022. Optimizing relevance maps of vision transformers improves robustness. In *NeurIPS*.
Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. Uniter: Universal image-text representation learning. In *European conference on* computer vision, pages 104–120. Springer.
Aloïs De la Comble, Anuvabh Dutt, Pablo Montalvo, and Aghiles Salah. 2022. Multi-modal attribute extraction for e-commerce. arXiv preprint arXiv:2203.03441.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *NAACL*.
Yifan Ding, Yan Liang, Nasser Zalmout, Xian Li, Christan Grant, and Tim Weninger. 2022. Ask-and-verify:
Span candidate generation and verification for attribute value extraction. In *EMNLP*.
Carl Doersch, Abhinav Gupta, and Alexei A Efros. 2015.
Unsupervised visual representation learning by context prediction. In *ICCV*.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An image is worth 16x16 words: Transformers for image recognition at scale. In *ICLR*.
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. 2020. Momentum contrast for unsupervised visual representation learning. In *CVPR*.
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. In *ICML*.
Xuan Kan, Hejie Cui, and Carl Yang. 2021. Zero-shot scene graph relation prediction through commonsense knowledge integration. In *ECML PKDD*.
Giannis Karamanolakis, Jun Ma, and Xin Luna Dong.
2020. Txtract: Taxonomy-aware knowledge extraction for thousands of product categories. In ACL.
Wonjae Kim, Bokyung Son, and Ildoo Kim. 2021. Vilt:
Vision-and-language transformer without convolution or region supervision. In *International Conference on Machine Learning*, pages 5583–5594.
PMLR.
Hunter Lang, Aravindan Vijayaraghavan, and David Sontag. 2022. Training subset selection for weak supervision. In *NeurIPS*.
Junnan Li, Dongxu Li, Caiming Xiong, and Steven C. H.
Hoi. 2022a. BLIP: bootstrapping language-image pre-training for unified vision-language understanding and generation. In *ICML*.
Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu Hong Hoi. 2021. Align before fuse: Vision and language representation learning with momentum distillation.
NeurIPS.
Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, and Yuan Cao. 2022b. Simvlm: Simple visual language model pretraining with weak supervision.
Liunian Harold Li, Pengchuan Zhang, Haotian Zhang, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, et al.
2022b. Grounded language-image pre-training. In CVPR.
Rongmei Lin, Xiang He, Jie Feng, Nasser Zalmout, Yan Liang, Li Xiong, and Xin Luna Dong. 2021.
Pam: understanding product images in cross product category attribute extraction. In *SIGKDD*.
Ran Xu, Yue Yu, Hejie Cui, Xuan Kan, Yanqiao Zhu, Joyce Ho, Chao Zhang, and Carl Yang. 2023. Neighborhood-regularized self-training for learning with few labels. *AAAI*.
Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee.
2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. *Advances in neural information processing systems*, 32.
Jun Yan, Nasser Zalmout, Yan Liang, Christan Grant, Xiang Ren, and Xin Luna Dong. 2021. Adatag:
Multi-attribute value extraction from product profiles with adaptive decoding. In ACL.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. *NeurIPS*.
Yue Yu, Chenyan Xiong, Si Sun, Chao Zhang, and Arnold Overwijk. 2022. Coco-dr: Combating distribution shifts in zero-shot dense retrieval with contrastive and distributionally robust learning. *arXiv* preprint arXiv:2210.15212.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In *ICML*.
Yue Yu, Simiao Zuo, Haoming Jiang, Wendi Ren, Tuo Zhao, and Chao Zhang. 2021. Fine-tuning pretrained language model with weak supervision: A
contrastive-regularized self-training approach. In NAACL.
Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. 2022. Flava: A foundational language and vision alignment model. In CVPR.
Mengmi Zhang, Claire Tseng, and Gabriel Kreiman.
2020. Putting visual object recognition in context. In CVPR.
Guineng Zheng, Subhabrata Mukherjee, Xin Luna Dong, and Feifei Li. 2018. Opentag: Open attribute value extraction from product profiles. In KDD.
Hao Tan and Mohit Bansal. 2019. Lxmert: Learning cross-modality encoder representations from transformers. *arXiv preprint arXiv:1908.07490*.
Tiangang Zhu, Yue Wang, Haoran Li, Youzheng Wu, Xiaodong He, and Bowen Zhou. 2020. Multimodal joint attribute prediction and value extraction for ecommerce product. In *EMNLP*.
Qifan Wang, Li Yang, Bhargav Kanagal, Sumit Sanghai, D Sivakumar, Bin Shu, Zac Yu, and Jon Elsas. 2020.
Learning to extract attribute value from product via question answering: A multi-task approach. In KDD.
Wenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal, Owais Khan Mohammed, Saksham Singhal, Subhojit Som, et al. 2022a. Image as a foreign language: Beit pretraining for all vision and vision-language tasks.
arXiv preprint arXiv:2208.10442.
Kai Yuanqing Xiao, Logan Engstrom, Andrew Ilyas, and Aleksander Madry. 2021. Noise or signal: The role of image backgrounds in object recognition. In ICLR.
Huimin Xu, Wenting Wang, Xinnian Mao, Xinyu Jiang, and Man Lan. 2019. Scaling up open tagging from tens to thousands: Comprehension empowered attribute value extraction from product title. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 5214–
5223.
Ilya Loshchilov and Frank Hutter. 2016. Sgdr: Stochastic gradient descent with warm restarts. In *ICLR*.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *ICLR*.
Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. 2017. Grad-cam: Visual explanations from deep networks via gradient-based localization.
In *ICCV*.
Nasser Zalmout and Xian Li. 2022. Prototyperepresentations for training data filtering in weaklysupervised information extraction. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing.
## A Implementation Details
Our models are implemented with PyTorch (Paszke et al., 2019) and Huggingface Transformer library and trained on an 8 Tesla V100 GPU node. The model is trained for 10 epochs, where the Item Form dataset takes around 12 hours, the Color dataset takes about 32 hours, and the Pattern dataset needs around 35 hours to run on a single GPU. The overall architecture of PV2TEA consists of 361M
trainable parameters, where a ViTbase (Dosovitskiy et al., 2021) is used as the image encoder and initialized with the pre-trained model on ImageNet of 85M parameters, and the text encoder is initialized from BERTbase (Devlin et al., 2019) of 123M parameters. We use AdamW (Loshchilov and Hutter, 2019) as the optimizer with a weight decay of 0.05.
The learning rate of each parameter group is set using a cosine annealing schedule (Loshchilov and Hutter, 2016) with the initial value of 1e-5. The model is trained for 10 epochs, with both training and testing batch sizes of 8. The memory queue size M is set as 57600 and the temperature τ of in Equation 4 is set as 0.07. We performed a grid search for the softness α from [0, 0.2, 0.4, 0.6, 0.8]
and used the best-performed 0.4 for reporting the final results. The K for two-level neighborhood regularization is set at 10. The input textual description is cropped to a maximum of 100 words. The input image is divided into 30 by 30 patches. The hidden dimension of both the visual and textual encoders is set to 768 to produce the representations of patches, tokens, or the whole image/sequence.
The epoch E for adding the second-level prediction neighbor regularization to reliability score s (Xn)
is set as 2.
| Method | Gold Value Source | D: Color | D: Pattern | | | | |
|----------------------|---------------------|-------------------|-------------------|-------|-------|-------|-------|
| P | R | F1 | P | R | F1 | | |
| OpenTagcls | Text ✓ | 85.06 | 43.28 | 57.37 | 85.00 | 42.96 | 57.07 |
| Text ✗ Image ✓ 66.28 | 10.24 | 17.74 | 66.23 | 12.02 | 20.35 | | |
| GAP ↓ | 18.78 33.04 39.63 | 18.77 30.94 36.72 | | | | | |
| PAM | Text ✓ | 73.20 | 71.88 | 72.53 | 75.00 | 57.04 | 64.80 |
| Text ✗ Image ✓ 50.30 | 45.45 | 47.75 | 51.82 | 36.23 | 42.64 | | |
| GAP ↓ | 22.90 26.43 24.78 | 23.18 20.81 22.16 | | | | | |
| PV2TEA | Text ✓ | 81.74 | 74.25 | 77.82 | 71.19 | 61.25 | 65.85 |
| Text ✗ Image ✓ 71.89 | 47.19 | 56.98 | 54.48 | 37.26 | 44.25 | | |
| GAP ↓ | 9.85 | 27.06 20.84 | 16.71 23.99 21.59 | | | | |
Table 8: Fine-grained source-aware evaluation for the Color and Pattern datasets.
The source-aware evaluation of the Color and Pattern datasets is shown in Table 8. We can observe that similarly to the discussions in Section 5.1, compared with the baselines, the proposed PV2TEA
effectively mitigates the performance gap of F1 when the gold value is not contained in the text.
More specifically, we observed that compared with the unimodal method, PV2TEA mainly reduces the recall performance gap across modalities, while compared with the multimodal method, the reduction happens mainly in precision, which all corresponds to the weaker metrics for each type of method. This indicates the stronger generalizability and more balanced learning ability of PV2TEA.
## C Ablation Studies On Pattern Dataset
We further include the ablation results on the singlevalue type dataset Pattern for each proposed mechanism in Table 9, Table 10, and Table 11, respectively. The observations are mostly consistent with the discussion in section 5.2, where all three proposed mechanisms support improvements in the overall performance of F1. It is noted that the recall performance with attention-pruning drops a bit compared with that without. This may indicate potential information losses on the challenging dataset such as Pattern with only the selected foreground. We discuss this potential risk in detail in the Limitation section.
Table 9: Ablations on the augmented label-smoothed contrast for cross-modality alignment (%).
Table 10: Ablation study on the category supervised visual attention pruning (%).
| Method | Single Value Dataset: Pattern Precision Recall F1 | | |
|-------------------|-----------------------------------------------------|-------|-------|
| PV2TEA w/o Lsc | 60.03 | 45.59 | 51.82 |
| PV2TEA w/o smooth | 61.87 | 45.72 | 52.58 |
| PV2TEA | 62.10 | 46.84 | 53.40 |
Table 11: Ablations on the two-level neighborhoodregularized sample weight adjustment (%).
| Method | Single Value Dataset: Pattern Precision Recall F1 | | |
|--------------------|-----------------------------------------------------|-------|-------|
| PV2TEA w/o NR | 59.92 | 44.92 | 51.35 |
| PV2TEA w/o Vis-NR | 61.59 | 46.24 | 52.82 |
| PV2TEA w/o Pred-NR | 60.77 | 45.11 | 51.78 |
| PV2TEA | 62.10 | 46.84 | 53.40 |
| Method | Single Value Dataset: Pattern Precision Recall F1 | | |
|----------------------------|-----------------------------------------------------|-------|-------|
| PV2TEA w/o Lct & Attn Prun | 59.01 | 46.74 | 52.16 |
| PV2TEA w/o Attn Prun | 60.14 | 46.98 | 52.75 |
| PV2TEA | 62.10 | 46.84 | 53.40 |
Type **(!):** makeup **Type** (!): makeup Type **(!):** steak **Type** (!): grain
![12_image_0.png](12_image_0.png)
Type **(!):** mattress Type **(!):** chair Type **(!):** Mug Type **(!):**
![12_image_1.png](12_image_1.png)
Product Type **(!):** shirt Product Type **(!):** scarf Product **Type** (!): tights Product Type **(!):** shirt
![12_image_2.png](12_image_2.png)
## D Retrieval Ablation On Pattern Dataset
Similar to Figure 4, we also demonstrate the crossmodality retrieval results on the pattern dataset in Figure 8. The conclusion is consistent with our observations mentioned in Section 5.2, where the contrastive objective demonstrates advantages in cross-modal alignment and fusion, and the best smoothness choice peaks at 0.4.
## E Visualizations Of Attention Pruning
Examples of visualization on the learned attention mask are demonstrated in Figure 7. It is observed that the visual foreground is highlighted under the supervision of category classification, which potentially encourages a higher prediction precision for fine-grained tasks like attribute extraction, as proved by the experimental results.
## F Human Annotation Instruction
We create source-aware fine-grained datasets with internal human annotators. Below are the instruction texts provided to annotators: The annotated attribute values are used for research model development of multimodal attribute information extraction and fine-grained error analysis. The datasets are named source-aware multimodal attribute extraction evaluation benchmarks and will be released to facilitate public testing and future studies in bias-reduced multimodal attribute value extraction model designs. All the given sample profiles (title, bullets, and descriptions) and images are collected from the public amazon.com web pages, so there is no potential legal or ethical risk for annotators. Specifically, the annotation requirements compose two tasks in order: (1) Firstly, for each given sample_id in the given ASINs set, first determine the category of the sample by referring to ID2Category.csv mapping file, then label the gold value for the queried attribute by selecting from the candidates given the category. The annotation answer candidates for the Item Form dataset can be referred to in Table 12. Note that this gold value annotation step requires reference to both sample textual titles, descriptions, and images;
(2) For each annotated ASIN, mark down which modality implies the gold value with an additional source label, with different meanings as below:
- 0: *the gold attribute value can be found in text.*
- 1: *the gold attribute value cannot be inferred* from the text but can be found in the image.
The annotated attribute values and source labels are assembled in fine-grained source-aware evaluation.
## G Neighborhood Regularization Demos
We provide two more demo examples for illustrating the two-level neighborhood-regularized sample weight adjustment in Figure 9. The example on the left demonstrates a higher consistency between the green arrows (which point to samples with the same training label as yn) and red arrows (which point
| face shaping makeup | powder, pencil, cream, liquid, stick, oil, spray, gel, cushion, blush, drop, balm, gloss |
|---------------------------|------------------------------------------------------------------------------------------------------------------------------------|
| herb | powder, root, leaf, thread, flake, seed, tea bag, stick, oil, slice, pod, ground, bean, paste |
| honey | jelly, capsule, lozenge, candy, cream, powder, granule, flake, liquid, stick, oil, crystal, butter, drop, syrup, comb |
| insect repellent | wipe, spray, band, granular, liquid, stick, candle, coil, oil, lotion, gel, capsule, tablet, powder, balm, patch, roll on |
| sauce | puree, jelly, paste, seed, liquid, gravy, ground, oil, powder, cream |
| skin cleaning agent | powder, capsule, toothpaste, wipe, cream, spray, mousse, bar, flake, liquid, lotion, gel, serum, mask, ground, balm, paste, foam |
| skin foundation concealer | powder, pencil, cream, mousse, liquid, stick, oil, lotion, spray, cushion, gel, drop, serum, balm, airbrush |
| sunscreen | wipe, cream, spray, mousse, liquid, ointment, stick, fluid, oil, lotion, milk, compact, gel, drop, serum, powder, balm, foam, mist |
![13_image_0.png](13_image_0.png)
![13_image_1.png](13_image_1.png)
![13_image_2.png](13_image_2.png)
to k-nearest neighbor samples in visual feature and previous prediction space), indicating a higher reliability of yn. Thus the sample weight of Xn will be increased in the next training epoch. In contrast, the training label neighbors and visual/prediction neighbors of the right example show a large inconsistency, which implies a relatively lower reliability of yn. Therefore, the sample weight s (Xn) of the right Xn will be degraded in the next epoch. This regularization process adjusts the sample weights of all the training samples in each epoch.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
In the first paragraph of the dedicated section titled "Limitations" at the end of the paper, after the conclusion section and before the references.
✓ A2. Did you discuss any potential risks of your work?
In the second paragraph of the dedicated section titled "Limitations" at the end of the paper, after the conclusion section and before the references.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1 Introduction.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4 And Appendix A.
✓ B1. Did you cite the creators of artifacts you used?
Section 4 and Appendix A.
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section 4.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 4 and Appendix A.
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Section 4 and Appendix G.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 4 and Appendix G.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 4.
## C ✓ **Did You Run Computational Experiments?** Appendix A Implementation Details.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix A Implementation Details.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix A Implementation Details.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 5.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix A Implementation Details.
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Section 4 and the dedicated section titled "Ethics Statement" at the end of the paper, after the conclusion section and before the references.
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Appendix G.
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
In the dedicated section titled "Ethics Statement" at the end of the paper, after the conclusion section and before the references.
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Appendix G and the dedicated section titled "Ethics Statement" at the end of the paper, after the conclusion section and before the references.
✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
In the dedicated section titled "Ethics Statement" at the end of the paper, after the conclusion section and before the references.
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
In the dedicated section titled "Ethics Statement" at the end of the paper, after the conclusion section and before the references. |
chen-etal-2023-structural | Structural Contrastive Pretraining for Cross-Lingual Comprehension | https://aclanthology.org/2023.findings-acl.128 | To present, multilingual language models trained using various pre-training tasks like mask language modeling (MLM) have yielded encouraging results on a wide range of downstream tasks. Despite the promising performances, structural knowledge in cross-lingual corpus is less explored in current works, leading to the semantic misalignment. In this paper, we propose a new pre-training task named Structural Contrast Pretraining (SCP) to align the structural words in a parallel sentence, enhancing the models{'} ability to comprehend cross-lingual representations. Concretely, each structural word in source and target languages is regarded as a positive pair in SCP. Since contrastive learning compares positive and negative pairs, an increase in the frequency of negative pairings could enhance the performance of the resulting model. Therefore, we further propose Cross-lingual Momentum Contrast (CL-MoCo) to increase the number of negative pairs by maintaining a large size of the queue. CL-MoCo extends the original Moco approach into cross-lingual training and jointly optimizes the source-to-target language and target-to-source language representations, resulting in a more suitable encoder for cross-lingual transfer. We conduct extensive experiments to validate the proposed approach on three cross-lingual tasks across five datasets such as MLQA, WikiAnn, etc, and results prove the effectiveness of our method. | Structural Contrastive Pretraining for Cross-Lingual Comprehension Nuo Chen1, Linjun Shou2, Tengtao Song3, Ming Gong2**, Jian Pei**4 Jianhui Chang3, Daxin Jiang2**, Jia Li**1∗
1Hong Kong University of Science and Technology (Guangzhou),
Hong Kong University of Science and Technology 2STCA, Microsoft, Beijing, 3Peking University, China 4 Duke University, USA
[email protected], [email protected]
## Abstract
Multilingual language models trained using various pre-training tasks like mask language modeling (MLM) have yielded encouraging results on a wide range of downstream tasks. Despite the promising performances, structural knowledge in cross-lingual corpus is less explored in current works, leading to the semantic misalignment. In this paper, we propose a new pre-training task named Structural Contrast Pretraining (SCP) to align the structural words in a parallel sentence, improving the models' linguistic versatility and their capacity to understand representations in multilingual languages. Concretely, SCP treats each structural word in source and target languages as a positive pair. We further propose Crosslingual Momentum Contrast (CL-MoCo) to optimize negative pairs by maintaining a large size of the queue. CL-MoCo extends the original MoCo approach into cross-lingual training and jointly optimizes the source-to-target language and target-to-source language representations in SCP, resulting in a more suitable encoder for cross-lingual transfer learning. We conduct extensive experiments and prove the effectiveness of our resulting model, named XLM-SCP, on three cross-lingual tasks across five datasets such as MLQA, WikiAnn. Our codes are available at https://github.
com/nuochenpku/SCP.
## 1 Introduction
Following the promising results of the pre-training paradigm in the monolingual natural language domain, the efforts of multilingual pre-trained language models (xPLMs) (Huang et al., 2019; Liang et al., 2020; Conneau et al., 2019; Chi et al., 2021a; Chen et al., 2022) have been proposed rapidly.
In general, these xPLMs are always trained on large-scale multilingual corpora using various pretraining language modeling tasks, such as MLM
∗Corresponding Author
![0_image_0.png](0_image_0.png)
(Devlin et al., 2018; Lan et al., 2020), NSP (Pires et al., 2019), CLISM (Chen et al., 2022), and TRTD
(Chi et al., 2021c). In this manner, xPLMs acquire robust contextually relevant representations and, as a result, excel at a variety of downstream tasks, like question answering (Hermann et al., 2015; He et al., 2018; Chen et al., 2021a) and name entity recognition (Liang et al., 2021). For instance, Chen et al. (2022) propose to train xPLMs with CLISM
and MLM, achieving remarkable performances in multilingual sequence labeling tasks (Huang et al., 2019; Lewis et al., 2020; Artetxe et al., 2019a).
Although these pre-training tasks help xPLMs learn promising multilingual contextualized representations at hierarchical level (i.e., token or sentence-level) (Li et al., 2022a), they don't take structural knowledge into consideration. One obvious limitation of the above approaches is the semantic misalignment between structural words from different languages, leading to a bias in the understanding of the multilingual representations.
We showcase the parallel sentences in English and German in Figure 1 that are quite different in the syntax structure. The main components of this sentence are "Ebydos AG" (subject), "founded" (verb),
"subsidiary" (object) and "Wroclaw" (entity). Unfortunately, as one of the current state-of-the-art xPLMs: XLM-Roberta (XLM-R) (Conneau et al.,
2019) is incapable of capturing the alignment of these crucial words in German, leading to semantic deviation. Specifically, XLM-R pays less attention to the corresponding words of "founded" and "subsidiary" in German due to the sentence structure barrier between these two languages.
One step further, from the perspective of human behavior, when a language learner reads a sentence in another language, it can help him/her understand this sentence quickly and accurately by pointing out the structural words in a sentence, including subject, verb, object and entities. This effect will be more noticeable when the sentence is lengthy and complex. Similarly, by providing the extra clues of aligned crucial/informative words in the parallel sentence, the model can benefit from a closer gap of cross-lingual representations.
Motivated by the above factors, we design a Structural Contrastive Pretraining (SCP) task to enhance xPLMs' comprehension ability via contrastive learning, bridging the misalignment between structural words in a parallel corpus. Considering the facts that subject, verb, object (S-V-O)
are the backbone of a sentence and aligned entities in cross-lingual parallel sentences convey coreference and information short-cuts (Chen et al., 2022),
in this work, we consider **S-V-O** and **entities** as the structural words in a sentence, which are all insightful or crucial. Concretely, we divide the parallel corpus into a number of smaller groups. Each sub-group has two versions of the same sentence, one in the source language (high resource) and one in the target language (low resource). Each structural word in the source and target languages is considered as a positive pair.
Due to the nature of contrastive learning, wherein comparisons are made between positive and negative pairs, an increase in the number of negative pairings may potentially improve performances of the resulting model (Chen et al., 2020).
Inspired by momentum contrast in computer vision
(He et al., 2020), we keep a queue and employ the encoded embeddings from the previous mini-batch to increase the quantity of negative pairs. In this method, momentum contrast employs a pair of fast and slow encoders to encode the source language sentences and target language sentences, separately.
And the fast encoder is saved for fine-tuning on down-stream datasets. However, directly applying this approach to cross-lingual pre-training could lead to another problem: As the fast encoder only sees the source language during pre-training, the training becomes insensitive to other target languages. As a consequence, the resulting model may underperform on cross-lingual transfer. To address this issue, we creatively incorporate the original momentum contrast into the cross-lingual setting, naming it Cross-lingual Momentum Contrast (short for CL-MoCo). Specifically, CL-MoCo utilizes two pairs of fast/slow encoders to jointly optimize source-to-target language and target-tosource language representations, further bridging the cross-lingual gap. In light of the fact that almost all down-stream cross-lingual understanding tasks only need one encoder, the two fast encoders share parameters in our pre-training.
Based on the above two proposed strategies for building positive and negative pairs in SCP, our resulting model XLM-SCP can accurately capture the alignment of sentence structures across different languages, improving the performances on crosslingual understanding tasks. As seen in Figure 1
(b), ours successfully grasp the correspondence between sentence verbs ("founded"-"gegründet") and objects ("subsidiary"-"Ableger") in English and German. We conduct experiments with two different xPLMs encoders on three multilingual tasks to test the effectiveness of our approach: Name Entity Recognition (NER) (Sang, 2002; Pan et al.,
2017), Machine Reading Comprehension (MRC)
(Lewis et al., 2020; Artetxe et al., 2019b) and Partof-Speech Tagging (POS) (Zeman et al., 2019). Extensive results show our method can improve the baseline performances across 5 datasets in terms of all evaluated metrics. For example, ours initialize from XLM-R improves the baselines from 61.35%
to 63.39% on WikiAnn dataset (Pan et al., 2017).
In general, our contributions can be summarized as follows:
- We observe that misalignment of the informative and crucial structural words occurs in xPLMs, and design a new pre-trained task called SCP to alleviate this problem.
- We propose CL-MoCo via keeping a large queue to increase the amount of negative pairings via momentum updating, which pushes the model toward more nuanced learning in cross-lingual.
- We conduct extensive experiments on different tasks, demonstrating the effectiveness of our approaches.
## 2 Related Work
Multilingual Pre-trained Language Models To date, transformer-based large-scale PLMs have become the standard in natural language processing and generation (Devlin et al., 2018; Liu et al., 2019; Lan et al., 2020; Sun et al., 2020). Currently, more and more communities are working to bring PLMs into the actual world of various languages (xPLMs),
and several efforts have been proposed such as XLM-Roberta (Conneau et al., 2019) (short for XLM-R), info-XLM (Chi et al., 2021a), CLISM
(Chen et al., 2022). These works are pre-trained on a large multilingual corpus with token-level or sentence-level pre-training tasks. Despite their promising performances in multiple down-stream tasks, they all don't explicitly consider structural knowledge in the parallel corpus.
Contrastive Learning As a result of its potential to improve upon existing methods for learning effective representations, contrastive learning
(Hadsell et al., 2006) has gained popularity in recent years. It works by grouping representations that are semantically close together (*positives*) in an embedding space and then pushing apart others (*negatives*) that are not neighbors. Contrastive learning objective has been particularly successful in different contexts of natural language processing (Gao et al., 2021; Wu et al., 2020). Moreover, several efforts (Chen et al., 2021a, 2022; Gao et al., 2021; Chen et al., 2021b; You et al., 2021; You et al.; Chen et al., 2023b,a) are well-designed for cross-lingual language understanding. For instance, Liang et al. (2022) proposed multi-level contrastive learning towards cross-lingual spoken language understanding. Chen et al. (2022) employed contrastive learning to learn noise-invariant representation from multilingual corpora for downstream tasks. Different from previous works, we utilize contrastive learning to learn the alignments of the structural words (Tang et al., 2023; Li et al.,
2022b), leading to a more comprehensive and accurate understanding on the cross-lingual sentence.
Momentum Contrast Recently, several works
(Yang et al., 2021; Wu et al., 2022) have explored momentum contrast in natural language understanding tasks, such as sentence representation and passage retrieval. Specifically, Yang et al. (2021) propose xMoCo to learn a dual-encoder for querypassage matching via two pairs of fast/slow encoders. Although we share a similar topic on momentum contrast, our research questions, application areas, and methods differ. xMoco are designed for query-matching tasks while our proposed CLMoCo is tailored for cross-lingual representation learning. Moreover, Yang et al. (2021) employs two different encoders for query and passage, separately. However, we share parameters of the two fast encoders in our training. At last, we focus on the representation learning of cross-lingual transfer, but they only take monolingual into consideration.
Recent works Recently, several works (Schuster et al., 2019; Pan et al., 2021; Chi et al., 2021b; Ouyang et al., 2021) also focus on word alignment for multilingual tasks. For clarity, we list some key differences: All of them align each token in the parallel corpus in an "all-to-all" fashion, but we only consider structural words like S-V-O via contrastive learning. The motivations are: (1) In our pilot analysis and experiments, we have two different settings in the proposed SCP: a. training the model with only structural words; b. training the model with all tokens in the sentences. Experimentally, we observe that they achieve comparable performances on MRC tasks but the latter achieves slightly worse results on NER tasks. This is due to the fact that aligning some words with no precise meaning like stopwords may have visible side effects on token-level tasks like NER. (2) Futhermore, the latter could result in more computation cost than the current method. (3) From a human perspective, structural words are the backbone of each sentence, and a solid grasp of them is sufficient to strengthen the management of the majority of situations.
## 3 Methodology
In this section, we first illustrate our proposed Structural Contrastive Pretraining (SCP) in detail.
Then we introduce how to incorporate our method with momentum contrast. Due to the fact that our proposed methods are flexible and can be built on top of any xPLMs, we leverage E to represent a series of pre-trained language models, where E could be the E*fast* in Section 3.2. We aim at enhancing E's ability to capture consistency between parallel structural representations via SCP. The overview of our approach is illustrated in Figure 2.
## 3.1 Structural Contrastive Pretraining
Definition To bridge the misalignment between structural words from different languages, we for-
![3_image_0.png](3_image_0.png)
mulate a new pre-training task named Structural Contrastive Pretraining (SCP) from the unlabeled data. In this part, we introduce how to collect the structural words in the inputs. Given a source language input sentence s sand its target language counterpart s t, we start by using current online name entity recognition tools (e.g., Spacy) to select structural words in the source language, including the subject, verb, object, and entities in the sentence1. As some extracted words are illogical due to the performance limitations of commercially available NER tools, these uninformative words could result in sub-optimizing the model during pre-training. Hence, we follow (Chen et al., 2022) to filter out some uninformative spans:
- Any spans that include solely stop words will be eliminated.
- Selected structural words should not include any punctuation.
- The maximum sequence length of an entity is limited to 6.
As the translation of the same phrase may vary when it is entered independently or combined with a full sentence, we utilize an off-the-shelf alignment tool, GIZA++ (Pei et al., 2020) to align the 1If the extracted words of one sentence are none, we would remove it corresponding ones of the selected structural words in the target language. As a result, we can get structural words W
s = {w s1
, w s2
, ..., w s k} in s sand their counterparts W
t = {w t1
, w t2
, ..., w t k} in s t. Notice that the length of k could be more than 4 when there are multiple entities in the sentence.
Pre-training It is essential to obtain the representations of each word from W
sand W
tin SCP.
Before going further, we first formulate the input sequences as:
$$\mathbf{X}^{s}=\left\{\,[\,{\mathsf{C L S}}\,]\,\mathbf{s}^{s}\,[\,{\mathsf{S E P}}\,]\,\right\}$$ $$\mathbf{X}^{t}=\left\{\,[\,{\mathsf{C L S}}\,]\,\mathbf{s}^{t}\,[\,{\mathsf{S E P}}\,]\,\right\}$$
(1) (2) $\frac{1}{2}$ (3) $\frac{1}{2}$ (4) $\frac{1}{2}$ (5) $\frac{1}{2}$ (6) $\frac{1}{2}$ (7) $\frac{1}{2}$ (8) $\frac{1}{2}$ (9) $\frac{1}{2}$ (10) $\frac{1}{2}$ (11) $\frac{1}{2}$ (12) $\frac{1}{2}$ (13) $\frac{1}{2}$ (14) $\frac{1}{2}$ (15) $\frac{1}{2}$ (16) $\frac{1}{2}$ (17) $\frac{1}{2}$ (18) $\frac{1}{2}$
$\operatorname{cms}\lambda_{\operatorname{rad}}$
where [CLS] and [SEP] denote the special beginning and separated tokens. Xsand Xtrefer to the input sequences in source and target languages, respectively.
Then we can pass Xsand Xtinto the E, producing contextualized representations of each token in the sequences:
$${\mathcal{H}}^{s}={\mathcal{E}}(\mathbf{X}^{s})\qquad{\mathcal{H}}^{t}={\mathcal{E}}(\mathbf{X}^{t})$$
$$({\mathfrak{I}}{\mathfrak{J}})$$
where Hs ∈ Rl×d, Ht ∈ Rl×d, l and d represent the max sequence length and hidden size, separately. Subsequently, for each word w s i ∈ W
s, where i ∈ [1,k], we obtain its representation Hs i from Hs.
Similarly, we can get its positive pair representation Ht i from Ht. Notice that we can not directly 2045 employ Hs i and Ht i in our SCP because w s i and w t i may produce multiple sub-tokens after tokenization.
Therefore, we apply extra aggregation function F
on Hs i and Ht i to obtain the final representations:
$$\mathbf{r}_{i}^{s}={\mathcal{F}}({\mathcal{H}}_{i}^{s})\qquad\mathbf{r}_{i}^{t}={\mathcal{F}}({\mathcal{H}}_{i}^{t})\qquad\qquad(4)$$
where F refers to the average pooling of the beginning and ending tokens representations of Hs i and Ht i
. r s i and r t i belong to R1×d. Intuitively, (r s i
, r t i
)
are regarded as positive pairs in SCP.
## 3.2 Cross-Lingual Momentum Contrast
In this part, we first introduce how to apply momentum contrast on our method in a straight way.
Then we illustrate our proposed CL-MoCo.
MoCo As opposed to merely collecting from mini-batch negatives, we use the momentum contrast approach to increase the number of negatives by maintaining a queue of constant size. In particular, the queued embeddings are gradually replaced. When the current mini-batch's sentence embeddings are queued, the "oldest" ones in the queue are eliminated if the queue is full. Intuitively, when directly applying momentum contrast on cross-lingual training, we can employ a pair of encoders E*fast* and E*slow*. In one training step, E*fast* encodes s sinto Hsand E*slow* maps s tinto Ht. We employs momentum update on the encoder E*slow*, thereby turning E*slow* into a sluggish moving-average duplicate of the encoder E*fast*, to lessen the discrepancy. Formally, we update the E*slow* in the following way:
$$E_{s l o w}\longleftarrow\lambda E_{f a s t}+(1-\lambda)E_{s l o w}\qquad(5)$$
$\quad\therefore$ I know that $\quad$I = ?
where λ determines how quickly the slow encoder updates parameters and is normally set to a small positive value. After pre-training, only Efast (E*fast* is equal to E) is saved for fine-tuning and E*slow* will be discarded.
With the enqueued sentence embeddings, our optimized objective of (r s i
, r t i
) is formulated as Li:
$$\begin{array}{c}\alpha\varphi(\Psi(\mathbf{r}_{i}^{s},\mathbf{r}_{i}^{t})/\tau)\\ \sum_{j=1}^{N}(\omega\varphi(\Psi(\mathbf{r}_{i}^{s},\mathbf{r}_{j}^{t})/\tau)+\sum_{m=1}^{M}\omega\varphi(\Psi(\mathbf{r}_{i}^{s},\mathbf{r}_{m})/\tau)\end{array}\tag{6}$$
where N and M are the size of the mini-batch and the queue, respectively. rm denotes a sentence embedding in the momentum-updated queue, and τ represents the temperature. Moreover, Ψ refers to the cosine similarity function.
CL-MoCo In the above method, target language sentences are only encoded by the slow encoder, which is not directly affected by the gradients from the loss. Moreover, the fast encoder only encodes the source languages in pre-training, making it insensitive to the input sequences in other low-resource languages. These two problems could make the encoder sub-optimized and unable to learn reasonable cross-lingual representations.
Therefore, we propose CL-MoCo to alleviate the above issues. In particular, CL-MoCo employs two sets of fast/slow encoders: Es fast and Es slow for source languages and Et*fast* and Et*slow* for target languages. In addition, two separate queues Qsand Qtare used to store previous encoded sentence embeddings in source and target languages, respectively. The vectors encoded by Es slow and Et*slow* will be pushed into Qsand Qt, separately. In CL-MoCo, we jointly optimize the two sets of encoders to learn effective source-to-target language and target-to-source language representations, and Eq.5 can be extended as:
$\qquad E^{s}_{slow}\longleftarrow\lambda E^{s}_{fast}+(1-\lambda)E^{s}_{slow}\qquad(7)$ $\qquad E^{t}_{slow}\longleftarrow\lambda E^{t}_{fast}+(1-\lambda)E^{t}_{slow}\qquad(8)$ Hence, the optimized objective of positive pair.
(r s i
, r t i
) in source-to-target language can be formulated as Li(r s i
, r t i
):
$$\begin{array}{c}\alpha_{\rm exp}(\Psi({\bf r}_{i}^{s},{\bf r}_{i}^{t})/\tau)\\ \sum_{j=1}^{N}(\circ_{\rm exp}(\Psi({\bf r}_{i}^{s},{\bf r}_{j}^{t})/\tau)+\sum_{q^{x}\in Q^{x}}^{M}\circ_{\rm exp}(\Psi({\bf r}_{i}^{s},{\bf r}_{q^{x}})/\tau)\end{array}\tag{9}$$
Similarly, our CL-MoCo works in both ways, and the objective in target-to-source language Li(r t i
, r s i
) is:
$$\begin{array}{c}\alpha_{\Sigma\mathbb{P}}(\Psi(\mathbf{r}_{i}^{t},\mathbf{r}_{i}^{s})/\tau)\\ \sum_{j=1}^{N}(\alpha_{\Sigma\mathbb{P}}(\Psi(\mathbf{r}_{i}^{t},\mathbf{r}_{j}^{s})/\tau)+\sum_{q^{t}\in\mathcal{Q}^{t}}^{M}\alpha_{\Sigma\mathbb{P}}(\Psi(\mathbf{r}_{i}^{t},\mathbf{r}_{q^{t}})/\tau)\end{array}\tag{10}$$
For all selected structural words in s sand s t, the overall objective of our SCP can be summarized as:
$${\mathcal{L}}_{s c p}=\sum_{i=1}^{k}(({\mathcal{L}}_{i}(\mathbf{r}_{i}^{s},\mathbf{r}_{i}^{t})+({\mathcal{L}}_{i}(\mathbf{r}_{i}^{t},\mathbf{r}_{i}^{s}))/2\quad(11)$$
where k is the number of structural words in the input sentence. We share the parameters of two fast encoders and two slow encoders, because of the following facts: 1) We focus on cross-lingual understanding tasks rather than passage retrieval, which mostly only needs one encoder; 2) Two separated fast and slow encoders could result in more computation and training time.
| <en,es> | <en,ar> | <en,de> | <en,nl> | <en,hi> | Total |
|-----------|-----------|-----------|-----------|-----------|---------|
| 1M | 0.8M | 0.8M | 0.7M | 0.6M | 3.9M |
Table 1: Total parallel sentences used in pre-training.
## 3.3 Pre-Training Strategy
Following the line of (Liu et al., 2019; Chi et al.,
2021a), we also pre-train E with the mask language modeling (MLM) task. Concretely, we train the model in multi-task manner. The total objective in our pre-training can be defined as:
$${\mathcal{L}}={\mathcal{L}}_{s c p}+{\mathcal{L}}_{m l m}$$
$$(12)^{\frac{1}{2}}$$
L = Lscp + Lmlm (12)
## 4 Experiment
In this section, we first introduce how we collect the pre-training data for the proposed SCP. Then we illustrate experiment settings for pre-training and fine-tuning. At last, we present our experimental results on various corss-lingual datasets, including baseline introduction and main results.
## 4.1 Pre-Training Data
As aforementioned, our proposed task SCP requires parallel corpus. We choose MT dataset (Conneau and Lample, 2019) to construct our pre-training data. In contrast to earlier research (Chi et al.,
2021a) that used billion-level corpora across about one hundred languages to generate training corpus, we only use six languages from the MT dataset, including English(en), Spanish(es), Arabic(ar), German(de), Holland(nl), and Vietamese(vi), demonstrating that our approach also makes significant gains in languages where we do not have data. Given the promising performance of off-the-shelf NER techniques (e.g., Spacy) in English, we choose English as our source language, with the remaining five languages serving as target languages in turn. As a result, we get 3.9 million pre-training parallel sentences after using the rules in Section 3.1. The amount of distribution for each language is reported in Table 1.
## 4.2 Evaluation
We evaluate XLM-SCP on three cross-lingual tasks: cross-lingual machine reading comprehension (xMRC), cross-lingual name entity recognition (xNER) and cross-lingual Part-of-Speech
(xPOS). Concretely, we conduct experiments on five datasets: MLQA (Lewis et al., 2020), XQUAD
(Artetxe et al., 2019b), CoNLL (Sang, 2002) and WikiAnn (Pan et al., 2017) and UPDOS (Zeman et al., 2019). We introduce each dataset and test languages in Appendix A.1.
We use a *zero-shot* configuration to fine-tune our model for all datasets, which means that we just use the English training set to optimize the model, and then test the final model on other target languages. Besides, we also test the *cross-lingual* transfer ability of XLM-SCP on these datasets, that is, we also validate the model performances on some target languages that are not included in our pre-training data.
We employ two evaluation measures for the xMRC task: Exact Match (EM) and span-level F1 score, which are commonly used for MRC model accuracy evaluation. The span overlap between the ground-truth answer and the model predictions is measured by span-level F1. If the forecast is precisely the same as the ground truth, the exact match (EM) score is 1, otherwise 0. In the case of the xNER challenge, we employ entity-level F1 scores to evaluate our model, which demands that the boundary and type between the prediction and the ground-truth entity be exactly matched. Similarly, we also use F1 score to validate the model performances in UPDOS.
## 4.3 Training Details
Model Structure To show the generalization of our approach, we initialize our model from two commonly used xPLMs encoders: XLM-R and Info-XLM. The resulting model is named **XLMSCP** in our experiments. We use the base version checkpoints of the above two models from Hugging Face Transformers2. Our XLM-SCP contains 12 transformer layers, and the vector dimension size is set to 768.
Pre-training Details Our training codes are based on PyTorch 1.11 and Transformers 4.10.0.
Along the line of the research (Devlin et al., 2018),
we randomly mask 15% tokens of the input sequence3to implement MLM. In pre-training, we optimize our model using the Adam optimizer and a batch size of 128 for a total of 4 epochs. Moreover, learning rate is set to 1e-6 with 1.5K warmup steps. The max input sequence length is set to 128.
Experimentally, τ in Eq.10 is set to 0.05 and the queue size of Qsand Qtare both 20k. And λ is
| Model | xMRC | xNER | xPOS | Average | | |
|----------|-------------|-------------|---------|-----------|-------|-------|
| MLQA | XQUAD | CoNLL | WikiAnn | UPDOS | | |
| M-BERT | 57.80/42.40 | 69.63/53.72 | 78.20 | 62.21 | 70.31 | 67.63 |
| XLM | 61.70/44.20 | 70.93/53.18 | 79.00 | 61.22 | 70.12 | 68.58 |
| XLM-R | 63.24/45.88 | 73.54/57.55 | 78.48 | 61.35 | 74.21 | 70.16 |
| XLM-SCP* | 65.14/47.20 | 75.35/59.20 | 80.35 | 63.39 | 75.20 | 71.89 |
| Info-XLM | 65.25/47.63 | 75.79/59.50 | 79.52 | 63.01 | 74.71 | 71.66 |
| XLM-SCP♡ | 67.01/48.90 | 76.93/60.75 | 80.94 | 64.77 | 75.60 | 73.05 |
set to 0.99. We pre-train our model using 8×V10032G GPUs for about one day. Fine-tuning details can be seen in Appendix A.2.
## 4.4 Results
Baselines We compare our model with the following xPLM-based baselines: (1) M-BERT (Devlin et al., 2018) pre-trained with MLM and NSP
tasks on Wikipedia data over 104 languages; (2)
XLM (Conneau and Lample, 2019) is jointly optimized with MLM and TLM tasks in 100 languages during pre-training; (3) XLM-R (Conneau et al.,
2019), a multilingual version of Roberta which is pre-trained with MLM in large-scale CC-100 dataset; (4) Info-XLM (Chi et al., 2021a), another popular and effective xPLM which initializes from XLM-R with the proposed pre-training task XLCO
in 94 languages.
xMRC Results Table 2 compares our method to that of typical systems on five datasets. On two xMRC datasets, our models outperform these baselines by an interesting amount. For instance, ours built on XLM-R achieves 65.14%/47.20%
(vs. 63.24%/45.88%) in terms of F1/EM score on MLQA. Similarly, we also obtain 1.81%/1.65%
gains on XQUAD dataset. We can also draw another interesting conclusion: When compared to Info-XLM, which is both built on top of XLM-R
and continues to be pre-trained on 130 million data across 94 languages, our model initialized from XLM-R performs comparably. Nevertheless, XLMSCP only needs 3.9 million parallel corpora from six languages, demonstrating the efficacy of our proposed approaches (3.9M≪130M).
| Model | WikiAnn | XQUAD | MLQA |
|---------|-----------|-------------|-------------|
| XLM-R | 60.41 | 73.24/57.01 | 64.89/44.99 |
| XLM-SCP | 61.91 | 74.56/58.50 | 66.24/46.57 |
Table 3: Model performances under zero-shot crosslingual transfer. In the experiments, We initialize XLMSCP from XLM-R.
xNER Results As shown in Table 2, when compared with XLM-R, our XLM-SCP yields 1.87%/2.04% F1 score improvements on the CoNLL and WikiAnn datasets, separately. Importantly, when compared to Info-XLM on top of XLM-R, ours still outperform on xNER tasks. In other words, our approach has demonstrated its full potential using less than 4% of the corpus. Moreover, XLM-SCP initialized from Info-XLM also outperforms on these two datasets: 80.92% (vs.
79.52%) and 64.69% (vs. 63.01%).
xPOS Results We further test our model on xPOS tasks across 37 languages. Results from Table 2 show our model also obtains consistent gains of about 1% score on UPDOS dataset. Using Info-XLM as the basic encoder, ours can achieve the best results 75.60%. Overall, our experimental results on three tasks demonstrate the efficacy and generalizability of our proposed approach.
Zero-shot Cross-lingual Transfer Results We further test out the method under the setting of zero-shot cross-lingual transfer in other unseen targeted languages in pre-training such as Arabic
(ar), Afrikaans (af). Concretely, we conduct experiments to validate the resulting model's performances on the selected test sets in other languages from WikiAnn, XQUAD and MLQA that are not
| Algorithms | WikiAnn | XQUAD |
|--------------|-----------|-------------|
| XLM-SCP | 63.39 | 75.35/59.20 |
| w/o SCP | 62.11 | 74.02/58.01 |
| w/o CL-MoCo | 62.65 | 74.50/58.46 |
| w/o MLM | 62.58 | 74.44/58.11 |
included during pre-training. From Table 3, we can observe that XLM-SCP also achieves about 1.5% improvements on three datasets under the zero-shot cross-lingual transfer setting. In general, the results in Table 2 and Table 3 prove that our approach not only improves the performance in the languages that included in our SCP pre-training but also has better transferability capabilities in other low-resource languages.
## 5 Analysis
Aside from the high performances achieved by our proposed approaches, we are still concerned about the following questions: Q1: What are the effects of each key component in our XLM-SCP? Q2: Is CL-MoCo really superior to MoCo in cross-lingual understanding tasks? Q3: Does the size of the queue in CL-MoCo affect the performance of our model? Q4: What are the model performances with different τ in Eq.10? (Seen in Appendix C,
Figure 5) Q5: Within the chosen objects, verbs, objects, and entities in structural words, which part has the biggest effect on our XLM-SCP's performance? (Seen in Appendix C, Table 10) In this section, we conduct extensive experiments to answer the above questions.
Answer to Q1: Experiments are carried out to confirm the independent contributions of each component in our proposed pre-training scheme. Table 4 shows the model performances by removing each key component on WikiAnn and XQUAD. From the table, we can see that SCP plays the most important role in our architecture. Removing SCP
decreases the model performances from 63.39% to 61.35% on WikiAnn. Meanwhile, we can see that our pre-training system as a whole is effective since each part, including MLM and CL-MoCo, helps the model perform better. Notice that removing CL-MoCo means we only construct negative pairs from in-batch negatives.
Answer to Q2: We further conduct analysis to verify the effectiveness of **CL-MoCo vs. MoCo**
![7_image_0.png](7_image_0.png)
![7_image_1.png](7_image_1.png)
F1 Score
![7_image_2.png](7_image_2.png)
on cross-lingual understanding tasks. We conduct ablation experiments on three tasks across four datasets and show the results in Figure 3. We can find that our proposed CL-MoCo can achieve better results on all these datasets when compared with the original MoCo. The results further prove CL-MoCo has a stronger ability to learn effective cross-lingual representations.
Answer to Q3: The main assumption of CLMoCo is that the size of negative samples is important in contrastive learning. Here we empirically study this assumption in cross-lingual understanding tasks via varying the queue size of keeping negative pairs. As shown in Figure 4, we validate XLM-SCP with M ∈ {5k, 10k, 20k, 30k, 40k} on WikiAnn and MLQA datasets. We can draw the conclusion that the model performs slightly better as the queue size increases initially, especially for xMRC tasks. Interestingly, the model achieves best results on WikiAnn when M is equal to 20k, and its performances slightly decrease when M passes 20k. One possible explanation is that larger size of the queue may introduce some "false negative samples", which could have a more obvious side effect on xNER tasks. In light of the fact that the queue size has a negligible effect on training speed and memory use, we have chosen a queue size of 20k for all downstream datasets.
## 6 Conclusion
In this paper, we observe that misalignment of crucial structural words occurs in the parallel sentences of current xPLMs. We propose a new pretraining task called Structural Contrastive Pretraining (SCP) to alleviate this problem, enabling the model to comprehend the cross-lingual representations more accurately. We further incorporate momentum contrast into cross-lingual pre-training, named CL-MoCo. In particular, CL-MoCo employs two sets of fast/slow encoders to jointly learn the source-to-target language and target-to-source language cross-lingual representations. Because of this, the resulting model is better for cross-lingual transfer. Extensive experiments and analysis across various datasets show the effectiveness and generalizability of our approach. As an extension of our future work, we will apply our method to other natural language understanding tasks and find a proper way to reduce data preprocessing costs.
## Limitations
The main target of this paper is to utilize structural knowledge for cross-lingual comprehension. We present a new pre-training task named SCP in the hope of bridging the misalignment of structural words in the parallel corpus. More generally, we expect the proposed method can facilitate the research of cross-lingual understanding. Admittedly, the main limitation of this work is that we rely on off-the-shelf tools to extract and align words in different languages, which would result in some mistakes at some situations. For example, GIZA++
only achieves 80%-85% accuracy in aligning the corresponding words in another language. Currently, no tech can achieve this goal in 100% accuracy. As a result, some bias data in pre-training calls for further research and consideration when utilizing this work to build xPLMs.
## Acknowledgement
This research was supported by NSFC Grant No.
62206067, and Guangzhou-HKUST(GZ) Joint Funding Scheme 2023A03J0673.
## References
Mikel Artetxe, Sebastian Ruder, and Dani Yogatama.
2019a. On the cross-lingual transferability of monolingual representations. *CoRR*, abs/1910.11856.
Mikel Artetxe, Sebastian Ruder, and Dani Yogatama.
2019b. On the cross-lingual transferability of monolingual representations. *CoRR*, abs/1910.11856.
Nuo Chen, Linjun Shou, Min Gong, Jian Pei, and Daxin Jiang. 2021a. From good to best: Two-stage training for cross-lingual machine reading comprehension.
Nuo Chen, Linjun Shou, Ming Gong, Jian Pei, Bowen Cao, Jianhui Chang, Daxin Jiang, and Jia Li. 2023a. Alleviating over-smoothing for unsupervised sentence representation. *arXiv preprint* arXiv:2305.06154.
Nuo Chen, Linjun Shou, Ming Gong, Jian Pei, and Daxin Jiang. 2022. Bridging the gap between language models and cross-lingual sequence labeling.
In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1909–1923, Seattle, United States. Association for Computational Linguistics.
Nuo Chen, Linjun Shou, Ming Gong, Jian Pei, Chenyu You, Jianhui Chang, Daxin Jiang, and Jia Li. 2023b.
Bridge the gap between language models and tabular understanding. *arXiv preprint arXiv:2302.09302*.
Nuo Chen, Chenyu You, and Yuexian Zou. 2021b. Selfsupervised dialogue learning for spoken conversational question answering. *CoRR*, abs/2106.02182.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. 2020. A simple framework for contrastive learning of visual representations. In ICML, volume 119 of *Proceedings of Machine Learning Research*, pages 1597–1607. PMLR.
Zewen Chi, Li Dong, Furu Wei, Nan Yang, Saksham Singhal, Wenhui Wang, Xia Song, Xian-Ling Mao, Heyan Huang, and Ming Zhou. 2021a. Infoxlm: An information-theoretic framework for cross-lingual language model pre-training. In *NAACL-HLT*, pages 3576–3588. Association for Computational Linguistics.
Zewen Chi, Li Dong, Bo Zheng, Shaohan Huang, XianLing Mao, Heyan Huang, and Furu Wei. 2021b. Improving pretrained cross-lingual language models via self-labeled word alignment. In *Proceedings of the* 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint
Conference on Natural Language Processing (Volume 1: Long Papers), pages 3418–3430, Online. Association for Computational Linguistics.
Zewen Chi, Shaohan Huang, Li Dong, Shuming Ma, Saksham Singhal, Payal Bajaj, Xia Song, and Furu Wei. 2021c. XLM-E: cross-lingual language model pre-training via ELECTRA. *CoRR*, abs/2106.16138.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. *CoRR*, abs/1911.02116.
Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. In *NeurIPS*,
pages 7057–7067.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. *CoRR*, abs/1810.04805.
Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021.
Simcse: Simple contrastive learning of sentence embeddings. *CoRR*, abs/2104.08821.
Raia Hadsell, Sumit Chopra, and Yann LeCun. 2006.
Dimensionality reduction by learning an invariant mapping. In *CVPR (2)*, pages 1735–1742. IEEE
Computer Society.
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. 2020. Momentum contrast for unsupervised visual representation learning. In *Proceedings of the IEEE/CVF conference on computer vision* and pattern recognition, pages 9729–9738.
Wei He, Kai Liu, Jing Liu, Yajuan Lyu, Shiqi Zhao, Xinyan Xiao, Yuan Liu, Yizhong Wang, Hua Wu, Qiaoqiao She, Xuan Liu, Tian Wu, and Haifeng Wang. 2018. Dureader: a chinese machine reading comprehension dataset from real-world applications.
In *QA@ACL*, pages 37–46. Association for Computational Linguistics.
Karl Moritz Hermann, Tomás Kociský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In *NIPS*, pages 1693–1701.
Haoyang Huang, Yaobo Liang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, and Ming Zhou.
2019. Unicoder: A universal language encoder by pre-training with multiple cross-lingual tasks. In EMNLP/IJCNLP (1), pages 2485–2494. Association for Computational Linguistics.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut.
2020. ALBERT: A lite BERT for self-supervised learning of language representations. In *ICLR*. OpenReview.net.
Patrick S. H. Lewis, Barlas Oguz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2020. MLQA: evaluating cross-lingual extractive question answering.
In ACL, pages 7315–7330. Association for Computational Linguistics.
Jia Li, Yongfeng Huang, Heng Chang, and Yu Rong.
2022a. Semi-supervised hierarchical graph classification. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Jiajin Li, Jianheng Tang, Lemin Kong, Huikang Liu, Jia Li, Anthony Man-Cho So, and Jose Blanchet.
2022b. Fast and provably convergent algorithms for gromov-wasserstein in graph learning. *arXiv preprint* arXiv:2205.08115.
Shining Liang, Linjun Shou, Jian Pei, Ming Gong, Wanli Zuo, and Daxin Jiang. 2021. Calibrenet: Calibration networks for multilingual sequence labeling.
In *WSDM*, pages 842–850. ACM.
Shining Liang, Linjun Shou, Jian Pei, Ming Gong, Wanli Zuo, Xianglin Zuo, and Daxin Jiang. 2022.
Multi-level contrastive learning for cross-lingual spoken language understanding. *CoRR*, abs/2205.03656.
Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu, Fenfei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, Xiaodong Fan, Ruofei Zhang, Rahul Agrawal, Edward Cui, Sining Wei, Taroon Bharti, Ying Qiao, Jiun-Hung Chen, Winnie Wu, Shuguang Liu, Fan Yang, Daniel Campos, Rangan Majumder, and Ming Zhou. 2020. XGLUE: A new benchmark datasetfor cross-lingual pre-training, understanding and generation. In *EMNLP (1)*, pages 6008–6018. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692.
Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, Hua Wu, and Haifeng Wang. 2021.
ERNIE-M: Enhanced multilingual representation by aligning cross-lingual semantics with monolingual corpora. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 27–38, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Lin Pan, Chung-Wei Hang, Haode Qi, Abhishek Shah, Saloni Potdar, and Mo Yu. 2021. Multilingual BERT
post-pretraining alignment. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 210–219, Online.
Association for Computational Linguistics.
Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017. Cross-lingual name tagging and linking for 282 languages. In ACL
(1), pages 1946–1958. Association for Computational Linguistics.
Shichao Pei, Lu Yu, Guoxian Yu, and Xiangliang Zhang.
2020. REA: robust cross-lingual entity alignment between knowledge graphs. In KDD, pages 2175–
2184. ACM.
Telmo Pires, Eva Schlinger, and Dan Garrette. 2019.
How multilingual is multilingual bert? *arXiv* preprint arXiv:1906.01502.
Erik F. Tjong Kim Sang. 2002. Introduction to the conll2002 shared task: Language-independent named entity recognition. In *CoNLL*. ACL.
Tal Schuster, Ori Ram, Regina Barzilay, and Amir Globerson. 2019. Cross-lingual alignment of contextual word embeddings, with applications to zeroshot dependency parsing. In *Proceedings of the 2019* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1599–1613, Minneapolis, Minnesota.
Association for Computational Linguistics.
Yu Sun, Shuohuan Wang, Yu-Kun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. 2020. ERNIE
2.0: A continual pre-training framework for language understanding. In *AAAI*, pages 8968–8975. AAAI
Press.
Jianheng Tang, Weiqi Zhang, Jiajin Li, Kangfei Zhao, Fugee Tsung, and Jia Li. 2023. Robust attributed graph alignment via joint structure learning and optimal transport. *ICDE*.
Xing Wu, Chaochen Gao, Liangjun Zang, Jizhong Han, Zhongyuan Wang, and Songlin Hu. 2022. ESimCSE: Enhanced sample building method for contrastive learning of unsupervised sentence embedding. In *Proceedings of the 29th International Conference on Computational Linguistics*, pages 3898–
3907, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Zhuofeng Wu, Sinong Wang, Jiatao Gu, Madian Khabsa, Fei Sun, and Hao Ma. 2020. CLEAR: contrastive learning for sentence representation. *CoRR*,
abs/2012.15466.
Nan Yang, Furu Wei, Binxing Jiao, Daxing Jiang, and Linjun Yang. 2021. xMoCo: Cross momentum contrastive learning for open-domain question answering.
In *Proceedings of the 59th Annual Meeting of the* Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6120–6129, Online. Association for Computational Linguistics.
Chenyu You, Nuo Chen, and Yuexian Zou. Mrd-net:
Multi-modal residual knowledge distillation for spoken question answering.
Chenyu You, Nuo Chen, and Yuexian Zou. 2021. Selfsupervised contrastive cross-modality representation learning for spoken question answering. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 28–39.
Daniel Zeman, Joakim Nivre, and Mitchell Abrams.
2019. Universal dependencies 2.5. lindat/clariah-cz digital library at the institute of formal and appliedlinguistics (ufal). Faculty of Mathematics and Physics.
Charles University.
## A Training Details A.1 Fine-Tuning Dataset
Cross-Lingual Machine Reading Comprehension MLQA and XQUAD are two popular xMRC
benchmarks, which share the same training set from SQUA and consists of different test sets in low-resource languages. In this work, we evaluate our methods on six languages: including *English,*
Arabic, German, Spanish, Hindi, Vietnamese.
Cross-lingual Name Entity Recognition CoNLL and WikiAnn are commonly-used xNER
benchmarks. We evaluate CoNLL on four language test sets: *Spanish, Dutch, English, German*. As for the WikiAnn challenge, we evaluate the model with 48 languages.
Cross-lingual Part-of-Speech Tagging UPDOS
is a typical dataset of POS in multilingual. Of note, UPDOS contains 37 languages, which all of them are used to test our model performances.
## A.2 Fine-Tuning Details
We use the official codes from Hugging Face Examples4to fine-tune and test our models. The detailed hyper-parameter setups are presented in Table 5.
## B Main Results
In this section, we present the model's performances on each language across five datasets.
xMRC Results Table 6 and Table 7 show the model performances on MLQA and XQUAD
datasets.
xNER Results Table 8 shows the model performances on WikiAnn dataset.
xMRC Results Table 9 represents the model performances on UDPOS dataset.
## C Analysis
Answer to Q4: Intuitively, it is essential to study the sensitivity analysis of the temperature τ in our SCP. Thereafter, we further conduct experiments to verify the impact of different τ on our model performances. We test out our XLM-SCP with τ ∈ {0.01, 0.05, 0.1, 0.5} on XQUAD, MLQA and WikiAnn datasets. From the Figure 5, we can observe that changing τ could cause the model to improve and decrease. Concretely, ours achieve best results when τ = 0.05.
4https://github.com/huggingface/transformers/examples
![11_image_0.png](11_image_0.png)
Answer to Q5: We further conduct analysis to find that which part of the chosen nouns, verbs, objects, and entities in structural words has the most impact on how well our model works? Hence, we remove each S-V-O and entity word in turn and test out the model's performances on xNER tasks and xPOS tasks. As the Table 10 shows, each component in the selected structural word has different impact on our XLM-SCP. Interestingly, the model's performance drops significantly on the WikiAnn dataset without entity while very somewhat on the UDPOS dataset without entity. The possible reason is that xNER tasks require the model has a stronger ability of entity-level understanding while xPOS tasks need more fine-grained understanding on token-level.
| Parameter | MLQA | XQUAD | WikiAnn | CoNLL | UPDOS |
|-----------------------------------------------------|--------|---------|-----------|---------|---------|
| Batch size | 32 | 32 | 32 | 16 | 16 |
| Learning Rate | 3e−5 | 3e−5 | 2e−5 | 2e−5 | 2e−5 |
| Epoch | 5 | 5 | 5 | 5 | 5 |
| Warm Up | 10% | 10% | 10% | 10% | 10% |
| Max Length | 384 | 384 | 128 | 128 | 128 |
| Table 5: Hyper-parameters setup during fine-tuning. | | | | | |
| Models | en | ar | de | vi | hi | es | Avg. |
|----------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|
| Ours(XLM-R) | 79.74/65.93 | 53.80/35.12 | 61.59/45.65 | 67.98/47.00 | 60.97/42.11 | 66.35/45.01 | 65.14/47.20 |
| Ours(Info-XLM) | 80.84/67.95 | 53.84/35.35 | 60.90/45.14 | 66.57/46.70 | 60.86/44.48 | 66.70/45.88 | 67.01/48.90 |
Table 6: The performance of our models on MLQA datasets.
| Models | en | es | de | ar | hi | vi | Avg. |
|----------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|
| Ours(XLM-R) | 78.82/63.91 | 74.63/60.41 | 74.34/59.92 | 67.57/49.23 | 68.11/50.67 | 72.72/50.82 | 75.35/59.20 |
| Ours(Info-XLM) | 79.65/67.30 | 76.12/60.05 | 73.21/60.89 | 70.31/52.98 | 69.10/51.33 | 72.42/50.34 | 76.93/60.75 |
Table 7: The performance of our models on XQUAD datasets.
| Model | ar | he | vi | id | jv | ms | tl | eu | ml | ta | te | af | nl | en | de | el | bn | hi | mr | ur | fa | fr | it | pt |
|---------|------|-----------|------|------|------|----------|-----------|-----------|------|------|------|-----------|-----------|-----------|------|------|------|-----------|------|------|------|------|------|------|
| Ours | 54.8 | 52.7 67.6 | 47.6 | 60.4 | 68.0 | 69.0 | 61.3 | 61.6 | 54.3 | 47.3 | 76.3 | 80.4 82.4 | 74.2 74.7 | 69.5 68.0 | 62.9 | 62.0 | 53.7 | 77.4 | 77.8 | 79.2 | | | | |
| es | bg | ru | ja | ka | ko | th | sw | yo | my | zh | kk | tr | et | fi | hu | qu | pl | uk | az | It | pa | gu | ro | Avg. |
| 75.1 | 77.7 | 62.4 | 19.4 | 66.6 | 48.7 | 2.2 66.2 | 48.7 56.5 | 69.1 40.6 | 75.0 | 71.2 | 75.6 | 77.8 | 59.2 | 78.2 | 77.6 | 62.9 | 72.4 | 52.3 57.8 | 76.3 | 62.8 | | | | |
Table 8: Results on WikiAnn named entity recognition.
Model af ar bg de el en es et eu fa fi fr he hi hu id it ja kk XLM-SCP 88.0 68.5 89.6 88.8 86.5 95.8 88.8 86.3 67.7 69.6 85.8 87.5 67.9 68.7 82.7 72.6 89.5 28.9 76.0 Model ko mr nl pt ru ta te th tl tr ur vi yo zh It pl uk ro Avg.
XLM-SCP 52.3 81.6 89.3 88.2 89.5 62.3 83.2 48.0 89.2 74.3 60.3 58.2 25.4 39.6 84.4 85.4 85.4 84.8 75.20 Table 9: Results on part-of-speech tagging.
| Algorithms | WikiAnn | UPDOS |
|--------------|-----------|---------|
| XLM-SCP | 63.39 | 75.20 |
| w/o subject | 63.12 | 74.72 |
| w/o verb | 63.01 | 74.84 |
| w/o object | 63.08 | 74.82 |
| w/o entity | 62.88 | 75.01 |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
section limitations
✓ A2. Did you discuss any potential risks of your work?
section limitations
✓ A3. Do the abstract and introduction summarize the paper's main claims?
abstract and section i1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section Experiments
✓ B1. Did you cite the creators of artifacts you used?
Section Experiments
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section Experiments
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section Experiments
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Section Experiments
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section Experiments
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section Experiments
## C ✓ **Did You Run Computational Experiments?** Section Experiments
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section Experiments The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section Experiments
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section Experiments
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section Experiments D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
jia-etal-2023-reducing | Reducing Sensitivity on Speaker Names for Text Generation from Dialogues | https://aclanthology.org/2023.findings-acl.129 | Changing speaker names consistently throughout a dialogue should not affect its meaning and corresponding outputs for text generation from dialogues. However, pre-trained language models, serving as the backbone for dialogue-processing tasks, have shown to be sensitive to nuances. This may result in unfairness in real-world applications. No comprehensive analysis of this problem has been done in the past. In this work, we propose to quantitatively measure a model{'}s sensitivity on speaker names, and comprehensively evaluate a number of known methods for reducing speaker name sensitivity, including a novel approach of our own. Extensive experiments on multiple datasets provide a benchmark for this problem and show the favorable performance of our approach in sensitivity reduction and quality of generation. | # Reducing Sensitivity On Speaker Names For Text Generation From Dialogues
Qi Jia1, Haifeng Tang2**, Kenny Q. Zhu**3∗
1,3Shanghai Jiao Tong University, Shanghai, China 2China Merchants Bank Credit Card Center, Shanghai, China [email protected], [email protected], [email protected]
## Abstract
Changing speaker names consistently throughout a dialogue should not affect its meaning and corresponding outputs for text generation from dialogues. However, pre-trained language models, serving as the backbone for dialogue-processing tasks, have shown to be sensitive to nuances. This may result in unfairness in real-world applications. No comprehensive analysis of this problem has been done in the past. In this work, we propose to quantitatively measure a model's sensitivity on speaker names, and comprehensively evaluate a number of known methods for reducing speaker name sensitivity, including a novel approach of our own. Extensive experiments on multiple datasets provide a benchmark for this problem and show the favorable performance of our approach in sensitivity reduction and quality of generation.
## 1 Introduction
The safety and fairness issue of generations from dialogue models is a crucial concern in real applications. Previous work focuses on response generation from open-ended dialogue systems (Xu et al.,
2020; Henderson et al., 2018), such as offensive contents (Baheti et al., 2021), gender bias (Liu et al., 2020; Dinan et al., 2020) and other discriminated behavior (Sheng et al., 2021; Smith and Williams, 2021). For other text generation tasks where the whole dialogue is provided and the output shouldn't go beyond the dialogue, such as dialogue summarization (Gliwa et al., 2019) and dialogue reading comprehension (Li et al., 2020),
the fairness issue is still unexplored.
In these tasks, the input dialogues are selfcontained, and the names of the speakers do not carry any connotation from outside of the dialogue.
Therefore, changing the speaker names consistently in a dialogue should not affect the meanings of the
∗ The corresponding author.
![0_image_0.png](0_image_0.png)
dialogue and the desired outputs. This contrasts with response generation, where the dialogue is in progress and the output is expected to be different in styles or contents for various speakers.
Taking dialogue summarization (Gliwa et al., 2019; Chen et al., 2021) as an example for text generation from dialogues, it focuses on generating concise
"who-did-what" summaries in the third person. In Fig. 1, the two dialogues are identical except for the speaker names. The two summaries are expected to be the same modulo the speaker names.
Unfortunately, models nowadays, following the pretrain-finetune paradigm, are sensitive to trivial changes, which has been verified in other tasks.
In relation extraction, spurious correlations between entity mentions and relations lead to entity bias (Zhang et al., 2018, 2017; Wang et al.,
2022b). Other similar work includes the analysis of robustness by entity renaming for machine reading comprehension models on narrative texts (Yan et al., 2022) and name biases in machine translation with inflected languages (Wang et al., 2022a),
like German. Besides, Shwartz et al. (2020) claims that pre-trained language models do not treat given names as interchangeable or anonymous, showing unfairness in reading comprehension.
Obviously, dialogue understanding models are sensitive to speaker names according to Fig. 1 as well. The model tends to generate different information given different speaker names, such as
"don't want to go" and "doesn't like them". Incorrect content, "... Betsy don't want to go", is generated with the first group of speakers, while not with the other group. According to our pilot experiment with the vanilla BART fine-tuned on SAMSum, around 74.00% of generations are changed by switching speaker names and 69.82%
among them are due to distinct contents. Such uneven performances create unfairness among different speakers, especially in the aspect of information allocation. The model may also catch latent properties in names (Romanov et al., 2019) and lead to discrimination, raising the importance of research on the sensitivity on speaker names.
Previous work has also mentioned this problem. Different data pre-processing approaches are adopted during the construction of datasets to avoid using speaker names, such as "A" or "B" in Li et al.
(2017). Khalifa et al. (2021) replace speaker names with more common and frequent names that the model may have seen during pre-training. Data augmentation by changing speaker names is adopted by Liu and Chen (2021). However, all of them only attempted to attack this problem subjectively, without quantitive analysis and fair comparisons.
In this work, we systematically analyze speaker name sensitivity in text generation from dialogues.
We define the speaker name sensitivity and divide the approaches into offline and online ones.
Then, we propose two novel insensitivity losses, helping to reduce attention and hidden state distances of the same dialogue with different speaker names for transformer-based models during finetuning. These losses can be used in both kinds of approaches. Results on several tasks show that our losses reduce the sensitivity and get better generations. In summary, our contributions are:
- We are the first to investigate the speaker name sensitivity in text generation from dialogues (Sec. 2.1) with all of the codes and results open-sourced at https://github.com/
JiaQiSJTU/SpeakerNameSensitivity.
- We introduce two novel insensitivity losses as auxiliary training objectives for reducing sensitivity during fine-tuning (Sec. 3).
- Experiments on different tasks provide a benchmark with comprehensive analysis on speaker name sensitivity, and show state-ofthe-art performances of our approach (Sec. 5).
## 2 Background 2.1 Speaker Name Sensitivity
Speaker name sensitivity is the differences in the generations by a model, given the identical dialogues except for different speaker names. We define it as follows.
Let d denote the input dialogue. c denotes other input content, which can be empty for tasks like dialogue summarization, or a piece of text such as a question for reading comprehension. p refers to the set of speakers names in d. f is a one-to-one mapping which maps p into a set of names p0from a name pool P consisting of a set of candidate names to be substituted into the samples. The names p0 are sampled under the uniform distribution without the loss of generality. *The speaker name sensitivity* SS of a generation model M(·) *on this sample* is:
$$\begin{array}{c}{{S S({\mathcal{M}}|d,c)=\delta(\{{\mathcal{M}}(R e p(d,c|f))}}\\ {{|\forall f:p\to p^{\prime},p^{\prime}\subseteq{\mathcal{P}}\})}}\end{array}\tag{1}$$
where Rep(·) replaces names in the sample given f, i.e., from p to p0. δ(·) quantifies the differences among generations.
Then, the sensitivity SS *of a model* M(·) is the expectation E of over all samples from the realworld distribution D:
$$S S({\mathcal{M}})=\mathbb{E}_{(d,c)\sim D}[S S({\mathcal{M}}|d,c)]\qquad(2)$$
In practice, a dialogue dataset is regarded as a sampling from D for evaluations. Each sample in the dataset is provided with a reference output o for supervised training. We use Dtr, Dva and Dte to refer to training, validation and test sets. See detailed implementations and metrics in Sec. 4.1.
## 2.2 Existing Approaches
We investigate existing approaches that target on reducing the sensitivity and classify them into offline ones and online ones, where the former chases to reduce the sensitivity by exploring better model parameters and the latter pursues insensitivity by unification or simplification of input data. Thus, data processing steps are required before inputting into the model and after the inference during the test time and speaker names in Dtr, Dva and Dte are all changed for online approaches. The model needs fine-tuning for both approaches.
Offline approaches include:
Embedding Layer(Emb): Similar to (Gu et al.,
2020) and (He et al., 2021), an additional embedding layer can be adopted for representing whether the model should be sensitive to corresponding tokens. 2 embeddings are learned during fine-tuning.
Augmentation (Aug): Liu and Chen (2021)
proposed to do data augmentation by exchanging speaker names in training samples with names from Dtr. They aim to reduce unexpected inductive bias caused by speaker names, which is similar to our goal. The model is fine-tuned with augmented training data while Dva and Dte remain unchanged.
Online approaches are:
ID: Some works (Cui et al., 2020; Li et al., 2017)
replace speaker names with predefined IDs to avoid name bias. We use "Speaker[NUM]" similarly to Kim et al. (2019) and Chen et al. (2021), which is close to words seen during pre-training and fits different numbers of speakers. "[NUM]" is the index of a speaker's first occurrence.
Frequent (Fre): This refers to the approach proposed in Khalifa et al. (2021). They use 100 frequent male and 100 frequent female names online1 as the pool P for sampling replacements. This approach can be combined with Aug into **FreAug**.
## 3 Proposed Approach
We focus on the widely-accepted encoder-decoder architecture for pre-trained generation models and design two auxiliary insensitivity losses to take full advantage of augmented data on top of Aug. Given the dialogue sample with different speaker names, a model outputs distinct generations due to its different internal behaviors. Therefore, penalizing unexpected internal differences should help the model behave consistently and reduce the sensitivity.
With this intuition, we propose the crossattention loss and the decoder-hidden-state loss.
An illustration for them is in Appendix A. The former corresponds to cross-attention distributions that help the decoder make a soft information selection among encoder hidden states at each step and should be similar with different speaker names.
1https://www.ssa.gov/oact/babynames/decades/century.html The latter is based on the final decoder hidden states which are expected to be the same under the default teacher-forcing training strategy except for the speaker name tokens. We didn't consider the encoder attentions since according to our pilot analysis of the vanilla BART, the cross attentions distance of the different predictions is around 1.5 times of the same ones. However, there are no differences in the encoder attentions. Other intermediate hidden states are excluded since they are all affected by different input embeddings of speaker names, except that the final decoder hidden states are sure to be the same.
## 3.1 Cross-Attention Insensitivity Loss
We denote a model's input and output length, i.e.,
the number of tokens, as din and *dout*. During training, the cross attentions calculated for each output token are collected as CA ∈ RN×dout×din.
N is the number of heads for the multi-head attention mechanism, determined by the configuration of pre-trained models. We apply average pooling over the dimension of *dout*, to get the overall attention over the input tokens CA ∈ RN×din.
Given an original sample {di, ci, oi}, we construct K − 1 augmented samples by replacing speaker names. The averaged attentions for all samples are {CAk}
K
k=1. Since it is a default that each sample should go through the tokenizer before inputting to the model, {dink}
K
k=1 are not guaranteed to be identical in two cases. First, names may be tokenized into different token counts. For example, "John" and "Robinson" are tokenized into
{"John"} and {"Rob", "inson"} by BART tokenizer.
Replacing "John" with "Robinson" in di will increase the sequence length. Second, long inputs may be truncated at different tokens. So, we consider two corresponding functions for unification:
- Sum(·) sums up the attention values of tokens belonging to an occurrence of a speaker name.
- Pad(·) pads attentions into the same length dinu by concatenating zeros, which means that this part of contents is missing.
The unified {CAk}
K
k=1 is represented as
{CAgk}
K
k=1, where CAgk ∈ RN×dinu .
Finally, the loss is calculated as:
Lca = 1 K(K − 1) X K X K l=1,l6=k loss(CAgk, CAgl) k=1 (3)
where *loss*(·) measures the distances between a pair of attentions.
## 3.2 Decoder-Hidden-State Insensitivity Loss
Similarly, hidden states of the decoder's final output for all samples can be denoted as {DHk}
K
k=1, where DHk ∈ RH×*dout*k and H represents the hidden size. The lengths of them also vary due to the above two cases. We adopt two different functions:
- Del(·) ignores the hidden states whose predicted tokens belong to a speaker name.
- Trunc(·) truncates the redundant hidden states at the end without the paired ones.
Thus, the unified $\{DH_{k}\}_{k=1}^{K}$ is represented as: $\{\widetilde{DH}_{k}\}_{k=1}^{K}$, where $\widetilde{DH}_{k}\in R^{H\times dout_{u}}$. The loss is defined as:
$${\mathcal L}_{d h}={\frac{1}{K(K-1)}}\sum_{k=1}^{K}\sum_{l=1,l\neq k}^{K}l o s s(\widetilde{D H}_{k},\widetilde{D H}_{l})\quad.\tag{4}$$
We adopted the mean square error for both losses.
## 3.3 Learning Objective
Lca and Ldh are added to the vanilla generation loss Lgen with hyper-parameters α and β:
$${\mathcal{L}}_{t o t a l}={\mathcal{L}}_{g e n}+\alpha{\mathcal{L}}_{c a}+\beta{\mathcal{L}}_{d h}\qquad(5)$$
The insensitivity losses are only auxiliary finetuning objectives, leaving the inference time unchanged. They can be added on top of both Aug and FreAug, denoted as Ins and **FreIns**.
## 4 Experimental Setup
We define the evaluation metrics for sensitivity, introduce multiple text generation tasks with dialogue data and present implementation details.
## 4.1 Evaluation Metrics For Sensitivity
We uniformly sample names from P, which is specified later, to realize f without the loss of generality and re-sample the name if it is not in p but in the conversation. We avoid changing names mentioned during the conversation in case they are grounded entities. Since it's impossible to enumerate all possible f, we choose to substitute names of samples in Dte for T = 5 times. It should be noted that varying names in test data is different from the augmentation approach. The additional test data is fixed once constructed for comparing approaches by quantitatively measuring the sensitivity.
We introduce three kinds of δ(·) with taskspecific evaluation metric Score(·), such as Rouge and BertScore for dialogue summarization, and measure the speaker name sensitivity of a model similar to Prabhakaran et al. (2019)' work. **Pairwise Sensitivity(S-*)** is defined as:
$$E_{i=1}^{N^{t c}}E_{t_{1}=1}^{T}E_{t_{2}=1,t_{1}\neq t_{2}}^{T}[1-\mathrm{Score}(\hat{o}_{i}^{t_{1}},\hat{o}_{i}^{t_{2}})]\quad(6)$$
oˆ
t i is the generation where replaced names are changed back for evaluation. Nte is the number of samples in Dte. E(·) is the mean operator.
Dialogue models are also expected to get the same scores with task-specific evaluation metrics compared with the reference o. So, we can also add o as the input of δ(·) in Eq. 1 and define the following two metrics: **Score Range (R-*)** as
$$\begin{array}{c}{{E_{i=1}^{N^{t e}}[\operatorname*{max}(\{\mathrm{Score}(o_{i},\hat{o}_{i}^{t})|_{t=1}^{T}\})}}\\ {{-\operatorname*{min}(\{\mathrm{Score}(o_{i},\hat{o}_{i}^{t})|_{t=1}^{T}\})]}}\end{array}\quad(7)$$
## And **Score Deviation (D-*)** As
E
Nte i=1 [StdDev({Score(oi, oˆ
$$(o_{i},{\hat{o}}_{i}^{t})[_{t=1}^{T}\})]\qquad(8)$$
The sensitivity metrics here are the lower the better and are denoted by ↓ in the following sections.
## 4.2 Tasks And Datasets
We implement our experiments on the tasks below.
The statistics are in Table 1 and we calculate the macro-average scores of samples for each metric.
| Task | Dialogue | | |
|---------------|---------------------|-----------|-----------|
| Summarization | Question Generation | Reading | |
| Comprehension | | | |
| Dataset | SAMSum | Molweni | Molweni |
| #Train | 14,732 | 20,873 | 20,873 |
| #Val | 818 | 2,346 | 2,346 |
| #Test | 819 | 2,560 | 2,560 |
| Output Length | 23.44±12.72 | 7.05±2.02 | 4.01±2.93 |
Table 1: A summary of tasks. \#Train, \#Val and \#Test refer to the number of samples in the datasets. Output length are statistics(avg±std) for the word counts.
Dialogue Summarization outputs fluent and concise summaries covering the salient information in dialogues. We experiment with the SAMSum dataset (Gliwa et al., 2019) consisting of around 16k open-domain dialogues among two or more interlocutors. Rouge-2 F1 (Lin, 2004) and BertScore F1 (Zhang et al., 2019)
2are task-specific evaluation 2We adopted microsoft/deberta-xlarge-mnli recommended by https://github.com/Tiiiger/bert_score for BertScore.
metrics. We consider genders to be consistent when switching names following Khalifa et al. (2021).
Question Generation is to generate a question given an input dialogue and its corresponding answer span. We use Molweni dataset (Li et al., 2020)
made up of around 10k task-oriented dialogues sampled from the Ubuntu Chat Corpus. Similar to the question generation work based on SQuAD1.1, we extract (dialogue, answer, question) tuples from the original Molweni dataset and ignore unanswerable questions. Bleu (Papineni et al., 2002) and Rouge-L F1 are used for evaluations.
Reading Comprehension generates an answer by inputting a dialogue with a question. We use the Molweni dataset (Li et al., 2020) and ignore unanswerable questions as well. Bleu and Rouge-L
F1 are also used for evaluations.
## 4.3 Implementation Details
We use BART-large as our basic pre-trained model. We truncate inputs to the first 1024 tokens and the learning rate is 3e − 5 with weight decay equaling 0.01. The model is fine-tuned with batch size equaling 32 for 10 epochs. We evaluate the performance on Dva after each epoch with Rouge-2 F1 or Bleu.
The checkpoint with the highest score on Dva is saved for testing. During the inference, we decode with no_repeat_ngram_size=3, length_penalty=1.0 and num_beams=4. We search α and β in {1, 10, 20} empirically and report results with the best validation performance. Specifically, α equals 1. β equals 1 for reading comprehension and 10 for the others. Our experiments are done on a single RTX
2080Ti with 11G GPU memory. Considering the GPU memory footprint, we set K = 2, which is the same for Aug and FreAug for fair comparisons.
We test online approaches with their corresponding test sets. For offline approaches, we focus on two sources of P. One is **in-distribution names**
representing speaker names from the corresponding Dtr. The other is **all-possible names** with more than 117 thousand names3, which can reflect the models' performances in complicated real scenarios. For approaches with sampling operations, we construct data with 3 different random seeds. Results are averaged over the number of runs.
## 5 Results
We show performances of approaches first, followed by ablation studies and evaluations. Then, 3https://data.world/arunbabu/gender-by-names we take a closer look at offline approaches, which show the inherent capability of models, with multifaceted analysis. Hyper-parameter search and case studies are in Appendix C and E.
## 5.1 Performance Of Offline Approaches
The performance on the original test sets is shown in Table 2. Emb only outperforms Vanilla on question generation and Aug only makes little improvements over Vanilla on dialogue summarization. Our approach Ins makes consistent improvements, performing best among offline approaches.
![4_image_0.png](4_image_0.png)
Results with sensitivity scores are in Table 3.
Emb fails to generate more insensitive results, especially for question generation. Aug doesn't make promising improvements on outputs' quality over Vanilla, but it reduces the sensitiveness of models across different test sets and tasks. Ins leads to better results on randomly augmented training data with different random seeds, significantly outperforming Aug. In a word, Ins achieves the best performance among offline approaches.
By comparing the results in Table 3 horizontally, in-distribution names perform better than all-possible names on dialogue summarization, whereas results are opposite on the others. Speaker names in SAMSum are mostly real and popular names, while names in Molweni are online nicknames containing unknown words, such as
"zykotick9". All-possible names contain a large proportion of real names, and a small proportion of names never seen during pre-training which can be regarded as nicknames. In this way, we can observe that the difficulty of modeling names for a model is "SAMSum in-distribution < all-possible
< Molweni in-distribution". In other words, models perform better on more popular names, which is in accord with the success of Fre in Sec. 5.2.
## 5.2 Performance Of Online Approaches
The results of online approaches are in Table 4.
R2 **BertScore**
Approach - S↓ R↓ D↓ - S↓ R↓ D↓
![5_image_0.png](5_image_0.png)
Vanilla 27.66 31.24 13.98 5.51 74.90 11.80 6.41 2.49 Emb 27.63 29.39 13.21 5.20 74.91 11.29 6.26 2.43 Aug 27.82 27.35 12.33 4.86 74.95 10.42 5.77 2.57 Ins? 28.79 21.36 9.50 3.82 75.48 7.94 4.32 **1.71**
Vanilla 27.19 33.10 14.64 5.72 74.83 12.26 6.66 2.60 Emb 27.22 31.38 13.59 5.30 74.89 12.03 6.63 2.55 Aug 27.50 28.17 12.56 4.97 74.96 10.56 5.76 2.25 Ins? 28.44 25.37 11.58 4.62 75.38 9.38 **5.22 2.05**
Bleu RL
Approach - S↓ R↓ D↓ - S↓ R↓ D↓
Vanilla 18.48 34.80 11.96 5.06 57.14 14.94 14.19 5.74 Emb 19.00 38.24 13.76 5.79 57.31 17.55 16.85 6.82 Aug 17.89 26.24 8.22 3.52 56.26 12.04 11.35 4.69 Ins? 19.58 16.90 5.53 2.35 57.47 7.83 8.09 **3.35**
Vanilla 18.56 29.64 10.04 4.26 57.38 12.98 11.88 4.90 Emb 18.70 35.52 12.55 5.27 57.28 16.05 15.26 6.20 Aug 17.81 23.09 7.15 3.06 56.08 10.66 9.64 4.03 Ins? 19.57 14.65 4.41 1.90 57.49 6.96 6.58 **2.78**
Bleu RL
Approach - S↓ R↓ D↓ - S↓ R↓ D↓
Vanilla 28.34 54.98 6.54 2.83 73.07 7.54 9.69 4.17 Emb 25.80 57.78 7.17 3.13 69.29 9.83 12.30 5.31 Aug 27.07 55.96 6.04 2.62 72.11 8.14 10.42 4.50 Ins? 29.31 52.03 4.53 1.97 74.04 5.65 7.66 **3.32**
Vanilla 28.56 53.94 5.39 2.34 73.60 6.39 8.21 3.53 Emb 25.99 56.22 5.11 2.21 69.59 7.29 8.60 3.69 Aug 27.12 54.72 5.15 2.23 72.23 6.39 8.29 3.58 Ins? 29.34 51.38 3.66 1.59 74.35 4.62 6.15 **2.64**
(c) Reading Comprehension
All speaker names will be normalized into fixed code names in ID, so that the test set for ID is changeless for each sample and the sensitivity scores are actually 0.0. Unfortunately, its quality scores lag behind Ins and even drop dramatically on dialogue summarization. Thus, it's not recommended to be a necessary data pre-processing step. Fre makes some improvements on R2 for dialogue summarization by comparing with the vanilla model, which is consistent with the results in (Khalifa et al., 2021), whereas the drops in BertScore were not mentioned in their work. The sensitivity scores are lower than those for offline approaches in Table 3. To better understand the gains of Fre, we further test the vanilla model with the same test sets replaced by frequent names. It achieves similar performance on Rouge-2 (28.18) and BertScore
(75.13) with the vanilla model. The sensitivity score D-BertS is 2.24, which is lower than 2.49 of
![5_image_1.png](5_image_1.png)
![5_image_2.png](5_image_2.png)
Vanilla in Table 3. It shows that the advantages of Fre not only come from using the group of frequent names that are easier for a model to understand, but also from doing fine-tuning with this group of names. FreAug doesn't improve the outputs' quality consistently, but reduces the sensitivity scores. FreIns performs the most insensitively with better generation quality among online approaches.
## 5.3 Ablation Study
Ablation studies of our full approach Ins are in Table 5. Aug is regarded as an ablation representing the model trained without any auxiliary losses.
Both insensitivity losses outperform Aug with using Ldh topping the rank on most metrics, showing that penalizing differences on the decoder hidden states has more direct effects on the outputs. Combining both losses induces more performance gains.
![5_image_3.png](5_image_3.png)
## 5.4 Human Evaluation
Taking dialogue summarization as an example, we did human evaluation to further prove the improvement on sensitivity by sampling 200 pairs of generations for each offline approach and asked three proficient English speakers from Asia to label each
![6_image_0.png](6_image_0.png)
case out of 4 choices by selecting the primary one that makes the generations distinct: **Infor**mation difference means both outputs contain different information or keywords. **Fact**ual difference refers to different matchings between speakers and events.
Expression difference is that the outputs have minor differences, such as capitalization and different orders of juxtaposed names. **Same** represents the identical outputs. The results are in Fig. 2 with 0.64 Kappa score, indicating substantial agreement. We can see that content distinction is the primary difference type. Ins generates less distinct contents and more identical results, outperforming the baselines.
## 5.5 Sensitivity Among Name Groups
We collect specific groups of names in terms of popularity and race and show differences in the quality performances on test sets constructed with corresponding names. The sensitivity among different groups for each method are reflected by the scattering of dots vertically in Fig. 3.
Name groups by popularity and usage: We define 4 groups. **Frequent** including words frequently and solely used as human names is mentioned before. **Polysemous** represents words frequently used but not specialized for human names, such as June and Florida. **Rare** is names with low occurrence times like Paderau. **Unknown** names are similar to random strings from a model's perspective since they haven't been exposed to the model. The last three groups are collected by counting occurrences of all-possible names in the pretraining corpus of BART. We select 200 names for each group (More details are in Appendix B).
According to Fig. 3a, we can see that models usually perform poorly on Polysemous, even worse than Rare and Unknown. The daily meanings dominate the representation of this word and confuse the model. Frequent generally outperforms other groups. We conclude that words frequently and uniquely used as names that result in more specialized embeddings in pre-trained models and perform
![6_image_1.png](6_image_1.png)
better. Moreover, comparing the sensitivity among different approaches, Ins outperforms the baselines in most cases except Aug. It achieves more centralized dots due to the performance reduction on the dominant groups or even all groups, showing that models tend to overfit with augmented data without our losses. To recap, Ins results in consistent improvements over Vanilla among different tasks compared with other baselines.
Name groups by races: Names from different races are from Tzioumis (2018) by assigning each name to a race with the highest probability. 4 major groups4are gathered, including Non-Hispanic White, **Hispanic** or Latino, Non-Hispanic **Black**
or African American, and Non-Hispanic **Asian** or Native Hawaiian or Other Pacific Islander. To avoid the influence of the various number of names, we select the most frequent 50 names in each group and show the results in Fig. 3b. All of the approaches show discrimination against Asian in dialogue summarization. Emb, Aug and Ins improve the insensitivity among different races compared with Vanilla, and Ins is better with the guarantee on quality. We consider to introduce special designs on demographic features in the future.
## 5.6 Sensitivity On An Individual Speaker
We can also only change the name of a single speaker each time to analyze fine-grained sensitivity. The results of offline approaches for dialogue summarization are shown in Table 6 (see more in Appendix D). The sensitivity scores are lower than the ones in Table 3. It seems that the sensitivity of models is proportional to the amount of changes in test samples, i.e., whether changing all speaker names (change-all-name) or only one speaker name (change-one-name). However, it's not always true and changing one name can be more sensitive than changing all names. Taking the results from Ins as an example, around 52.01%
samples have speakers whose change-one-name D-BertS is higher than the corresponding changelall-name one. Over 34.80% of the change-onename D-BertS averaged by speakers from the same dialogue is also higher than the change-all-name D-BertS.
![7_image_0.png](7_image_0.png)
1.6 1.9 2.2 We further show the trends between speaker features and their sensitivity scores in Fig. 4. Names are more sensitive and thus crucial for speakers at the start of a dialogue or with more utterances, deserving attention for further improvements.
1 2 3 D-BertS (%)
Utterance Index of the First Appearance
![7_image_2.png](7_image_2.png)
## 6 Related Work
Entity/Name Bias in Narrative Texts: Previous work on entity biases shows that pre-trained language models are sensitive to changes in narrative text. Some works (Zhang et al., 2018, 2017; Wang et al., 2022b) for relation extraction mask entities in the context to prohibit learning spurious features between entities and relations. Yan et al. (2022)
analyzes the robustness of models by entity renaming on reading comprehension. They all consider different kinds of entities, such as person and organization. However, the entities have the potential to be grounded in real life (Smith and Williams, 2021), and the background knowledge of these entities may be necessary for understanding. Besides, the context and the entities cannot always be well-separated, especially persons Yan et al. (2022).
Thus, masking and switching operations are not always suitable for these entities. In our work, we focus on speakers that are not grounded.
Names that are not grounded have also been studied. Information such as age, gender and race can
![7_image_1.png](7_image_1.png)
be reflected by a given name to some extent (Girma, 2020), while models learned with statistical features may make wrong predictions about specific persons or bring unexpected stereotypes (Bertrand and Mullainathan, 2004). Romanov et al. (2019) takes occupation classification as an example and discourages the model to predict an individual's occupation depending on his/her name. Wang et al.
(2022a) presents that machine translation models perform poorly on female names when translating into languages with grammatical gender and also have sentiment bias caused by names with sentiment-ambiguous words. Samples in all these works only have a single name each, while multiple speaker names are entangled in a single dialogue.
Fairness of Dialogue Models: Safety and fairness issues on generations from dialogue models are crucial for implementation in practice. Harmful differences in responses caused by different demographic personas are observed in well-known dialogue systems (Sheng et al., 2021; Dinan et al.,
2020), including offensiveness, gender bias, race discrimination, etc. These unfairness phenomena also exist in dialogue systems without considering persons (Liu et al., 2020), reflected by the politeness, sentiment, diversity and other aspects of a response. Recent work from (Smith and Williams, 2021) shows dialogue models treat their conversation partner differently for different speaker names. Instead of analyzing differences in open-ended dialogue systems, we target on text generation tasks given dialogues and show that sensitivity/unfairness also exists among speakers.
## 7 Conclusion
This paper focuses on the speaker name sensitivity in the text generation from dialogues. We provide a classification for previous approaches, and propose the insensitivity losses to reduce the sensitivity while achieving favorable generation quality. Fair comparisons and comprehensive analysis are done among different approaches for evaluating the sensitivity quantitatively. More approaches targeting dialogue sensitivity issues are expected.
## Limitations
Our work has the following limitations:
First, we cannot generalize our conclusions to other languages that are dramatically different from English or more complicated multi-lingual scenarios without further experiments.
Second, we didn't consider any special designs on demographic features of names in our proposed approach. As shown in Sec. 5.5, discrimination does exist among different groups. Although Ins outperforms other baselines overall, there is still room to improve insensitivity among different groups for tasks with longer outputs containing multiple speaker names. We hypothesize that demographic features of names can be added through a more dedicated data augmentation strategy.
Third, our experimentation was restricted to the BART model in this paper. The reason is that among all the models that can be fine-tuned with our limited resources, including T5 and GPT-2, BART is still the best and the most popular, therefore we pick BART as the target of this study. Our intention is to devote the limited paper space to a more in-depth analysis of the problem using a range of tasks. Besides, it should be noticed that the speaker name sensitivity is still an issue with recent large pre-trained models, as shown in the example of dialogue summarization with outputs from ChatGPT in Fig. 5. The two summaries are expected to be the same, modulo speaker names. However, the third speaker (Sergio/Ashley) is not even mentioned in Summary-2.
We will try to address these limitations in the future.
## Ethics Statement
All of the name lists we adopted in this paper are borrowed from public websites (https://www.
![8_image_0.png](8_image_0.png)
ssa.gov) and previous publications (Tzioumis, 2018; Khalifa et al., 2021). We considered only binary genders and four different racial groups, which are clearly incomplete for depicting all humans. Our work is mainly at drawing researchers' attention to the unfairness caused by speaker names in text generation tasks given dialogues. These demographic features are selected to shed light on this potential issue and our method is not restricted to any specific demographic groups.
## Acknowledgments
This work was generously supported by the CMB
Credit Card Center & SJTU joint research grant, and Meituan-SJTU joint research grant.
## References
Ashutosh Baheti, Maarten Sap, Alan Ritter, and Mark Riedl. 2021. Just say no: Analyzing the stance of neural dialogue generation in offensive contexts.
In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 4846–4862.
Marianne Bertrand and Sendhil Mullainathan. 2004.
Are emily and greg more employable than lakisha and jamal? a field experiment on labor market discrimination. *American economic review*, 94(4):991–
1013.
Yulong Chen, Yang Liu, Liang Chen, and Yue Zhang.
2021. Dialogsum: A real-life scenario dialogue
summarization dataset. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP*
2021, pages 5062–5074.
Leyang Cui, Yu Wu, Shujie Liu, Yue Zhang, and Ming Zhou. 2020. Mutual: A dataset for multi-turn dialogue reasoning. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1406–1416.
Emily Dinan, Angela Fan, Adina Williams, Jack Urbanek, Douwe Kiela, and Jason Weston. 2020.
Queens are powerful too: Mitigating gender bias in dialogue generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8173–8188.
Hewan Girma. 2020. Black names, immigrant names:
Navigating race and ethnicity through personal names. *Journal of Black Studies*, 51(1):16–36.
Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. 2019. Samsum corpus: A
human-annotated dialogue dataset for abstractive summarization. In *Proceedings of the 2nd Workshop* on New Frontiers in Summarization, pages 70–79.
Jia-Chen Gu, Tianda Li, Quan Liu, Zhen-Hua Ling, Zhiming Su, Si Wei, and Xiaodan Zhu. 2020.
Speaker-aware bert for multi-turn response selection in retrieval-based chatbots. In Proceedings of the 29th ACM International Conference on Information
& Knowledge Management, pages 2041–2044.
Zihao He, Leili Tavabi, Kristina Lerman, and Mohammad Soleymani. 2021. Speaker turn modeling for dialogue act classification. In *Findings of the Association for Computational Linguistics: EMNLP*
2021, pages 2150–2157. Association for Computational Linguistics.
Peter Henderson, Koustuv Sinha, Nicolas AngelardGontier, Nan Rosemary Ke, Genevieve Fried, Ryan Lowe, and Joelle Pineau. 2018. Ethical challenges in data-driven dialogue systems. In *Proceedings of* the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 123–129.
Muhammad Khalifa, Miguel Ballesteros, and Kathleen Mckeown. 2021. A bag of tricks for dialogue summarization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8014–8022.
Seokhwan Kim, Michel Galley, Chulaka Gunasekara, Sungjin Lee, Adam Atkinson, Baolin Peng, Hannes Schulz, Jianfeng Gao, Jinchao Li, Mahmoud Adada, et al. 2019. The eighth dialog system technology challenge. *arXiv preprint arXiv:1911.06394*.
Jiaqi Li, Ming Liu, Min-Yen Kan, Zihao Zheng, Zekun Wang, Wenqiang Lei, Ting Liu, and Bing Qin. 2020.
Molweni: A challenge multiparty dialogues-based machine reading comprehension dataset with discourse structure. In *Proceedings of the 28th International Conference on Computational Linguistics*,
pages 2642–2652.
Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. Dailydialog: A manually labelled multi-turn dialogue dataset. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 986–995.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81. Association for Computational Linguistics.
Haochen Liu, Jamell Dacon, Wenqi Fan, Hui Liu, Zitao Liu, and Jiliang Tang. 2020. Does gender matter? towards fairness in dialogue systems. In *Proceedings* of the 28th International Conference on Computational Linguistics, pages 4403–4416.
Zhengyuan Liu and Nancy Chen. 2021. Controllable neural dialogue summarization with personal named entity planning. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language* Processing, pages 92–106.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318.
Vinodkumar Prabhakaran, Ben Hutchinson, and Margaret Mitchell. 2019. Perturbation sensitivity analysis to detect unintended model biases. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5740–5745.
Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky, and Adam Tauman Kalai.
2019. What's in a name? reducing bias in bios without access to protected attributes. *arXiv preprint* arXiv:1904.05233.
Emily Sheng, Josh Arnold, Zhou Yu, Kai-Wei Chang, and Nanyun Peng. 2021. Revealing persona biases in dialogue systems. *arXiv preprint* arXiv:2104.08728.
Vered Shwartz, Rachel Rudinger, and Oyvind Tafjord.
2020. "you are grounded!": Latent name artifacts in pre-trained language models. pages 6850–6861.
Eric Michael Smith and Adina Williams. 2021. Hi, my name is martha: Using names to measure and mitigate bias in generative dialogue models. arXiv preprint arXiv:2109.03300.
Konstantinos Tzioumis. 2018. Demographic aspects of first names. *Scientific data*, 5(1):1–9.
Jun Wang, Benjamin Rubinstein, and Trevor Cohn.
2022a. Measuring and mitigating name biases in neural machine translation. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2576–2590.
Yiwei Wang, Muhao Chen, Wenxuan Zhou, Yujun Cai, Yuxuan Liang, Dayiheng Liu, Baosong Yang, Juncheng Liu, and Bryan Hooi. 2022b. Should we rely on entity mentions for relation extraction? debiasing relation extraction with counterfactual analysis. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3071–3081, Seattle, United States. Association for Computational Linguistics.
Jing Xu, Da Ju, Margaret Li, Y-Lan Boureau, Jason Weston, and Emily Dinan. 2020. Recipes for safety in open-domain chatbots. *arXiv preprint* arXiv:2010.07079.
Jun Yan, Yang Xiao, Sagnik Mukherjee, Bill Yuchen Lin, Robin Jia, and Xiang Ren. 2022. On the robustness of reading comprehension models to entity renaming. In *Proceedings of the 2022 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 508–520, Seattle, United States. Association for Computational Linguistics.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q
Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. In Proceedings of the 8th International Conference on Learning Representations.
Yuhao Zhang, Peng Qi, and Christopher D Manning.
2018. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2205–2215.
Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D. Manning. 2017. Positionaware attention and supervised data improve slot filling. In *Proceedings of the 2017 Conference on* Empirical Methods in Natural Language Processing, pages 35–45, Copenhagen, Denmark. Association for Computational Linguistics.
## A Illustration For Insensitivity Losses
Fig. 6 depicts the positions of the cross attentions and the final decoder hidden states in the encoderdecoder Transformer model for a better understanding of our two insensitivity losses.
## B Name Groups
To collect polysemous, rare and unknown names, we counted the number of occurrences of all-possible names in the pre-training corpus, Wikipedia5and BookCorpus6. We denote the frequency of a name as f*exact* and fner representing doing exact string match or named entity recognition when counting name occurrences respectively.
Rare contains names shown at least once and with the lowest f*exact* not equaling 0. Unknown includes names with f*exact* equaling 0. According to our observations, we find that names with a larger f*exact* are likely to be polysemy and are not uniquely used as personal names. So, we design a metric to recognize such names as follows:
$$u={\frac{r a n k(f_{e x a c t})-r a n k(f_{n e r})}{r a n k(f_{e x a c t})+r a n k(f_{n e r})}}\qquad(9)$$
rank(·) means that the ranking of a name among the whole name list based on its frequency in descending order 7. A higher u shows a higher level of uniqueness of a word as a name. The names with the lowest u scores are selected as Polysemous in Sec. 5.5.
Examples of names in different name groups are listed as follows:
- **Frequent**: Alexis, Philip, Matthew, Frank, Tyler, Roy, Catherine, Joan, Amanda, Henry
- **Polysemous**: July, Sea, March, Paris, Treasure, Oxford, Romania, Ice, Jersey, Navy
- **Rare**: Makinzy, Diyanna, Javione, Zamire, Harkeem, Jerralyn, Crissi, Monque, Ajahar, Dijion
- **Unknown**: Jaliyiah, Cardelia, Ravindr, Josephanthony, Tyjohn, Tnaya, Jyren, Kashaunda, Jaykob, Latonnia
- **White**: Kim, Georgia, Joseph, Mark, Martin, James, William, Barbara, Richard, Victoria
- **Hispanic**: Sofia, Daisy, Luis, Manuel, Dora, Emilia, Minerva, Antonio, Oscar, Francisco
- **Black**: Kenya, Ebony, Anderson, Kelvin, Dexter, Cleveland, Percy, Mamie, Jarvis, Essie
- **Asian**: Kong, Muhammad, Gang, Mai, Chi, Krishna, Can, Wan, Wang, Ferdinand 5https://huggingface.co/datasets/wikipedia 6https://huggingface.co/datasets/bookcorpus 7Doing named entity recognition on the whole pre-training corpus is too time-consuming. Therefore, we randomly sample 1% of the data for counting the fner and use the name rankings in Eq. 9 to get the uniqueness score.
![11_image_0.png](11_image_0.png)
## C Hyper-Parameter Search
We empirically searched the hyper-parameters α and β in {1, 10, 20} respectively with 9 combinations for Ins. Due to the limited computation resources and the large search space, we trained the model with different combinations for a single time, selected the best 3 combinations and repeated experiments with different random seeds to determine the final choice of α and β according to the performance on Dva. Finally, we set (α, β) as
(1, 10), (1, 10), (1,1) for dialogue summarization, question generation and reading comprehension respectively. We directly borrow these settings for FreIns.
In Fig. 7, we show the performances of Ins under different combinations for dialogue summarization on the vanilla test set with a single run. We can see that all of the results outperform the baselines in Table 2 and the standard deviation of BertScore among different combinations is only 0.14%, showing the stable improvements of Ins over the baselines.
![11_image_5.png](11_image_5.png)
![11_image_1.png](11_image_1.png)
![11_image_3.png](11_image_3.png)
![11_image_4.png](11_image_4.png)
Approach - S↓ R↓ D↓ - S↓ R↓ D↓
![11_image_2.png](11_image_2.png) Vanilla 27.29 25.53 11.05 4.42 74.64 9.65 5.19 2.05 Emb 27.41 24.20 10.87 4.33 74.90 9.49 5.29 2.09 Aug 27.51 22.24 9.89 3.96 74.83 8.50 4.67 1.85 Ins? 28.70 16.64 7.19 2.92 **75.44 6.11 3.18 1.28**
Vanilla 27.32 25.77 11.07 4.45 74.81 9.61 5.15 2.04 Emb 27.26 24,98 10.68 4.25 74.80 9.57 5.16 2.02 Aug 27.36 22.73 10.04 4.03 74.86 8.56 4.69 1.87 Ins? 28.38 18.65 8.12 3.29 **75.35 6.89 3.75 1.50**
Approach - S↓ R↓ D↓ - S↓ R↓ D↓
Vanilla 17.93 18.76 6.08 2.58 56.85 8.17 7.55 3.12 Emb 18.34 22.22 7.63 3.26 56.84 10.07 9.62 3.98 Aug 18.06 14.82 4.39 1.90 56.12 6.91 6.38 2.69 Ins? 19.45 9.66 2.75 1.18 **57.31 4.50 4.27 1.81**
Vanilla 17.91 17.73 5.75 2.46 56.67 7.76 7.05 2.95 Emb 18.67 20.80 7.08 3.06 56.86 9.47 8.89 3.73 Aug 17.97 13.04 3.62 1.57 56.12 6.06 6.50 2.25 Ins? 19.60 8.11 2.22 0.97 **57.51 3.77 3.42 1.47**
Approach - S↓ R↓ D↓ - S↓ R↓ D↓
Vanilla 27.96 54.08 3.85 1.67 73.91 4.49 5.50 2.37 Emb 25.52 56.61 4.28 1.85 70.20 5.32 6.37 2.75 Aug 26.54 54.76 3.69 1.60 72.53 4.57 5.87 2.55 Ins? 29.03 52.03 2.48 1.08 **74.81 5.65 4.41 1.91**
Vanilla 27.82 53.48 2.81 1.22 73.97 3.28 4.07 1.77 Emb 25.14 56.08 3.04 1.32 70.51 4.31 4.89 2.12 Aug 26.64 53.71 2.92 1.27 72.68 3.61 4.61 2.00 Ins? 29.40 51.20 1.93 0.83 **74.94 2.41 3.13 1.36**
## D Additional Results Of Sensitivity On An Individual Speaker
Results for sensitivity on an individual speaker on all of the three tasks are in Table 7 and Table 8.
Both tables lead to the same observations and con-
![12_image_0.png](12_image_0.png)
![12_image_4.png](12_image_4.png)
clusions as discussed in Sec 5.1 and Sec 5.2, where Ins and FreIns perform best among offline and online approaches correspondingly.
![12_image_1.png](12_image_1.png)
![12_image_2.png](12_image_2.png)
![12_image_3.png](12_image_3.png)
## E Case Study
We show cases for different tasks in this section.
The case for dialogue summarization is in Fig. 8.
Vanilla extracts different information for two sets of names: "She will bring eggs" and "Ethie is off on Friday". It also uses different expressions: "will come to ... for Easter" and "invited ... for Easter".
Besides, "Louise" is only mentioned in the second summary. Emb has the information difference and the expression difference. Meanwhile, it outputs incorrect content in the second summary, where
"chocolat ones" is used for describing "eggs" in the input dialogue. Aug outputs more information for the first set of names. Ins treats the two sets of names equally with the same generations modulo the speaker names.
In the case of question generation in Fig. 9, all baselines generate "who gives Jernee suggestions?" for the second set of names, which is an inaccurate question with multiple candidate answers. Emb also generates a "Who" with the capitalized first
![13_image_0.png](13_image_0.png)
letter, which is also different from the other one with lowercase "who" if we compare them strictly.
Ins generates identical and accurate questions for the same dialogue with different speaker names.
For reading comprehension in Fig. 10, both Vanilla and Emb generate quite different answers for two sets of names. Aug generates consistent but wrong answers considering the one-to-one mapping of speaker names. Ins outputs identical correct and complete answers, outperforming the baselines.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
It is the section after the conclusion.
✗ A2. Did you discuss any potential risks of your work?
We include an Ethics Statement after the Limitations. Our work aims at reducing sensitivity on speaker names. In other words, we try to reduce potential risks of current models.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4.2
✓ B1. Did you cite the creators of artifacts you used?
Section 4.2
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
All of the datasets are publicly available and we will only release the codes and results of our work.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 3.1 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. All of the datasets are publicly available and widely-used.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. We are not a dataset paper. We provided necessary information about the dataset in Section 4.2 and Table 1. More details please refer to their original dataset paper.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 4.2, Table 1
## C ✓ **Did You Run Computational Experiments?** Section 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 4.3 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4.3, Appendix C
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4.3
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4.2. We followed the previous work on task-specific evaluation metrics and will release the corresponding codes.
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Section 5.4
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Section 5.4
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Not applicable. We had student volunteers to do the human evaluation.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. The volunteers knew how the data would be used before doing the human evaluation.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. We did not collect new datasets, only a simple human evaluation.
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Section 5.4. |
qiu-etal-2023-topic | Topic and Style-aware Transformer for Multimodal Emotion Recognition | https://aclanthology.org/2023.findings-acl.130 | Understanding emotion expressions in multimodal signals is key for machines to have a better understanding of human communication. While language, visual and acoustic modalities can provide clues from different perspectives, the visual modality is shown to make minimal contribution to the performance in the emotion recognition field due to its high dimensionality. Therefore, we first leverage the strong multimodality backbone VATT to project the visual signal to the common space with language and acoustic signals. Also, we propose content-oriented features Topic and Speaking style on top of it to approach the subjectivity issues. Experiments conducted on the benchmark dataset MOSEI show our model can outperform SOTA results and effectively incorporate visual signals and handle subjectivity issues by serving as content {``}normalization{''}. | # Topic And Style-Aware Transformer For Multimodal Emotion Recognition
Shuwen Qiu1 Nitesh Sekhar2 **Prateek Singhal**2 [email protected] [email protected] [email protected] 1University of California, Los Angeles 2Amazon
## Abstract
Understanding emotion expressions in multimodal signals is key for machines to have a better understanding of human communication.
While language, visual and acoustic modalities can provide clues from different perspectives, the visual modality is shown to make minimal contribution to the performance in the emotion recognition field due to its high dimensionality.
Therefore, we first leverage the strong multimodality backbone VATT to project the visual signal to the common space with language and acoustic signals. Also, we propose contentoriented features Topic and Speaking style on top of it to approach the subjectivity issues. Experiments conducted on the benchmark dataset MOSEI show our model can outperform SOTA
results and effectively incorporate visual signals and handle subjectivity issues by serving as content "normalization".
## 1 Introduction
Emotion recognition is intrinsic for social robots to interact with people naturally. The ability to tell emotional change and propose timely intervention solutions can help maintain people's mental health and social relations. Though the traditional task of sentiment analysis is purely based on text (Wang et al., 2020; Ghosal et al., 2020; Shen et al., 2021),
humans express emotions not only with spoken words but also through non-verbal signals such as facial expressions and the change of tones. Therefore, following the current trend of multimodal emotion recognition (Delbrouck et al., 2020; Zadeh et al., 2017; Rahman et al., 2020; Gandhi et al.,
2022), we focus on addressing problems of understanding the expressed emotions in videos along with their audio and transcripts.
In this work, we tackle the problem of the multimodal emotion recognition task from two major issues: Minimal contribution of visual modality, and emotional subjectivity. Previous works which have used multimodal approaches (Rahman et al.,
![0_image_0.png](0_image_0.png)
2020; Joshi et al., 2022; Delbrouck et al., 2020)
have shown that text+audio outperforms the results of combining all three modalities. While facial and gesture signals contain abundant information, they tend to introduce more noise to the data due to its high dimensionality. In order to increase the contribution from visual modality , we propose to take advantage of the strong multimodal backbone VATT (Akbari et al., 2021) that can project features of different granularity levels into a common space.
On the other hand, the expression of emotion is subjective. People's emotion judgment can be influenced by enclosed scenarios. As shown in the left two columns in Figure 1, though the two examples are all labeled as "happy", the signals we use to detect "happy" may not be the same. In a public speech, showing gratitude may mean a positive sentiment while in movie reviews, we may focus more on sentiment words like good or bad. Also, subjectivity may come from individual differences in their own emotional intensity. As the examples shown in the right three columns in Figure 1, the sadness and happiness of the person in the excited style are more distinguishable through his face while the person in the calm style always adopts a calm face that makes sad and happy less recognizable. Therefore, we introduce content-oriented features: topic and speaking style serving as a content "normalization" for each person.
Our work makes the following contribution:
1) We propose to leverage the multimodal backbone to reduce the high dimensionality of visual modality and increase its contribution to the emotion recognition task.
2) We incorporate emotion-related features to handle modeling issues with emotional subjectivity 3) Experiments conducted on the benchmark dataset MOSEI show our model can outperform SOTA results and effectively incorporate visual signals and handle subjectivity issues.
## 2 Related Work
Emotion recognition using a fusion of input modalities such as text, speech, image, etc is the key research direction of human-computer interaction. Specific to the area of sentiment analysis, Multimodal Transformer applies pairwise crossattention to different modalities (Tsai et al., 2019).
The Memory Fusion Network synchronizes multimodal sequences using a multi-view gated memory that stores intra-view and cross-view interactions through time (Zadeh et al., 2018). TFN
performs the outer product of the modalities to learn both the intra-modality and inter-modality dynamics(Sahay et al., 2018). (Rahman et al.,
2020) begins the endeavor to take BERT (Devlin et al., 2018) as a strong backbone pretrained on large scale corpus. (Arjmand et al., 2021) follows the direction and combines Roberta with a light-weighed audio encoder to fuse the text and audio features. A recent work (Yang et al., 2022a)
presents a self-supervised framework to pretrain features within a single modality and across different modalities. Other frameworks include context and speaker-aware RNN (Shenoy and Sardana, 2020; Wang et al., 2021), graph neural networks modeling knowledge graphs and inter/intra relations between videos (Joshi et al., 2022; Fu et al.,
2021; Lian et al., 2020), while (Zhu et al., 2021)
has used topic information to improve emotion detection
## 3 Method 3.1 Overview
Our model aims to predict the presence of different emotions given an utterance-level video input along with its audio and transcripts. Figure 2 shows the overall structure of our model. To first get a better alignment of features from different modalities,
![1_image_0.png](1_image_0.png)
the raw video input will be fed into our backbone VATT and we can get the corresponding projected features for visual, acoustic, and textual signals separately. Meanwhile, our high-level content module will extract the corresponding topic and style representation. Queried by the video context, the topic and style features are further merged by a crossattention layer. Then both low-level and high-level features are concatenated and put into the final classification layer.
## 3.2 Backbone
Video-Audio-Text Transformer (VATT) is a framework for learning multimodal representations that takes raw signals as inputs. For each modality encoder, VATT appends an aggregation head at the beginning of the input sequence. The corresponding latent feature will serve as the projection head for this modality. For pretraining, contrastive loss is applied to align features from different modalities in a common projected space. Details can be found in (Akbari et al., 2021).
## 3.3 Content-Oriented Features 3.3.1 Topic
For each utterance input, we will first predict the topic of this utterance and feed the corresponding topic embedding into the model. Since we don't have the ground truth label for topics, we use Latent Dirichlet Allocation (LDA) (Blei et al., 2003)
model to cluster all the text from the training set into 3 topics. The number of topics is decided by grid search.
## 3.3.2 Speaking Style
We define speaking style based on the expression coefficient and the projection parameters of a 3DMM model (Blanz and Vetter, 1999). In a 3DMM model, the face shape is represented as an affine model of facial expression and facial identity: S = S¯ + Bidα + Bexpβ. This 3D face will be
| Weighted F1 | Happy | Sad | Angry | Surprise | Disgust | Fear |
|--------------------------|---------|-------|---------|------------|-----------|--------|
| Multilogue-Net | 70.60 | 70.70 | 74.40 | 87.80 | 83.40 | 86.00 |
| TBJE | 65.60 | 67.90 | 76.00 | 87.20 | 84.50 | 86.10 |
| MESM | 65.4 | 65.2 | 67.00 | 66.70 | 77.7 | 65.8 |
| Ours-Full | 71.18 | 73.57 | 76.62 | 87.77 | 82.79 | 86.03 |
| Full w/o text | 68.71 | 70.84 | 72.65 | 87.77 | 78.59 | 86.03 |
| Full w/o audio | 70.23 | 73.25 | 74.02 | 87.82 | 81.94 | 86.03 |
| Full w/o video | 68.95 | 72.76 | 76.83 | 87.74 | 82.74 | 86.03 |
| Full w/o content feature | 69.12 | 72.07 | 75.18 | 87.77 | 81.70 | 86.03 |
| Full w/o context | 70.87 | 73.54 | 75.18 | 87.77 | 80.76 | 86.03 |
| Full w/o style | 69.75 | 73.30 | 75.67 | 87.82 | 82.76 | 86.03 |
| Full w/o topic | 70.48 | 73.32 | 75.67 | 87.77 | 82.69 | 86.03 |
projected into a 2D image by translation and rotation p. Since there are multiple video frames, the expression coefficient β and the project parameter p will become time series β(t) and p(t). For a detailed analysis of the relations between the 3DMM
parameters and the talking styles, (Wu et al., 2021)
collected a dataset consisting of 3 talking styles: excited, tedious, and solemn. They find out that the standard deviation of the time series features and the gradient of these features are closely related to the styles. The final style code are denoted as σ(β(t)) ⊕ σ(
∂β(t)
∂t ) ⊕ σ(
∂p(t)
∂t ), ⊕ signifies the vector concatenation.
| Accuracy | 2-Class | 7-Class |
|---------------------|-----------|-----------|
| Multilogue-Net | 82.88 | 44.83 |
| TBJE | 82.40 | 43.91 |
| Topic-Style-Context | 79.75 | 48.26 |
## 3.3.3 Aggregating Different Features
Given each data input with its corresponding video ID, we collect all the transcripts with the same video ID as the context, and the context feature will be extracted from the text encoder of VATT. To adapt general topic and style features to the current speaker, we treat them as the feature sequence of length 2 and use an additional cross-attention layer to aggregate these features queried by the video context. Then this information along with the context and aligned features will be concatenated and fed into the final linear classifier.
## Happy Sad Angry Surprise Disgust **Fear** 8735 4269 3526 1642 2955 1331
Table 3: Label distribution of MOSEI Dataset
## 4 Dataset
We conduct our experiments on CMU-Multimodal Opinion Sentiment and Emotion Intensity (CMUMOSEI (Bagher Zadeh et al., 2018)) dataset. The dataset contains more than 23,500 sentence utterance videos from more than 1000 online YouTube speakers. Each sentence is annotated for a sentiment intensity from highly negative (-3) to highly positive (+3) and for 6 emotion classes: happiness, sadness, anger, fear, disgust, and surprise.
The number of utterances for train/test/dev is 16327/4662/1871 separately. Label distribution of the training set is shown in Table 3
## 5 Experiments 5.1 Setup
We train our models on 8 V100 GPU for 8 hours using the Adam optimizer (Kingma and Ba, 2014)
with a learning rate of 1e − 4 and a mini-batch size of 64. The total number of parameters of our model is 155M. For topic clustering, we adopt the scikitlearn LDA library (Pedregosa et al., 2011). We extract the style code for each video using https:
//github.com/wuhaozhe/style_avatar. The final model is selected based on validation accuracy on the development set.
Task We evaluate the performance of our model on two tasks: 1) Multi-label emotion recognition: the model needs to classify whether each of the 6 emotion classes presents or not. 2) Sentiment analysis: the model is tested on both 2-class (sentiment is positive or negative) and 7-class (a scale from -3 to +3) classification.
Evaluation Since the labels in MOSEI are unbalanced, we use the weighted F1 score for each emotion as the evaluation metric. We compare the performance with Multilogue-Net (Shenoy and Sardana, 2020) that adopted context and speaker-aware RNN , TBJE (Delbrouck et al., 2020), a state-ofthe-art method using cross-attention for modality fusion and MESM (Dai et al., 2021), who were the first to introduce a fully end-to-end trainable model for the multimodal emotion recognition task .
There are two recent works on emotion recognition, COGMEN (Joshi et al., 2022) and i-Code (Yang et al., 2022b). Since COGMEN adopted a structural representation that can exploit more relational information from other data samples and i-Code did not report the same metrics and is not opensourced, we will not compare with them in this paper.
## 5.2 Emotion Recognition
Table 1 shows our quantitative results. Compared with other SOTA methods in the first three rows, our full model achieves the best performance on recognizing happy, sad and angry. We reckon that it is due to very limited data for surprise and fear to train the big backbone, our model does not gain much improvement (shown in Table 3). To further analyze the contribution of each component of our model design, we also conduct a detailed ablation study: 1) We first remove the aligned features from the backbone each at a time. We can see from the results in the second block that combining all three modalities in our full model outperforms the bimodality input. Especially contrasting rows with and without video input, their comparative performance validates that our model can learn effectively from visual modalities. 2) In the third block, we report the performance when we simply concatenate aligned features as the input to the emotion classification layer without high-level features. The degraded performance reveals the efficacy of our content feature design. 3) Lastly, we investigate the influence of each content feature and the aggregation using context. To remove the context, we directly apply a self-attention layer to the feature sequence and use a linear layer to project the outputs into the aggregate feature dimension. For topic and style, we just remove the corresponding feature from the input. As shown in the last block, removing any part will result in a performance drop.
Overall, our full model in comparison yields the best performance.
## 5.3 Sentiment Analysis
To further validate our methods, we run our model on the other subtask, sentiment analysis. For each data sample, the annotation of sentiment polarity is a continuous value from -3 to 3. -3 means extremely negative, and 3 means extremely positive.
Our model is trained to regress the sentiment intensity. Then we ground the continuous value into 2 or 7 classes to calculate the accuracy. Contrasting 2-class and 7-class results in Table 2, our model works better for more fine-grained classification.
## 6 Qualitative Results
![3_Image_0.Png](3_Image_0.Png)
We first show that our model can correctly recognize emotions under different topics. As shown in Figure 3, for movie reviews, finance or commercial advertisements, the model can use different cues to predict the emotion as happy or sad. In Figure 4, our model can distinguish between excited/calm speaking styles and recognize the slight emotional change within each person. (all example videos can be found in supp).
![3_image_1.png](3_image_1.png)
## 7 Conclusion And Future Work
This study employs the powerful multimodal backbone VATT to facilitate feature alignment across various modalities. Moreover, content-specific features are introduced to mitigate the influence of individual subjectivity. The experimental outcomes demonstrate that the model can effectively assimilate visual information with reduced dimensions. Furthermore, the incorporation of sentimentoriented features yields further improvements in the model's performance, helping beat state of the art models on CMU-MOSEI dataset
## 8 Limitations
For modeling simplicity, we adopt the classic LDA methods to get the topic ID for each video segment.
We plan to investigate more advanced topic clustering methods and check how it can be applied to multilingual cases. Also, we propose a twostage framework that first extract topic and style features, based on which the emotion classifier will be trained. In the future, we hope to extend this work to learn features in an end-to-end manner.
## References
Hassan Akbari, Liangzhe Yuan, Rui Qian, Wei-Hong Chuang, Shih-Fu Chang, Yin Cui, and Boqing Gong.
2021. Vatt: Transformers for multimodal selfsupervised learning from raw video, audio and text.
Advances in Neural Information Processing Systems
(NeurIPS), 34:24206–24221.
Mehdi Arjmand, Mohammad Javad Dousti, and Hadi Moradi. 2021. Teasel: A transformer-based speechprefixed language model. *ArXiv*, abs/2109.05522.
AmirAli Bagher Zadeh, Paul Pu Liang, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2018.
Multimodal language analysis in the wild: CMUMOSEI dataset and interpretable dynamic fusion graph. In *Proceedings of the 56th Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 2236–2246, Melbourne, Australia. Association for Computational Linguistics.
Volker Blanz and Thomas Vetter. 1999. A morphable model for the synthesis of 3d faces. In *Proceedings* of the 26th annual conference on Computer graphics and interactive techniques, pages 187–194.
David M Blei, Andrew Y Ng, and Michael I Jordan.
2003. Latent dirichlet allocation. Journal of machine Learning research, 3(Jan):993–1022.
Wenliang Dai, Samuel Cahyawijaya, Zihan Liu, and Pascale Fung. 2021. Multimodal end-to-end sparse model for emotion recognition. In *Proceedings of* the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 5305–5316, Online. Association for Computational Linguistics.
Jean-Benoit Delbrouck, Noé Tits, Mathilde Brousmiche, and Stéphane Dupont. 2020. A transformer-based
joint-encoding for emotion recognition and sentiment analysis. In *Second Grand-Challenge and Workshop* on Multimodal Language (Challenge-HML), pages 1–7, Seattle, USA. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. Cite arxiv:1810.04805Comment: 13 pages.
Yahui Fu, Shogo Okada, Longbiao Wang, Lili Guo, Yaodong Song, Jiaxing Liu, and Jianwu Dang. 2021.
Consk-gcn: conversational semantic-and knowledgeoriented graph convolutional network for multimodal emotion recognition. In *2021 IEEE International* Conference on Multimedia and Expo (ICME), pages 1–6. IEEE.
Ankita Gandhi, Kinjal Adhvaryu, Soujanya Poria, Erik Cambria, and Amir Hussain. 2022. Multimodal sentiment analysis: A systematic review of history, datasets, multimodal fusion methods, applications, challenges and future directions. *Information Fusion*.
Deepanway Ghosal, Navonil Majumder, Alexander Gelbukh, Rada Mihalcea, and Soujanya Poria. 2020.
COSMIC: COmmonSense knowledge for eMotion identification in conversations. In *Findings of the Association for Computational Linguistics: EMNLP*
2020, pages 2470–2481, Online. Association for Computational Linguistics.
Abhinav Joshi, Ashwani Bhat, Ayush Jain, Atin Singh, and Ashutosh Modi. 2022. COGMEN: COntextualized GNN based multimodal emotion recognitioN.
In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4148–4164, Seattle, United States. Association for Computational Linguistics.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A
method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Zheng Lian, Jianhua Tao, Bin Liu, Jian Huang, Zhanlei Yang, and Rongjun Li. 2020. Conversational emotion recognition using self-attention mechanisms and graph neural networks. In *INTERSPEECH*, pages 2347–2351.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*,
12:2825–2830.
Wasifur Rahman, Md Kamrul Hasan, Sangwu Lee, Amir Zadeh, Chengfeng Mao, Louis-Philippe Morency, and Ehsan Hoque. 2020. Integrating multimodal information in large pretrained transformers.
In *Proceedings of the conference. Association for*
Computational Linguistics. Meeting, volume 2020, page 2359. NIH Public Access.
Saurav Sahay, Shachi H. Kumar, Rui Xia, Jonathan Huang, and Lama Nachman. 2018. Multimodal relational tensor network for sentiment and emotion classification. *CoRR*, abs/1806.02923.
Weizhou Shen, Siyue Wu, Yunyi Yang, and Xiaojun Quan. 2021. Directed acyclic graph network for conversational emotion recognition. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 1551–1560, Online.
Association for Computational Linguistics.
Aman Shenoy and Ashish Sardana. 2020. Multiloguenet: A context-aware RNN for multi-modal emotion detection and sentiment analysis in conversation. In Second Grand-Challenge and Workshop on Multimodal Language (Challenge-HML), pages 19–28, Seattle, USA. Association for Computational Linguistics.
Yao-Hung Hubert Tsai, Shaojie Bai, Paul Pu Liang, J. Zico Kolter, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2019. Multimodal transformer for unaligned multimodal language sequences. *CoRR*,
abs/1906.00295.
Tana Wang, Yaqing Hou, Dongsheng Zhou, and Qiang Zhang. 2021. A contextual attention network for multimodal emotion recognition in conversation. In 2021 International Joint Conference on Neural Networks
(IJCNN), pages 1–7. IEEE.
Yan Wang, Jiayu Zhang, Jun Ma, Shaojun Wang, and Jing Xiao. 2020. Contextualized emotion recognition in conversation as sequence tagging. In *Proceedings* of the 21th annual meeting of the special interest group on discourse and dialogue, pages 186–195.
Haozhe Wu, Jia Jia, Haoyu Wang, Yishun Dou, Chao Duan, and Qingshan Deng. 2021. Imitating arbitrary talking style for realistic audio-driven talking face synthesis. In *Proceedings of the 29th ACM International Conference on Multimedia*, pages 1478–1486.
Ziyi Yang, Yuwei Fang, Chenguang Zhu, Reid Pryzant, Dongdong Chen, Yu Shi, Yichong Xu, Yao Qian, Mei Gao, Yi-Ling Chen, Liyang Lu, Yujia Xie, Robert Gmyr, Noel Codella, Naoyuki Kanda, Bin Xiao, Lu Yuan, Takuya Yoshioka, Michael Zeng, and Xuedong Huang. 2022a. i-code: An integrative and composable multimodal learning framework.
Ziyi Yang, Yuwei Fang, Chenguang Zhu, Reid Pryzant, Dongdong Chen, Yu Shi, Yichong Xu, Yao Qian, Mei Gao, Yi-Ling Chen, et al. 2022b. i-code: An integrative and composable multimodal learning framework.
arXiv preprint arXiv:2205.01818.
Amir Zadeh, Minghai Chen, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2017. Tensor fusion network for multimodal sentiment analysis.
In *Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing*, pages 1103–1114, Copenhagen, Denmark. Association for Computational Linguistics.
Amir Zadeh, Paul Pu Liang, Navonil Mazumder, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2018. Memory fusion network for multiview sequential learning. *AAAI*, abs/1802.00927.
Lixing Zhu, Gabriele Pergola, Lin Gui, Deyu Zhou, and Yulan He. 2021. Topic-driven and knowledgeaware transformer for dialogue emotion detection.
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1571–1582, Online. Association for Computational Linguistics.
## A Appendix A.1 Topic Visualization
We first show the final topic clustering results. The second column shows the top 20 high frequency words in this topic and the third column shows some examples under this topic. The first topic is more related to movie reviews, the second covers business and finance, and the third one seems to associate with commercial and instruction videos.
## A.2 Style Code
In Fig 5, we can see that styles have a distinctive
![5_image_0.png](5_image_0.png)
embedding based on emotion which confirms our hypothesis that style code can add a meaningful input to our multimodal approach.
| Topic | Words | Examples | | | |
|------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|-------|-------|------|
| Topic 1 | movie, umm, uhh, like, know, really, one, im, good, go, see, two, kind, would, think, even, thats, going, there | 1) hi there today we're going to be reviewing cheaper by the dozen which is umm the original version; 2) i was a huge fan of the original film bruce almighty but i did think it was funny like jim | | | |
| Topic 2 | people, get, think, make, business, u, want, time, world, need, company, way, also, work, one, year, take, money, right, new | 1)future and it's a retirement future that can ultimately turned in to an income for you when you no longer have an income and you're fully retired; 2)um this year switching up how we approach funding and hopefully going to be able to arrange for some sustainable more officially recognized sorts of funding | | | |
| Topic 3 | going, | thing, | like, | know, | one, |
| want, really, well, also, im, video, make, way, thats, something, think, were, time, get, look | 1)is you can say hey i really like baby skin they are so soft they have any hair on their face so nice; 2) okay what happens at this point after we've taken this brief walk down memory lane is the presentation of the gift now | | | | |
Table 4: Topic clustering results
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
8
✗ A2. Did you discuss any potential risks of your work?
We do not consider any risks in our work
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Model:3, 5.1 Data: 4
✓ B1. Did you cite the creators of artifacts you used?
model:3, 5.1 data: 4
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
3.2, 4
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
The data is anonymized and discussed in the original paper B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
4
## C ✓ **Did You Run Computational Experiments?** 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
5.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
5.1
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
5.1
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
5.1 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
wang-etal-2023-exploiting | Exploiting {A}bstract {M}eaning {R}epresentation for Open-Domain Question Answering | https://aclanthology.org/2023.findings-acl.131 | The Open-Domain Question Answering (ODQA) task involves retrieving and subsequently generating answers from fine-grained relevant passages within a database. Current systems leverage Pretrained Language Models (PLMs) to model the relationship between questions and passages. However, the diversity in surface form expressions can hinder the model{'}s ability to capture accurate correlations, especially within complex contexts. Therefore, we utilize Abstract Meaning Representation (AMR) graphs to assist the model in understanding complex semantic information. We introduce a method known as Graph-as-Token (GST) to incorporate AMRs into PLMs. Results from Natural Questions (NQ) and TriviaQA (TQ) demonstrate that our GST method can significantly improve performance, resulting in up to 2.44/3.17 Exact Match score improvements on NQ/TQ respectively. Furthermore, our method enhances robustness and outperforms alternative Graph Neural Network (GNN) methods for integrating AMRs. To the best of our knowledge, we are the first to employ semantic graphs in ODQA. | # Exploiting Abstract Meaning Representation For Open-Domain Question Answering
Cunxiang Wang♠♣∗
, Zhikun Xu♡, Qipeng Guo♢**, Xiangkun Hu**♢,
Xuefeng Bai♣, Zheng Zhang♢ **and Yue Zhang**♣†
♠Zhejiang University, China
♣School of Engineering, Westlake University, China
♡Fudan University, China; ♢Amazon AWS AI
{wangcunxiang, zhangyue}@westlake.edu.cn
## Abstract
The Open-Domain Question Answering
(ODQA) task involves retrieving and subsequently generating answers from fine-grained relevant passages within a database. Current systems leverage Pretrained Language Models
(PLMs) to model the relationship between questions and passages. However, the diversity in surface form expressions can hinder the model's ability to capture accurate correlations, especially within complex contexts. Therefore, we utilize Abstract Meaning Representation
(AMR) graphs to assist the model in understanding complex semantic information. We introduce a method known as Graph-as-Token
(GST) to incorporate AMRs into PLMs.
Results from Natural Questions (NQ) and TriviaQA (TQ) demonstrate that our GST
method can significantly improve performance, resulting in up to 2.44/3.17 Exact Match score improvements on NQ/TQ respectively.
Furthermore, our method enhances robustness and outperforms alternative Graph Neural Network (GNN) methods for integrating AMRs. To the best of our knowledge, we are the first to employ semantic graphs in ODQA.
1
## 1 Introduction
Question Answering (QA) is a significant task in Natural Language Processing (NLP) (Rajpurkar et al., 2016). Open-domain QA (ODQA) (Chen et al., 2017), particularly, requires models to output a singular answer in response to a given question using a set of passages that can total in the millions.
ODQA presents two technical challenges: the first is *retrieving* (Karpukhin et al., 2020) and *reranking* (Fajcik et al., 2021) relevant passages from the dataset, and the second is generating an answer for the question using the selected passages. In this work, we focus on the *reranking* and *reading* processes, which necessitate fine-grained interaction between the question and passages.
Existing work attempts to address these challenges using Pretrained Language Models (PLMs)
(Glass et al., 2022). However, the diverse surface form expressions often make it challenging for the model to capture accurate correlations, especially when the context is lengthy and complex.
We present an example from our experiments in Figure 1. In response to the question, the reranker incorrectly ranks a confusing passage first, and the reader generates the answer *"2015–16"*. The error arises from the PLMs' inability to effectively handle the complex semantic structure. Despite
"MVP", *"Stephen Curry"* and *"won the award"*
appearing together, they are not semantically related. In contrast, in the AMR graph, it is clear that
"Stephen Curry" wins over *"international players"*,
not the *"MVP"*, which helps the model avoid the mistake. The baseline model may fail to associate
"Most Valuable Player" in the passage with "MVP"
in the question, which may be why the baseline does not rank it in the Top10. To address this issue, we adopt structured semantics (i.e., Abstract Meaning Representation (Banarescu et al., 2013)
graphs shown on the right of Figure 1) to enhance Open-Domain QA.
While previous work has integrated graphs into neural models for NLP tasks, adding additional neural architectures to PLMs can be non-trivial, as training a graph network without compromising the original architecture of PLMs can be challenging (Ribeiro et al., 2021). Converting AMR
graphs directly into text sequences and appending them can be natural, but leads to excessively long sequences, exceeding the maximum process2083
![1_image_0.png](1_image_0.png)
ing length of the transformer. To integrate AMR
into PLMs without altering the transformer architecture and at a manageable cost, we treat nodes and edges of AMR Graphs aS Tokens (GST) in PLMs. This is achieved by projecting the embeddings of each node/edge, which consist of multiple tokens, into a single token embedding and appending them to the textual sequence embeddings. This allows for integration into PLMs without altering the main model architecture. This method does not need to integrate a Graph Neural Network into the transformer architecture of PLMs, which is commonly used in integrating graph information into PLMs Yu et al. (2022); Ju et al. (2022). The GST
method is inspired by Kim et al. (2022) in the graph learning domain, who uses token embeddings to represent nodes and edges for the transformer architecture in graph learning tasks. However, their method is not tailored for NLP tasks, does not consider the textual sequence embeddings, and only handles a certain types of nodes/edges, whereas we address unlimited types of nodes/edges consisting of various tokens.
Specifically, we select BART and FiD as baselines for the reranking and reading tasks, respectively. To integrate AMR information, we initially embed each question-passage pair into text embeddings. Next, we parse the pair into a single AMR
graph using AMRBART (Bai et al., 2022a). We then employ the GST method to embed the graph nodes and graph edges into graph token embeddings and concatenate them with the text embeddings. Lastly, we feed the concatenated text-graph embeddings as the input embeddings to a BARTbased (Lewis et al., 2020a) reranker to rerank or a FiD-based (Izacard and Grave, 2020b) reader to generate answers.
We validate the effectiveness of our GST approach using two datasets - Natural Question
(Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017). Results indicate that AMR enhances the models' ability to understand complex semantics and improves robustness. BART-GST-reranker and FiD-GST outperform BART-reranker and FiD
on the reranking and reading tasks, respectively, achieving up to 5.9 in Top5 scores, 3.4 in Top10 score improvements, and a 2.44 increase in Exact Match on NQ. When the test questions are paraphrased, models equipped with GST prove more robust than the baselines. Additionally, GST outperforms alternative GNN methods, such as Graphtransformer and Relational Graph Convolution Network (RGCN) (Schlichtkrull et al., 2018), for integrating AMR.
To the best of our knowledge, we are the first to incorporate semantic graphs into ODQA, thereby achieving better results than the baselines.
## 2 Related Work
Open-domain QA. Open-Domain Question Answering (ODQA) (Chen et al., 2017) aims to answer one factual question given a large-scale text database, such as Wikipedia. It consists of two steps. The first is *dense passage retrieval*
(Karpukhin et al., 2020) , which retrieves a certain number of passages that match the question. In this process, a *reranking* step can be used to filter out the most matching passages (Fajcik et al., 2021; Glass et al., 2022). The second is *reading*, which finds answer by reading most matching passages
(Izacard and Grave, 2020b; Lewis et al., 2020b).
We focus on the reranking and reading, and integrate AMR into those models.
Abstract Meaning Representation (AMR) (Banarescu et al., 2013) is a formalism for representing the semantics of a text as a rooted, directed graph.
In this graph, where nodes represent basic semantic units such as entities and predicates, and edges represent the relationships between them. Compared with free-form natural language, AMR graphs are more semantically stable as sentences with same semantics but different expressions can be expressed as the same AMR graph (Bai et al., 2021; Naseem et al., 2021). In addition, AMR graphs are believed to have more structure semantic information than pure text (Naseem et al., 2021).
Previous work has implemented AMR graphs into neural network models. For example, (Bai et al., 2021) adopts Graph-transformer (Yun et al.,
2019) to integrate AMRs into the transformer architecture for the dialogue understanding and generation. AMR-DA (Shou et al., 2022) uses AMRs as an data augmentation approach which first feeds the text into AMRs and regenerates the text from the AMRs. Bai et al. (2022b) uses AMR graphs with rich semantic information to redesign the pretraining tasks which results in improvement on downstream dialogue understanding tasks. However, none of them is used for Open-domain QA
or applied with the GST technique. which does not need to implement extra architectures in the PLMs, avoiding the incompatibility of different model architectures.
## Integrating Structures Into Plms For Odqa
Some work also tries to integrate structure information into PLMs for ODQA. For example, GRAPE
(Ju et al., 2022) insert a Relation-aware Graph Neural Network into the T5 encoders of FiD to encode knowledge graphs to enhance the output embeddings of encoders; KG-FiD (Yu et al., 2022) uses the knowledge graph to link different but correlated passages, reranks them before and during the reading, and only feeds the output embeddings of most correlated passages into the decoder. However, existing work concentrates on the knowledge graph as the source of structure information and no previous work has considered AMRs for ODQA.
LLMs in Open-Domain Question Answering
(ODQA) Research has been conducted that utilizes pre-trained language models (PLMs) to directly answer open-domain questions without retrieval (Yu et al., 2023; Wang et al., 2021; Ye et al.,
2021; Rosset et al., 2021). The results, however, have traditionally not been as effective as those achieved by the combined application of DPR and FiD. It was not until the emergence of ChatGPT
that direct answer generation via internal parameters appeared to be a promising approach.
In a study conducted by Wang et al. (2023), the performances of Large Language Models (LLMs),
such as ChatGPT (versions 3.5 and 4), GPT-3.5, and Bing Chat, were manually evaluated and compared with that of DPR+FiD across NQ and TQ test sets. The findings demonstrated that FiD surpassed ChatGPT-3.5 and GPT-3.5 on the NQ test set and outperformed GPT-3.5 on the TQ test set, affirming the relevance and effectiveness of the DPR+FiD
approach even in the era of LLMs.
## 3 Method
We introduce the Retrieval and Reading of OpenDomain QA and their baselines in Section 3.1, AMR graph generation in Section 3.2 and our method Graph-aS-Token (GST) in Section 3.3.
## 3.1 Baseline
Retrieval. The retrieval model aims to retrieve N1 passages from M reference passages (N1 <<
M) given the question q. Only fast algorithms, such as BM25 and DPR (Karpukhin et al., 2020),
can be used to retrieve from the large-scale database, and complex but accurate PLMs cannot be directly adopted. So, retrieval algorithm is often not very accurate. One commonly used method is applying a reranking process to finegrain the retrieval results, and we can use PLMs to encode the correlations, which is usually more accurate. Formally, reranking requires model to sort out the most correlated N2 passages with q from N1 passages (N2 < N1). For each passage p in the retrieved passage PN1
, we concatenate the q p together and embed them into text sequence embeddings Xqp ∈ R
L×H, where L is the max token length of the question and passage pair and H is the dimension.
We use a pretrained language model to encode each Xqp and a classification head to calculate a
![3_image_0.png](3_image_0.png)
correlation score between q and p:
$$s_{q p}=P L M(\mathbf{X_{qp}})$$
sqp = *P LM*(Xqp) (1)
where *P LM* denotes the pretrained language model and the commonly used Multi-Layer Perceptron (MLP) is used as as the classification head.
We use the cross entropy as the loss function,
$$\mathcal{L}=\frac{1}{N_{q}}\sum_{q}[\frac{1}{N_{pos}+N_{neg}}\sum_{p}l_{qp}]\tag{2}$$ $$=\frac{1}{N_{q}*(N_{pos}+N_{neg})}\sum_{q}\sum_{p}-$$ $$[(y_{qp}*log(s_{qp})+(1-y_{qp})*log(1-s_{qp}))],$$
where Npos and Nneg are the numbers of positive and negative passages for training one question, respectively. To identify positive/negative label of each passage to the question, we follow Karpukhin et al. (2020), checking whether at least one answer appears in the passage.
We choose the N2 passages which have reranked among Top-N2 for the reading process.
Reading. The reader needs to generate an answer a given the question q and N2 passages. In this work, we choose the Fusion-in-Decoder (FiD)
model (Izacard and Grave, 2020b) as the baseline reader model. The FiD model uses N2 separate T5 encoders (Raffel et al., 2020) to encode N2 passages and concatenate the encoder hidden states to feed in one T5 decoder to generate answer.
Similar to reranking, we embed the question q and each passage p to text sequence embeddings
$$(1)$$
Xqp ∈ R
L×dH , where L is the max token length of the question and passage pair and dH is the dimension. Next, we feed the embeddings in the FiD model to generate the answer
$$a=F i D([\mathbf{X_{qp_{1}}},\ldots,\mathbf{X_{qp_{l}}},\mathbf{X_{qp_{N_{2}}}}])\quad\quad(3)$$
$${\mathrm{text~sequence}}.$$
where a is a text sequence.
## 3.2 Amr
We concatenate each question q and passage p, parse the result sequence into an AMR graph Gqp = {*V, E*}, where *V, E* are nodes and edges, respectively. Each edge is equipped with types, so e = {(*u, r, v*)} where *u, r, v* represent the head node, relation and the tail node, respectively.
## 3.3 Graph As Token (Gst)
As shown in Figure 2, we project each node n or edge e in one AMR graph G into node embedding x n or edge embedding x e. We adopt two types of methods to project each node and edge embeddings to one token embedding, which are MLP projection and Attention projection. After the projection, we append the node embeddings XN = [x n 1
, . . . , x nnn
] and edge embeddings XE = [x e1
, . . . , x ene
] to the corresponding text sequence embeddings XT = [x t1
, . . . , x tnt
]. So, the result sequence embedding is in the following notation:
$$\mathbf{X}=[\mathbf{X}^{\mathrm{{T}}},\mathbf{X}^{\mathrm{{N}}},\mathbf{X}^{\mathrm{{E}}}]$$
Initialization We explain how we initialize embeddings of nodes and edges here.
As each node n and relation r contain plural tokens (example of node 'ordinal-entity' is shown the left and bottom of Figure 2), n =
[t1*, .., t*n] and r = [t1*, . . . , t*r], and each edge e contains two nodes and one relation, we have e = [[t1, .., tu], [t1, . . . , tr], [t1*, .., t*v]].
For edges and nodes, we first embed their internal tokens into token embedding.
For edges, we have
$\mathbf{x^{e1}}=[[\mathbf{x^{u}_{1}},\ldots,\mathbf{x^{u}_{nu}}],$ $[\mathbf{x^{r}_{1}},\ldots,\mathbf{x^{r}_{nr}}],$ $[\mathbf{x^{v}_{1}},\ldots,\mathbf{x^{v}_{nr}}]$
$${\mathrm{(5)}}$$
For nodes, we have
$$\mathbf{x^{n1}}=[\mathbf{x_{1}^{n}},\ldots,\mathbf{x_{n}^{n}}]$$
n] (6)
MLP Projection The process is illustrated in the MLP Projection part of Figure 2. As each AMR
node can have more than one tokens, we first average its token embeddings. For example, for a head node u, x u = *AV E*([x u 1
, . . . , x unu
]) ∈ R
dH . The same is done for the relation.
Then, we concatenate the two node embeddings and one relation embedding together as the edge embedding,
$$\mathbf{x}^{\mathbf{e2}}=[\mathbf{x}^{\mathbf{u}},\mathbf{x}^{\mathbf{r}},\mathbf{x}^{\mathbf{v}}]\in\mathbb{R}^{3d_{H}}$$
3dH (7)
Next, we use a R
3dH×dH MLP layer to project the x e2 ∈ R
dH into x e ∈ R
dH , and the final edge embedding
$$\begin{array}{l}{{\mathbf{x^{e}}=M L P(\mathbf{x^{e2}})}}\\ {{\quad=M L P([\mathbf{x^{u}},\mathbf{x^{r}},\mathbf{x^{v}}])}}\end{array}\qquad(8)$$
Similarly, we average the node tokens embeddings first x n1 = *AV E*([x n 1
, . . . , x nn]). To reuse the MLP layer, we copy the node embedding two times and concatenate, so, x n2 =
[x n1, x n1, x n1] ∈ R
3dH . Last, We adopt an MLP
layer to obtain final node embedding
$$\mathbf{x^{n}}=M L P(\mathbf{x^{n2}})\in\mathbb{R}^{d_{H}}$$
dH (9)
We have also tried to assign separate MLP layers to nodes and edges, but preliminary experiments show that it does not improve the results.
Attention Projection We use one-layer selfattention to project nodes and edges into embeddings, which is shown in the Attn Projection part in Figure 2. The edge embedding is calculated
2. The edge embedding is calculated by $\mathbf{x^{e}}=Att_{E}([\mathbf{x^{u}_{1}},\ldots,\mathbf{x^{u}_{nu}},$ (10) $\mathbf{x^{r}_{1}},\ldots,\mathbf{x^{r}_{nr}},\mathbf{x^{v}_{1}},\ldots,\mathbf{x^{v}_{nv}}])$ if $2087$
Similarly, the node embedding is calculated
$\bf x^{n}=Att_{N}([x_{1}^{n},\ldots,x_{n}^{n}),$ (11)
where AttE and AttN both denote one selfattention layer for edges and nodes, respectively.
We take the first token (additional token) embedding from the self-attention output as the final embedding.
We only modify the input embeddings from X = XT to X = [XT, XN, XE]. The rest details of models, such as the transformer architecture and the training paradigm, are kept the same with the baselines. Our model can directly use the PLMs to encode AMR graphs, without incompatibility between GNN's parameters and PLMs' parameters.
$\left(6\right)^3$
## 4 Experiments 4.1 Data
$\eqref{eq:walpha}$.
We choose two representative Open-Domain QA
datasets, namely Natural Questions (NQ) and TriviaQA (TQ), for experiments. Data details are in presented in Appendix Table 9.
Since retrieval results have a large impact on the performance of downstream reranking and reading, we follow Izacard and Grave (2020b) and (Yu et al.,
2022) to fix retrieval results for each experiment to make the reranking and reading results comparable for different models. In particular, we use the DPR
model initialized with parameters in Izacard and Grave (2020a)
2to retrieve 100 passages for each question. Then we rerank them into 10 passages, which means N1 = 100, N2 = 10.
We generate the amr graphs using AMRBART (Bai et al., 2022a) (the AMRBART-largefinetuned-AMR3.0-AMRParsing checkpoint) 3.
## 4.2 Models Details
We choose the BART model as the reranker baseline and the FiD model (implemented on T5 model(Raffel et al., 2020)) as the reader baseline, and adopt the GST method on them. For each model in this work, we use its Large checkpoint, such as BART-large and FiD-large, for reranking and reading, respectively. In the reranking process, we evaluate the model using the dev set per 2https://dl.fbaipublicfiles.com/FiD/
pretrained_models/nq_retriever.tar.gz https://dl.fbaipublicfiles.com/FiD/pretrained_
models/tqa_retriever.tar.gz 3https://huggingface.co/xfbai/AMRBART-largefinetuned-AMR3.0-AMRParsing
| Natural Questions | TriviaQA | | | | | |
|-----------------------------|-------------|-------------|-------------|-----------|-----------|-------------|
| Reranker + Reader \ Dataset | Reranking | Reading | Reranking | Reading | | |
| Top5 | Top10 | EM | Top5 | Top10 | EM | |
| w/o reranker + FiD-reader | 49.47/50.66 | 69.02/69.50 | | | | |
| w/o reranker + FiD-GST-A | 50.12/51.11 | 70.17/70.39 | | | | |
| 73.7/74.6 | 79.5/80.3 | 78.0/78.1 | 81.5/81.8 | | | |
| w/o reranker + FiD-GST-M | 50.06/50.97 | 69.98/70.10 | | | | |
| BART-reranker + FiD-reader | 50.33/51.33 | 71.16/71.33 | | | | |
| BART-reranker + FiD-GST-A | 50.80/52.38 | 71.93/72.05 | | | | |
| 78.7/78.6 | 83.0/83.3 | 83.2/83.2 | 85.2/85.1 | | | |
| BART-reranker + FiD-GST-M | 50.76/52.24 | 72.12/72.24 | | | | |
| BART-GST-A + FiD-reader | 79.3/79.3 | 83.3/83.3 | 50.68/52.18 | 83.5/83.3 | 85.3/85.3 | 71.54/71.71 |
| BART-GST-A + FiD-GST-A | 51.05/52.80 | 72.63/72.67 | | | | |
| BART-GST-M + FiD-reader | 79.6/80.0 | 83.3/83.7 | 51.11/52.13 | 83.1/82.9 | 85.0/85.1 | 71.47/71.62 |
| BART-GST-M + FiD-GST-M | 51.40/53.10 | 72.58/72.61 | | | | |
| Rearnker \ Dataset Natural Questions | TriviaQA | | |
|----------------------------------------|-----------------------------------------|-----|-------|
| MRR | MH@10 | MRR | MH@10 |
| w/o reranker | 20.2/18.0 37.9/34.6 12.1/12.3 25.5/25.9 | | |
| BART-reranker | 25.7/23.3 49.3/45.8 16.9/17.0 37.7/38.0 | | |
| BART-GST-A | 28.1/24.7 52.7/48.2 17.7/17.8 39.3/39.9 | | |
| BART-GST-M | 28.4/25.0 53.2/48.7 17.5/17.6 39.1/39.5 | | |
Table 2: Overall reranking results on NQ and TQ. In each cell, the left is dev and the right is test.
epoch, and use Top10 as the pivot metric to select the best-performed checkpoint for the test. For the reading, we evaluate the model per 10000 steps, and use Exact Match as the pivot metric. For training rerankers, we set number of positive passages as 1 and number of negative passages as 7. We run experiments on 2 Tesla A100 80G GPUs.
## 4.3 Metric
Following Glass et al. (2022) and Izacard and Grave (2020b), we use Top-N to indicate the reranking performance and Exact Match for the reading performance.
However, TopN is unsuitable for indicating the overall reranking performance for all positive passages, so we also adopt two metrics, namely Mean Reciprocal Rank (MRR) and Mean Hits@10
(MHits@10). The MRR score is the Mean Reciprocal Rank of all positive passages. Higher scores indicate that the positive passages are ranked higher overall. The MHits@10 indicates the percentage of positive passages are ranked in Top10.
Higher scores indicate that more positive passages are ranked in Top10. Their formulations are in Appendix Section A.5. Note that, only when the retrieved data is exactly the same, the MRR and MHits@10 metrics are comparable.
## 4.4 Preliminary Experiments
We present the reranking performance of four baseline PLMs, including BERT (Devlin et al., 2019),
RoBERTa (Liu et al., 2019), ELECTRA (Clark et al., 2020) and BART (Lewis et al., 2020a) on the NQ and TQ in Appendix Table 8. BART outperforms other three models in every metric on both NQ and TQ. So, we choose it as the reranker baseline and apply our Graph-aS-Token method to it in following reranking experiments.
## 4.5 Main Results
The Main results are presented in Table 1. Our method can effectively boost the performance on both reranking and reading.
Reading. As shown in the reading columns of Table 1, our method can boost the FiD performance, no matter whether there is reranker and whether the reranker is with AMR or not. Without reranking, FiD-GST-A achieves 51.11/70.39 EM on NQ/TQ
test , which are 0.45/0.89 EM higher over the baseline FiD; With reranking, 'BART-GST-M + FiDGST-M ' achieves 53.10/72.61 EM on NQ/TQ test, 1.77/1.27 EM better than 'BART-reranker + FiD'.
With the same reranker, FiD-GST is better than the baseline FiD, for example, 'BART-reranker +
FiD-GST-A' achieves 52.38/72.05 on NQ/TQ test, which is 1.05/0.72 higher than the 51.33/71.33 of
'BART-reranker + FiD'.
Overall, our GST models have achieved up to
| Orig Test | New Test | Drop | |
|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|-----------|-----------|
| BART-reranker | 78.6/83.3 | 76.2/81.8 | -2.6/-1.5 |
| 23.3/45.8 | 21.5/43.6 | -1.8/-2.2 | |
| BART-GST-A | 79.3/83.3 | 77.4/82.0 | -1.9/-1.3 |
| 24.7/48.2 | 23.2/46.1 | -1.4/-2.1 | |
| BART-GST-M | 80.0/83.7 | 78.0/82.4 | -2.0/-1.3 |
| 25.0/48.7 | 23.4/46.3 | -1.6/-2.4 | |
| A: Robustness of rerankers. Each cell contains Top5/Top10/MRR/MHits@10 as the metrics. Orig Test New Test Drop FiD-reader 50.66 46.76 -3.90 FiD-GST-A 51.11 47.84 -3.27 FiD-GST-M 50.97 47.76 -3.21 B: Robustness of readers. Exact Match as the Metric. To avoid the influence of different reranking results, we use the same DPR results to train and eval. | | | |
2.44 EM (53.10 vs 50.66) on NQ test and 3.17
(72.67 vs 69.50) on TQ test.
Reranking Shown in the reranking columns of Table 1, BART-GST-M can achieve 80.0/83.7 scores in Top5/Top10, which improve 5.4/3.4 on NQ-test compared to DPR and 1.4/0.4 compared to BART-reranker. BART-GST-M achieves 79.3/83.3 scores in Top5/Top10, which outperform DPR by 4.7/3.0 on NQ-test, showing that our GST method is effective.
We present results of the MRR and MHits@10 metrics in Table 2. Our GST method can help positive passages rank higher in Top10. In NQ, BART-GST-M has 7.0/14.1 advantages on MRR/MHits@10 over DPR while 1.7/2.9 advantages over BART-reranker; In TQ, BART-GST-A
has 5.5/14.0 advantages on MRR/MHits@10 over DPR and 0.8/1.9 advantages on MRR, MHits@10 over BART-reranker.
The overall reranking results can also explain the reason why even when the Top10 results are similar and readers are the same, the reranked passages by BART-GST can lead to better reading performance.
For example, in NQ test, the reading performance of 'BART-GST-M + FiD' is 0.80 better than 'BARTreranker + FiD'.
| NQ dev NQ test TQ dev TQ test | | | | |
|---------------------------------|-------|-------|-------|-------|
| FiD-10 | 49.47 | 50.66 | 69.02 | 69.50 |
| FiD-100 | 51.60 | 52.88 | 71.61 | 71.88 |
| FiD-10 | | | | |
| w/ BART-reranker | 50.33 | 51.33 | 71.16 | 71.33 |
| FiD-GST-A-10 | | | | |
| w/ BART-GST-A reranker | 51.03 | 52.80 | 72.63 | 72.67 |
| FiD-GST-M-10 | | | | |
| w/ BART-GST-M reranker | 51.30 | 53.10 | 72.58 | 72.61 |
## 4.6 Analysis
Robustness. To evaluate the robustness of the baseline and our models, we paraphrase the test questions of NQ and TQ, evaluate paraphrased test questions and the original ones with the same model checkpoint. We use a widely-used paraphraser, namely *Parrot Paraphraser* (Damodaran, 2021) to paraphrase test questions. The results are shown in Table 3.
The performance drops in reranking and reading of our GST models are lower than the baseline model, despite that our models have better performance. For reranking, the drop of our BART-GST-A is -1.9/-1.3/-1.4/-2.1 for Top5/Top10/MRR/MHits@10, which is lower than the baseline's -2.6/-1.5/-1.8/-2.2. For reading, the
-3.21 EM drop of FiD-GST-M is also smaller than the -3.90 of baseline FiD. It shows that our GST
method can not only improve performance but also improve robustness, which can prove that adding structural information can help models avoid the erroneous influence of sentence transformation.
Comparison with FiD-100. We also compare the reranking+reading paradigm with the directlyreading paradigm. For the latter, the FiD reader is directly trained and evaluated on 100 retrieved passages without reranking. The results are shown in Table 4.
Without our GST method, the reranking+reading paradigm (FiD-10 w/ BART reranker) is worse than FiD-100 without reranking, which is 71.33 to 71.78 on the test. However, with our GST method, the reranking+reading paradigm outperforms FiD-100.
For example, FiD-GST-M-10 w/ BART-GST-M reranker has better performance on NQ test than FiD-100, which is 53.10 vs 52.88, and FiD-GSTA-10 w/ BART-GST-A reranker vs FiD-100 on TQ
| Top5 | Top10 | MRR | MH@10 |
|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|-------|---------|
| BART-reranker | 78.7/78.6 83.0/83.3 25.7/23.3 49.3/45.8 | | |
| BART-GST-M (superior AMRs) 79.6/80.0 83.3/83.7 28.4/25.0 53.2/48.7 BART-GST-M (inferior AMRs) 79.5/79.3 83.5/83.1 28.4/24.7 52.9/47.8 In reranking. Exact Match FiD-reader 48.47/50.66 FiD-GST-A (superior AMRs) 50.12/51.11 FiD-GST-A (inferior AMRs) 49.95/50.83 In reading. | | | |
Table 5: Influence of superior AMR graphs which generated by a larger model, and inferior AMR graphs which generated by a smaller model.
| Top5 | Top10 | MRR | MH@10 |
|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|-------|---------|
| BART-reranker 78.7/78.6 83.0/83.3 25.7/23.3 49.3/45.8 BART-GST-M 79.6/80.0 83.3/83.7 28.4/25.0 53.2/48.7 BART-GST-M only nodes 78.5/78.9 82.9/83.1 27.6/24.2 51.8/47.3 BART-GST-M only edges 78.6/79.3 83.0/83.3 27.9/24.7 52.4/47.4 | | | |
Table 6: Ablation to nodes and edges to our GST methods on NQ. We choose BART-GST-M because it better performs on NQ.
test is 72.67 vs 71.78.
To our knowledge, we are the first make FiD-10 beat FiD-100.
Influence of AMR Quality. We explore how AMR graphs quality influence the performance of our models in this section, by using the AMRBARTbase-finetuned-AMR3.0-AMRParsing, 4 which is a smaller version. We compare the reranking performance of BART-GST with either superior or inferior graphs on NQ and TQ. We use the each kind of graphs to train its own reranking models.
The results are shown in Table 5.
Our models still work with inferior AMR graphs but the performance is not good as the superior ones in both reranking and reading. This indicates that when the quality of AMR graphs is higher, the GST
models can potentially achieve better performance.
Ablation to Nodes/Edges We ablate nodes and edges in our models to explore whether nodes or 4https://huggingface.co/xfbai/AMRBART-basefinetuned-AMR3.0-AMRParsing
![7_image_0.png](7_image_0.png)
![7_image_1.png](7_image_1.png)
psg to answer while our model answer correctly.
Figure 3: Two cases from our experiments for reranking and reading, respectively. We highlight important information over questions and passages.
edges contribute more to the results. We conduct reranking experiments on NQ. The results are shown in Table 6. As can be seen, nodes are edges are both useful for the GST method, where
'BART-GST-M (only nodes)' and 'BART-GST-M
(only edges)' both outperform the baseline BARTreranker in MRR/MHits@10 on NQ test, which are 24.2/48.7 vs 24.7/47.4 vs 23.3/45.8, respectively.
However, 'BART-GST-M (only edges)' are better than 'BART-GST-M (only nodes)' in four metrics on NQ, partly due to the fact that edges also contain nodes information.
Case Study We present two cases from our in Figure 3. In the upper one, for the negative passage, the baseline may consider *"a ban on smoking in* all closed public areas" same as *"the smoking ban* in public places", which are actually different; For the positive passage, the baseline may not take *"act* regulated smoking in public area" as "the smoking ban in public places" while our model does.
In the lower one, the baseline reader ignores the competition is *" for the opportunity to play* in Super Bow" rather than *"in the Super Bowl"* ,
and because the number of similar passages with
"Philadelphia Eagle" are more than the positive passage's, the baseline reader finds the incorrect passage which leads to the incorrect answer. In Top5 Top10 MRR MH@10 BART-reranker 78.7/78.6 83.0/83.3 25.7/23.3 49.3/45.8 BART-GST-M 79.6/80.0 83.3/83.7 28.4/25.0 53.2/48.7 RGCN-Stacking 78.6/78.2 82.3/83.0 26.1/23.1 49.5/46.0 Table 7: Comparison between the baseline, GST and RGCN-Stacking in reranking on NQ.
contrast, our model focuses on the only positive passage and answers the question correctly.
## 4.7 Alternative Graph Methods
We have also tried several methods to integrate AMRs into PLMs, but their performance is worse than our Graph-aS-Token method. Here we take two representative examples, which are Relational Graph Convolution Network (RGCN)
(Schlichtkrull et al., 2018) for the reranker and Graph-transformer (Yun et al., 2019) for FiD. All those methods require alignments between text tokens and graph nodes, for which only some nodes can be successfully aligned.
Stacking RGCN above Transformer The model architecture consists of a transformer encoder and a RGCN model where RGCN is stacked on top of the transformer. After the vanilla forward by transformer encoder, AMR graphs abstracted from queries and passages in advance are constructed with node embeddings initialized from transformer output. Then they are fed into the RGCN model and the final output of the [CLS]
node is used for scoring.
For the text embeddings of one question-passage pair, its encoder hidden states
$$\mathbf{H}=E n c o d e r(X_{q p})$$
For one node n, its initial embedding h 0 = *MeanP ooling*(Hstart:end)
where *start* and end are the start and end positions of the text span aligned with the node.
The update of node embedding for each layer l is
$$\mathbf{h_{i}^{l+1}}=\sigma(W_{0}^{l}\mathbf{h_{i}^{l}}+\sum_{r\in R}\sum_{j\in N_{i}^{r}}{\frac{1}{c_{i,r}}}W_{r}^{l}\mathbf{h_{i}^{l}})$$ $$c_{i,r}=\|N_{i}^{r}\|$$
where R is the set of edge types, Nr istands for the group of nodes which connect with node i in relation r.
so the correlation score of q and p:
$$s_{q p}=C l s H e a d(h_{[C L S]}^{L})$$
The results are presented in Table 7, which is clear that the RGCN-stacking method is inferior to the GST method. Some metrics, including Top5, Top10 and MRR, of RGCN-stacking are worse than the baseline, meaning the RGCN method is not feasible for integrating AMRs into PLMs though it looks like reasonable and practical.
Graph-transformer We apply the graphtransformer architecture to FiD model for reading.
We follow the graph-transformer architecture in Bai et al. (2021), whose main idea is using AMR
information to modify the self-attention scores between text tokens. However, we find stucking challenging for PLMs because the new-initialized graph architectures are not compatible with architectures of PLMs, lead to non-convergence during training. Despite that, tricks such as incrementally training and separate tuning can lead to convergence, results are still below the baseline model, let alone GST.
Flattening AMR Graphs We have also tried to directly flatten AMR graphs into text sequences, but the result sequences are always beyond the maximum processing length (1024) of the transformer.
So, we have to cut off some nodes and edges to fit in the transformer, but the results show that it does not work well and has only a very sight improvement while the computational cost is tens times over the baseline.
## 5 Conclusion
In this study, we successfully incorporated Abstract Meaning Representation (AMR) into OpenDomain Question Answering (ODQA) by innovatively employing a Graph-aS-Token (GST) method to assimilate AMRs with pretrained language models. The reranking and reading experiments conducted on the Natural Questions and TriviaQA
datasets have demonstrated that our novel approach can notably enhance the performance and resilience of Pretrained Language Models (PLMs) within the realm of ODQA.
## Acknowledgement
This publication has emanated from research conducted with the financial support of the Pioneer and
"Leading Goose" R&D Program of Zhejiang under Grant Number 2022SDXHDX0003.
## Limitations
Our Graph-aS-Token (GST) method can increase the time and GPU memory cost, we set an quantitative analysis in Appendix Section A.4. We train the models with only one random seed. We do not conduct a large number of hyper-parameter tuning experiments, but use a fixed set of hyperparameters to make the baseline and our models comparable.
## Ethics Statement
No consideration.
## References
Xuefeng Bai, Yulong Chen, Linfeng Song, and Yue Zhang. 2021. Semantic representation for dialogue modeling. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 4430–4445, Online. Association for Computational Linguistics.
Xuefeng Bai, Yulong Chen, and Yue Zhang. 2022a.
Graph pre-training for AMR parsing and generation.
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6001–6015, Dublin, Ireland.
Association for Computational Linguistics.
Xuefeng Bai, Linfeng Song, and Yue Zhang. 2022b.
Semantic-based pre-training for dialogue understanding. In *Proceedings of the 29th International Conference on Computational Linguistics*, pages 592–607, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for sembanking. In *Proceedings of the 7th Linguistic* Annotation Workshop and Interoperability with Discourse, pages 178–186, Sofia, Bulgaria. Association for Computational Linguistics.
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer opendomain questions. In Association for Computational Linguistics (ACL).
Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. Pre-training transformers as energy-based cloze models. In *EMNLP*.
Prithiviraj Damodaran. 2021. Parrot: Paraphrase generation for nlu.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Martin Fajcik, Martin Docekal, Karel Ondrej, and Pavel Smrz. 2021. R2-D2: A modular baseline for opendomain question answering. In Findings of the Association for Computational Linguistics: EMNLP
2021, pages 854–870, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Michael Glass, Gaetano Rossiello, Md Faisal Mahbub Chowdhury, Ankita Naik, Pengshan Cai, and Alfio Gliozzo. 2022. Re2G: Retrieve, rerank, generate. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2701–2715, Seattle, United States. Association for Computational Linguistics.
Gautier Izacard and Edouard Grave. 2020a. Distilling knowledge from reader to retriever for question answering.
Gautier Izacard and Edouard Grave. 2020b. Leveraging passage retrieval with generative models for open domain question answering.
Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In *Proceedings of the 55th Annual Meeting of* the Association for Computational Linguistics, Vancouver, Canada. Association for Computational Linguistics.
Mingxuan Ju, Wenhao Yu, Tong Zhao, Chuxu Zhang, and Yanfang Ye. 2022. Grape: Knowledge graph enhanced passage reader for open-domain question answering. In Findings of Empirical Methods in Natural Language Processing.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for opendomain question answering. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics.
Jinwoo Kim, Tien Dat Nguyen, Seonwoo Min, Sungjun Cho, Moontae Lee, Honglak Lee, and Seunghoon Hong. 2022. Pure transformers are powerful graph learners. *ArXiv*, abs/2207.02505.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. *Transactions of the Association of Computational Linguistics*.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020b.
Retrieval-augmented generation for knowledgeintensive nlp tasks. In *Proceedings of the 34th International Conference on Neural Information Processing Systems*, NIPS'20, Red Hook, NY, USA. Curran Associates Inc.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Tahira Naseem, Austin Blodgett, Sadhana Kumaravel, Timothy J. O'Gorman, Young-Suk Lee, Jeffrey Flanigan, Ramón Fernández Astudillo, Radu Florian, Salim Roukos, and Nathan Schneider. 2021. Docamr:
Multi-sentence amr representation and evaluation. In North American Chapter of the Association for Computational Linguistics.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics.
Leonardo F. R. Ribeiro, Yue Zhang, and Iryna Gurevych.
2021. Structural adapters in pretrained language models for AMR-to-Text generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4269–4282, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Corbin L Rosset, Chenyan Xiong, Minh Phan, Xia Song, Paul N. Bennett, and saurabh tiwary. 2021. Pretrain knowledge-aware language models.
Michael Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling.
2018. Modeling relational data with graph convolutional networks. In *The Semantic Web*, pages 593–
607, Cham. Springer International Publishing.
Ziyi Shou, Yuxin Jiang, and Fangzhen Lin. 2022. AMRDA: Data augmentation by Abstract Meaning Representation. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 3082–3098, Dublin, Ireland. Association for Computational Linguistics.
Cunxiang Wang, Sirui Cheng, Zhikun Xu, Bowen Ding, Yidong Wang, and Yue Zhang. 2023. Evaluating open question answering evaluation.
Cunxiang Wang, Pai Liu, and Yue Zhang. 2021. Can generative pre-trained language models serve as knowledge bases for closed-book QA? In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 3241–3251, Online.
Association for Computational Linguistics.
Qinyuan Ye, Belinda Z. Li, Sinong Wang, Benjamin Bolte, Hao Ma, Wen tau Yih, Xiang Ren, and Madian Khabsa. 2021. Studying strategically: Learning to mask for closed-book qa.
Donghan Yu, Chenguang Zhu, Yuwei Fang, Wenhao Yu, Shuohang Wang, Yichong Xu, Xiang Ren, Yiming Yang, and Michael Zeng. 2022. KG-FiD: Infusing knowledge graph in fusion-in-decoder for opendomain question answering. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4961–4974, Dublin, Ireland. Association for Computational Linguistics.
Wenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, and Meng Jiang. 2023. Generate rather than retrieve: Large language models are strong context generators. In *International Conference for Learning Representation (ICLR)*.
Seongjun Yun, Minbyul Jeong, Raehyun Kim, Jaewoo Kang, and Hyunwoo J Kim. 2019. Graph transformer networks. In *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc.
## A Experimental Details A.1 Pre-Experiment A.2 Details For Data
For each question and passage pair, we feed it in the generator in such a format "Question: <question>. Title: <Passage Title>. Context: <Passage Table 8: Pre-experiments of four PLMs' reranking performance on NQ and TQ. In each cell, the left is on the dev while the right is on the test. Among four PLMs, BART performs best.
Table 9: Details of each dataset.
Context>". Additionally, we link the nodes, which are recognized as entities such as person name and date and have same surfaces, with the ":same" relation because it helps performance. For nodes in one AMR graph, we remove their '-XX', where X
is a 0-9 number.
## A.3 Hyper-Parameters
We set other model-related hyper-parameters in Table 10.
## A.4 Cost Increase
We conduct an experiment of the increase of time and GPU memory cost on our GST compared with the baseline. For inference, while keeping other parameters as the same, the time costs of FiD-GSTM, FiD-GST-A are 1.29x and 1.40x, respectively, and the GPU memory costs are 1.11x and 1.40x, respectively, compared with FiD, as shown in Table 11.
## A.5 Metrics
$$MRR=\frac{1}{|Q|}\sum_{i\in Q}((\sum_{j\in Pos}\frac{1}{t(j)})\frac{1}{num_{Pos}(i)})$$
Table 10: Hyper-parameters Setting
| Top5 | Top10 | MRR | MH@10 |
|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|-------|---------|
| w/o reranker 73.7/74.6 79.5/80.3 20.2/18.0 37.9/34.6 BERT 76.5/75.7 81.5/81.4 23.7/20.9 45.5/41.5 RoBERTa 77.1/76.6 82.3/82.3 24.7/21.5 47.7/43.3 ELECTRA 77.3/77.8 82.4/82.5 25.1/22.5 47.9/43.9 BART 78.7/78.6 83.0/83.3 25.7/23.3 49.3/45.8 A: On the Natural Questions dataset. Top5 Top10 MRR MH@10 w/o reranker 78.0/78.1 81.5/81.8 12.1/12.3 25.5/25.9 BERT 82.0/82.3 84.5/84.7 16.0/16.2 35.6/35.9 RoBERTa 82.8/82.9 85.0/85.0 16.8/16.8 37.2/37.4 ELECTRA 82.4/82.6 84.8/82.6 16.3/16.4 36.2/36.4 BART 83.2/83.1 85.2/85.1 16.9/17.0 37.7/38.0 B: On the TriviaQA dataset. | | | |
| Train Set | Dev Set | Test Set | |
|-------------------|-----------|------------|-------|
| Natural Questions | 79168 | 8757 | 3610 |
| TriviaQA | 78785 | 8837 | 11313 |
where Q is the evaluating dataset; t(j) is the rank of passage j; *P os* is the set of positive passages.
$$M H i t s@10=\frac{1}{|Q|}\sum_{i\in Q}(\sum_{j\in p o s,t(j)<11}\frac{1}{n u m_{P o s}(i)})$$
| Time cost GPU Memory Cost | | |
|-----------------------------|------|------|
| FiD | 1.00 | 1.00 |
| FiD-GST-M | 1.29 | 1.11 |
| FiD-GST-M | 1.40 | 1.40 |
where Q is the evaluating dataset; t(j) is the rank of passage j; *P os* is the set of positive passages.
| Reranking Reading | | |
|---------------------|-----------|---------|
| Leaning Rate | 3e-5 | 1e-4 |
| Training Epoch | 10 | 5 |
| Node MaxLength | 145 | 145 |
| Edge MaxLength | 165 | 165 |
| Text Maxlength | 200 | 200 |
| Eval Step/Epoch | 10k steps | 1 epoch |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations section
✓ A2. Did you discuss any potential risks of your work?
No potential risks
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Introduction sections
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4
✓ B1. Did you cite the creators of artifacts you used?
Section 4
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section 4
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 4 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 4
## C ✓ **Did You Run Computational Experiments?** Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4 and appendix
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4 C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
min-etal-2023-nonparametric | Nonparametric Masked Language Modeling | https://aclanthology.org/2023.findings-acl.132 | Existing language models (LMs) predict tokens with a softmax over a finite vocabulary, which can make it difficult to predict rare tokens or phrases. We introduce NPM, the first nonparametric masked language model that replaces this softmax with a nonparametric distribution over every phrase in a reference corpus. NPM fills in the [MASK] solely from retrieving a token from a text corpus. We show that NPM can be efficiently trained with a contrastive objective and an in-batch approximation to full corpus retrieval. Zero-shot evaluation on 16 tasks including classification, fact probing and question answering demonstrates that NPM outperforms significantly larger parametric models, either with or without a retrieve-and-generate approach. It is particularly better at dealing with rare patterns (word senses or facts) and predicting rare or nearly unseen words (e.g., non-Latin script). We release the model and code at github.com/facebookresearch/NPM. | # Nonparametric Masked Language Modeling
Sewon Min1,2 **Weijia Shi**1,2 Mike Lewis2 **Xilun Chen**2 Wen-tau Yih2 **Hannaneh Hajishirzi**1,3 **Luke Zettlemoyer**1,2 1University of Washington 2Meta AI 3Allen Institute for AI
{sewon,swj0419,hannaneh,lsz}@cs.washington.edu
{mikelewis,xilun,scottyih}@meta.com
## Abstract
Existing language models (LMs) predict tokens with a softmax over a finite vocabulary, which can make it difficult to predict rare tokens or phrases. We introduce NPM, the first nonparametric masked language model that replaces this softmax with a nonparametric distribution over every *phrase* in a reference corpus. NPM fills in the [MASK] solely from retrieving a token from a text corpus. We show that NPM can be efficiently trained with a contrastive objective and an in-batch approximation to full corpus retrieval. Zero-shot evaluation on 16 tasks including classification, fact probing and question answering demonstrates that NPM outperforms significantly larger parametric models, either with or without a retrieve-and-generate approach.
It is particularly better at dealing with rare patterns (word senses or facts) and predicting rare or nearly unseen words (e.g., non-Latin script). We release the model and code at github.com/facebookresearch/NPM.
## 1 Introduction
Current large language models, despite their wide use and impressive performance, are expensive to scale, difficult to update, and struggle with long-tail knowledge and patterns (Kandpal et al., 2022). Recent work follows a retrieve-and-generate approach to partially address these issues (Lewis et al., 2020; Izacard et al., 2022); however, their final predictions are still made by a parametric model. In particular, they still include a softmax over a finite vocabulary, which limits expressivity (Yang et al.,
2018; Pappas et al., 2020) and can make them reluctant to predict rare or unseen tokens (e.g., *Thessaloniki* in Figure 1).
In this paper, we introduce NPM, the first NonParametric Masked Language Model that predicts tokens solely based on a nonparametric distribution over *phrases* in a text corpus (Figure 1).
NPM consists of an *encoder* that maps the text
![0_image_0.png](0_image_0.png)
Figure 1: An illustration of NPM. The *encoder* maps a masked sentence into a dense vector, and retrieves the nearest phrase from a *reference corpus*. NPM can fill in the [MASK] with multiple tokens, e.g., *Thessaloniki*
(4 BPE tokens) and unseen words, e.g., 반포대교 (12 BPE tokens).
into a fixed-sized vector, and a *reference corpus* from which NPM retrieves a phrase and fills in the [MASK]. It, crucially, does not have a softmax over a fixed vocabulary, but instead has a *fully* nonparametric distribution over phrases. This is in contrast to a recent body of work that incorporates nonparametric components in a parametric model (Borgeaud et al., 2022; Izacard et al., 2022; Zhong et al., 2022b).
Training such a nonparametric model introduces two key challenges: (1) full corpus retrieval during training is expensive, and (2) learning to predict an arbitrary length phrase without a decoder is nontrivial. We address the first challenge by using inbatch approximations to full corpus retrieval (Wu et al., 2020; Zhong et al., 2022b), and the second by extending span masking (Joshi et al., 2020) and a phrase-level contrastive objective (Oord et al.,
2018; Lee et al., 2021).
We perform zero-shot evaluation on 16 tasks including classification, fact probing and question answering. They include temporal shift and wordlevel translation tasks that highlight the need to predict new facts or rare phrases. We compare with 2097 a range of competitive baselines including encoderonly (Liu et al., 2019), encoder-decoder (Raffel et al., 2020), and decoder-only models (Zhang et al.,
2022; Brown et al., 2020). We also compare with a retrieve-and-generate approach that feeds a concatenation of the input and passages to parametric models using off-the-shelf retrieval. Results show that NPM is significantly more parameter-efficient, outperforming up to 500x larger parametric models and up to 37x larger retrieve-and-generate models. It is particularly good at (1) predicting rare words (e.g., an entity split into multiple BPE tokens such as *Thessaloniki*) and (2) disambiguating word senses (e.g., *cheap* may indicate *inexpensive* or *of very poor quality*; Figure 1). Finally, our evaluation on an entity translation task demonstrates that NPM can predict a word consisting of characters that are extremely rare if not unseen (e.g.,
non-Latin script; Figure 1).
In summary, our contributions are as follows.
1. We introduce NPM, the first nonparametric masked language model that fills in the
[MASK] solely from a phrase-level nonparametric distribution over a corpus.
2. We introduce a novel training scheme to train NPM on unlabeled data. We completely remove the softmax over the output vocabulary, enabling an effectively unbounded output space by predicting any n-gram.
3. Zero-shot evaluation on 16 downstream tasks shows that NPM outperforms significantly larger parametric models, are better on rare patterns, scale well, can be efficiently updated at test time, and can predict extremely rare if not unseen tokens (e.g., words in non Latin script).
## 2 Related Work
Language Models (LMs). Large LMs trained on a vast amount of text are shown to perform a wide range of downstream tasks in a zero-shot manner by converting a task into a cloze format (Radford et al., 2019; Brown et al., 2020). This is possible because a variety of knowledge is encoded in the parameters of the models. Recent work has scaled parametric LMs by adding more parameters (Brown et al.,
2020; Rae et al., 2021; Chowdhery et al., 2022)
which can be very expensive in practice. Moreover, such models struggle with predicting rare words or entities, and cannot be updated over time.
There has been a recent body of work that incorporates the nonparametric component with a parametric LM. We distinguish (1) work that concatenates retrieved text to the input and trains the model with a standard LM objective (Borgeaud et al.
(2022); Izacard et al. (2022); so-called retrieveand-generate approaches) from (2) work that retrieves tokens from a large text corpus to estimate a probability distribution that is interpolated with the output distribution from a standard LM (Khandelwal et al. (2020); Yogatama et al. (2021); Zhong et al. (2022b); Lan et al. (2023); so-called kNN
models). Our work is closely related to such a line of work and can be seen as an extreme version of the kNN approach with no interpolation. However, our work is the first that models a *fully* nonparametric distribution by entirely removing the softmax over a finite vocabulary. This offers a range of new functionalities, such as modeling a distribution over phrases, or predicting rare or unseen words.
Bottleneck in softmax. Most if not all language models use a softmax function that gives a categorical probability distribution over a finite vocabulary. Yang et al. (2018) showed that this softmax is a low-rank approximation of a high-rank output space, making the model less expressive. Pappas et al. (2020) discussed that a fixed output vocabulary makes language models resistant to adaptation to new domains and tasks. We share the motivation with such prior work and propose to use a nonparametric output space to address these issues.
Moreover, although not explicitly explored in this paper, our work that completely removes the softmax over the vocabulary can make training more efficient, especially when the vocabulary is large
(e.g., multilingual models (Conneau et al., 2020)).
Nonparametric models. In nonparametric models, the data distribution is not defined by a fixed set of parameters, but is rather a function of the available data (Siegel, 1957; Hollander et al., 2013).
Having complexity that grows as the data grows, they are differentiated from parametric models whose complexity is bounded as a priori. Freeman et al. (2002) noted that the term nonparametric does not imply that they have no parameters, but rather that the number and nature of the *effective* parameters are flexible and can depend on the data.
Recent work in NLP has explored nonparametric inference without training (Khandelwal et al.,
2020; He et al., 2021; Xu et al., 2022), or trained the nonparametric model on the labeled data for a specific downstream task (Seo et al., 2018, 2019; Lee et al., 2021). In contrast, our work trains a fully
![2_image_0.png](2_image_0.png)
nonparametric language model without the labeled data and performs a range of tasks zero-shot.
## 3 Method
We introduce NPM, the first NonParametric Masked Language Model. NPM consists of an encoder and a reference corpus, and models a nonparametric distribution over a reference corpus
(Figure 1). The key idea is to map all the phrases in the corpus into a dense vector space using the encoder and, when given a query with a [MASK]
at inference, use the encoder to locate the nearest phrase from the corpus and fill in the [MASK].
Encoder-only models are competitive representation models (Patel et al., 2022), outperforming the other two classes of models in classification tasks (Section 5.4). However, existing encoderonly models are unable to make a prediction whose number of tokens is unknown, making their use cases limited without fine-tuning. NPM addresses this issue, since it can fill in the [MASK] with an arbitrary number of tokens by retrieving a *phrase*.
We first describe inference of NPM assuming a learned encoder (Section 3.1), and then describe how we train the encoder to map the text into a good vector space (Section 3.2).
## 3.1 Np**M: Inference**
Overview. The encoder maps every distinct phrase in a reference corpus C into a dense vector space. At test time, the encoder maps the masked query into the same vector space and retrieves phrases from C to fill in the [MASK]. Here, C does not have to be the same as the training corpus, and can be replaced or scaled at test time without re-training the encoder.
In practice, there is a significant number of phrases in the corpus, and it is expensive to index all of them. We therefore use a technique from Lee et al. (2021) that represents a phrase with *token* representations of the start and the end of the phrase. In this approach, we index representations of each distinct token in C, and then at test time, use a k nearest neighbor search for the start and the end of the phrase, separately. Consider Figure 2 as an example. We represent a query with two vectors, q start and q end. We then use each to retrieve the start and the end of the plausible phrases—in this case, c1 and c4, which are the start and the end of Thessaloniki, respectively.
Method. Formally, let C = {c1, · · · , cN } be a reference corpus with N tokens. We first map each token ciinto a contextualized, h-dimensional vector ci ∈ R
h by feeding the text into the encoder and take the vector that corresponds to each token:
c1...cN = Encoder(c1*...c*N ).
At inference time, NPM is given a query whose tth token is masked: q1*...q*t−1, [MASK], qt+1*...q*L.
We replace [MASK] with two special tokens
[MASKs][MASKe] and feed it into the encoder to obtain a list of h-dimensional vectors:
$$\mathbf{q}_{1}...\mathbf{q}_{L+1}=\mathrm{Encoder}(q_{1}...q_{t-1},\,[\texttt{MASK}_{\mathrm{s}}\,]\,,$$ $$[\texttt{MASK}_{\mathrm{e}}\,]\,,q_{t+1}...q_{L}).$$
We then take the vector corresponding to [MASKs]
and [MASKe] as q start and q end, respectively.1
$$\mathbf{q}^{\mathrm{start}}=\mathbf{q}_{t},\mathbf{q}^{\mathrm{end}}=\mathbf{q}_{t+1}.$$
We then make a prediction via:
$$\operatorname*{argmax}_{v^{*}\in\mathcal{V}^{*}}\sum_{i\leq j}\mathbb{I}[v^{*}=c_{i:j}]\bigg($$ $$\exp(\operatorname*{sim}(\mathbf{q}^{\mathrm{start}},\mathbf{c}_{i}))+\exp(\operatorname*{sim}(\mathbf{q}^{\mathrm{end}},\mathbf{c}_{j}))\bigg),$$
where V∗is a set of possible n-grams defined by the vocabulary V and sim is a pre-defined similarity function that maps a pair of vectors into a scalar 1This allows obtaining two vectors without encoding the query twice, e.g., unlike Lee et al. (2021)
![3_image_0.png](3_image_0.png)
Sequence to mask
In the **2010** NFL season, **the Seattle Seahawks** made history by making it
into the playoffs despite having a 7–9 record. (…) The Seahawks lost **to the**
Bears in their second game, 35–24.
Other sequence in the batch
Russell Wilson's first game against **the Seattle Seahawks** (…) when they
lost Super Bowl XLIX **to the** New England Patriots. In the **2010** season, the
Seahawks became the first team in NFL history (..)
Masked sequence
In the [masks][maske] NFL season, [masks][maske] made history by making it into the playoffs despite having a 7–9 record. (…) The Seahawks lost [masks][maske] Bears in their second game, 35–24.
value. In practice, iterating over N tokens is infeasible. We thus use an approximation using a fast nearest neighbor search for the start and the end separately. Details are provided in Appendix A.1.
Similarity function. The choice of similarity function can be flexible. We follow Zhong et al. (2022b) in using a scaled inner product sim(h1, h2) = h√1·h2 h
, where h is a dimension of the token vectors.
## 3.2 Np**M: Training**
NPM is trained on unlabeled text data. We describe the masking strategy first (Section 3.2.1), and then the training objective (Section 3.2.2).
## 3.2.1 Masking
We extend span masking (Joshi et al., 2020), which masks spans (consecutive tokens) whose length is sampled from a geometric distribution. Our span masking differs from Joshi et al. (2020) in two ways. First, we mask spans if they co-occur in the other sequences in the batch to guarantee inbatch positives during training (Section 3.2.2). For instance, masked spans in Figure 4 are '*2010*', 'the Seattle Seahawks' and '*to the*' all of which are found in the other sequences. Second, instead of replacing each token in the span with a [MASK],
we replace the whole span with two special tokens
[MASKs][MASKe]. For instance, each of '*2010*',
'*the Seattle Seahawks*' and '*to the*' is replaced with
[MASKs][MASKe]. This is to obtain the start and the end vectors for each span as we do at inference.
## 3.2.2 Training Objective
Key idea. We illustrate an example in Figure 3. The masked span is '*the Seattle Seahawks*', thus the model should retrieve a phrase
'*the Seattle Seahawks*' from other sequences in the reference corpus when it is given a query like this at test time. Specifically, we should encourage the [MASKs] vector to be closer to
...the Seattle Seahawks... and the [MASKe] vector to be closer to ...the Seattle **Seahawks**... , while being distant from other tokens. We train the model to do so by approximating the full corpus as the other sequences in the batch. Concretely, we train the model to retrieve the start and the end of the span
'*the Seattle Seahawks*' from other sequences in the same batch. Note that our masking strategy ensures that every masked span has a co-occurring span in the batch (Section 3.2.1).
Obtaining vector representations. Consider the i-th sequence in the batch that consists of L tokens, x i = x i1
...xiL
. We denote xˆ
i = ˆx i1
...xˆ
i L
as a consequence of span masking over x i. Both x iand xˆ
i are fed into the encoder, and each token is mapped into an h-dimensional vector:2
$$\begin{array}{r c l}{{{\bf x}_{1}^{i}\cdot\cdot\cdot{\bf x}_{L}^{i}}}&{{=}}&{{\mathrm{Encoder}(x_{1}^{i}\cdot\cdot\cdot x_{L}^{i}),}}\\ {{{\hat{\bf x}}_{1}^{i}\cdot\cdot\cdot{\hat{\bf x}}_{L}^{i}}}&{{=}}&{{\mathrm{Encoder}({\hat{x}}_{1}^{i}\cdot\cdot\cdot{\hat{x}}_{L}^{i}).}}\end{array}$$
Training objective. We consider a masked span in xi, represented with [MASKs][MASKe], denoted as xˆ
it, xˆ
it+1. We then denote g i t as the original n-gram that were replaced by xˆ
it, xˆ
it+1.
We now define the objective for this masked span, and the final objective is summed over all masked spans. The training objective for this masked span is defined as
− log Py∈Y+ s (g i t ) exp(sim(ˆx i t, y)) Py∈Y+ s (g i t )∪Y− s (g i t ) exp(sim(ˆx it, y)) + log Py∈Y+ e (g i t ) exp(sim(ˆx i t+1, y)) Py∈Y+ e (g i t )∪Y− e (g i t ) exp(sim(ˆx i t+1, y))!.
Here, sim(·, ·) is a similarity function defined in Section 3.1, and Y
+
s(g i t), Y−
s(g i t), Y
+
e(g i t) and Y−
e(g i t) are start positives, start negatives, *end positives* and *end negatives* of g i t
, respectively, which are defined in the next paragraph. This objective follows a phrase-level contrastive learning objectives in prior work (Lee et al., 2021; Ram et al.,
2021; Deng et al., 2021; Kulkarni et al., 2022) with an extension that allows *multiple* positives.
In-batch positives and negatives. The start positives and the end positives are the start and the end of the spans to be retrieved. The start negatives and the end negatives are tokens that are not the start positives and not the end positives, respectively.
More formally:
Y + s(g i t) = x j m|g i t = x j m...x j m+|g i t|−1 & i ̸= j , Y − s(g i t) = x j m|g i t ̸= x j m...x j m+|g i t|−1 & i ̸= j , Y + e(g i t) = x j m|g i t = x j m−|g i t|+1...xjm & i ̸= j , Y − e(g i t) = x j m|g i t ̸= x j m−|g i t|+1...xjm & i ̸= j .
Here, |g i t| indicates the length of the span g i t
.
## 4 Training Details
Training data. We use English Wikipedia (August 2019) and an English portion of CC-News
(Mackenzie et al. (2020), February 2019) for training, which contains 13B tokens in total. The data is segmented into sequences, each with up to 256 tokens.
2The unmasked sequence and the masked sequence may have different lengths before padding, but we pad them to have the same length.
Training. We use the model architecture and initial weights of RoBERTa large (Liu et al., 2019),
consisting of 354M parameters. Training is done for 100,000 steps, using thirty-two 32GB GPUs.
One batch consists of 512 sequences (131,072 tokens). We use an Adam optimizer (Kingma and Ba, 2014) with a learning rate of 3 × 10−5, weight decay of 0.01 and 4, 000 steps of warm-up.
Batching. The choice of batching is important in in-batch approximations, as it determines the quality of positives and negatives. For instance, Zhong et al. (2022b) uses BM25 to ensure the sequences in the same batch are likely to share the same topic.
With a pretraining corpus with billions of tokens, it can be significantly expensive to build a BM25 index. Therefore, we instead construct the batch by grouping sequences from the same document and assigning them to the same batch.3 This trick ensures that (a) positives (spans that share the string)
are likely to share the context, reducing false positives, and (b) negatives are those that the model is likely to be confused with, thus training against them helps the model better identify positives. During training, we gather all sequences from multiple GPUs to increase the size of the effective batch and make in-batch approximation more effective.
## 5 Experiments: Closed-Set Tasks
We perform zero-shot evaluation on closed-set tasks where a small set of candidates is given.
## 5.1 Evaluation Datasets
We include nine classification datasets that are known for not necessarily requiring factual knowledge: AGNews (Zhang et al., 2015), Yahoo (Zhang et al., 2015), Subj (Pang and Lee, 2004), SST2 (Socher et al., 2013), MR (Pang and Lee, 2004),
Rotten Tomatoes (RT), CR (Hu and Liu, 2004),
Amazon polarity (Amz, McAuley and Leskovec (2013)) and RTE (Dagan et al., 2005). The tasks range from topic classification and sentiment analysis to subjectivity classification and textual entailment. Statistics are provided in Appendix B.
## 5.2 Baselines
We compare with the encoder-only, the decoderonly and the encoder-decoder models with various sizes (354M to 175B parameters). We include RoBERTa (Liu et al., 2019) as the encoder-only, 3Documents that are not long enough to construct a batch are grouped with each other.
| Model | # Params | AGN | Yahoo | Subj | SST-2 | MR | RT | CR | Amz | RTE | Avg |
|---------------------------------------------------------|------------|-------|---------|--------|---------|------|------|------|-------|-------|-------|
| Baselines (encoder-only) RoBERTa (Gao et al., 2021) | 1.0x | - | - | 51.4 | 83.6 | 80.8 | - | 79.5 | - | 51.3 | - |
| RoBERTa | 1.0x | 71.3 | 41.4 | 67.6 | 84.5 | 81.7 | 81.1 | 80.4 | 83.5 | 57.4 | 72.1 |
| Baselines (encoder-decoder) T5 | 2.2x | 72.0 | 51.3 | 54.9 | 57.5 | 57.7 | 59.1 | 56.4 | 59.3 | 55.6 | 58.2 |
| T5 3B | 8.5x | 80.5 | 53.6 | 54.8 | 59.6 | 58.6 | 57.3 | 53.7 | 57.0 | 58.5 | 59.3 |
| Baselines (decoder-only) GPT-2 (Shi et al., 2022) | 2.2x | 67.4 | 49.7 | 60.8 | 55.3 | 54.6 | 53.0 | 66.2 | 57.6 | 53.1 | 57.5 |
| + PMI (Shi et al., 2022) | 2.2x | 65.1 | 48.8 | 62.5 | 76.5 | 74.6 | 74.1 | 82.8 | 76.2 | 54.2 | 68.3 |
| GPT-2 kNN† (Shi et al., 2022) | 2.2x | 29.8 | 37.0 | 50.0 | 47.1 | 49.9 | 49.1 | 69.3 | 57.4 | 54.1 | 49.3 |
| GPT-2 kNN-LM† (Shi et al., 2022) | 2.2x | 78.8 | 51.0 | 62.5 | 84.2 | 78.2 | 80.6 | 84.3 | 85.7 | 55.6 | 73.4 |
| GPT-3 (Holtzman et al., 2021) | 500x | 75.4 | 53.1 | 66.4 | 63.6 | 57.4 | 57.0 | 53.8 | 59.4 | 56.0 | 60.2 |
| + PMI (Holtzman et al., 2021) | 500x | 74.7 | 54.7 | 64.0 | 71.4 | 76.3 | 75.5 | 70.0 | 75.0 | 64.3 | 69.5 |
| Ours (encoder-only, nonparametric) NPM† 1.0x | 74.5 | 53.9 | 75.5 | 87.2 | 83.7 | 86.0 | 81.2 | 83.4 | 61.7 | 76.4 | |
| Full fine-tuning (reference) RoBERTa (Gao et al., 2021) | 1.0x | - | - | 97.0 | 95.0 | 90.8 | - | 89.4 | - | 80.9 | - |
T5 (Raffel et al., 2020) as the encoder-decoder, and GPT-2/3 (Radford et al., 2019; Brown et al.,
2020) as the decoder-only model. For the decoderonly models, we additionally apply PMI (Holtzman et al., 2021) for better calibration of the model output. We also compare with Shi et al. (2022) who use kNN inference using GPT-2 with PMI. In particular, (1) GPT-2 kNN uses kNN inference without training, and (2) GPT-2 kNN-LM interpolates distributions from GPT-2 and GPT-2 kNN.
## 5.3 Setup
We use the templates and verbalizers from Shi et al.
(2022) for all models. When available, we use fuzzy verbalizers from Shi et al. (2022). We use a domain-specific reference corpus: a union of the English Wikipedia and CC News for AGN, Yahoo and RTE, a subjectivity corpus for Subj, and a review corpus for sentiment classification datasets.
Their sizes vary from 15M tokens to 126M tokens.
Details are in Appendix B. Fast similarity search is done using FAISS (Johnson et al., 2019) with the HNSW index. We use k = 4096 for inference.
## 5.4 Results
NPM outperforms baselines in the zero-shot setting
(Table 1). We discuss the results in detail below.
Comparison between baselines. Among parametric models, RoBERTa achieves the best performance, outperforming larger models including
| RoBERTa cheaper than an iPod. It was <mask>. Positive cheap construction. It was <mask>. Positive NPM SINGLE cheap construction. It was <mask>. Positive cheaper than an iPod. It was <mask>. Negative | Sim(cheap, <m>) = 27.3 Sim(cheap, <m>) = 27.5 Sim(cheap, cheap) = 28.0 Sim(<m>, <m>) = 27.9 Sim(cheap, <m>) = 28.8 Sim(cheap, <m>) = 28.5 Sim(cheap, cheap) = 15.9 Sim(<m>, <m>) = 15.7 |
|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Retrieved context for <mask>: 10/10, would buy this cheap awesome gaming headset again. Retrieved context for <mask>: Item delivered broken. Very cheaply made and does not even function. | |
GPT-3. This is perhaps surprising, and is likely because bidirectionality of the encoder-only model plays a vital role, as claimed in Patel et al. (2022).
The kNN-LM approach from Shi et al. (2022),
which incorporates the nonparametric component to the parametric model, outperforms all other baselines. Nonetheless, solely relying on retrieval
(kNN) performs poorly with GPT-2, suggesting that using kNN at inference only is limited.
Sentiment analysis: {positive, negative} with fuzzy verbalizers Baselines versus NPM. NPM significantly outperforms all baselines, achieving consistently competitive performance over all datasets. This indicates that, even for tasks that do not explicitly require external knowledge, nonparametric models are very competitive.
Qualitative analysis. Figure 5 depicts predictions from RoBERTa and NPM on a sentiment
![6_image_0.png](6_image_0.png)
BM25+OPT
analysis task. The first example uses *cheap* to indicate *inexpensive*, and the second example uses cheap to indicate *of very poor quality*. RoBERTa predicts Positive to both, while NPM makes correct predictions by retrieving the context that uses *cheap* in the same context as the input.
We also find that representations from NPM lead to better word sense disambiguation. For instance, RoBERTa assigns a high similarity score between cheap (*inexpensive*) and cheap (*of very poor quality*). On the other hand, NPM successfully assigns a low similarity score between *cheap* and *cheap*,
even though their surface forms are the same.
## 6 Experiments: Open-Set Tasks
We include zero-shot evaluation on open-set tasks whose answer can be any arbitrary-length string.
## 6.1 Evaluation Datasets
We evaluate on seven datasets: T-REx and Google-RE from LAMA (Petroni et al., 2019),
KAMEL (Kalo and Fichtel, 2022), Natural Questions (NQ, Kwiatkowski et al. (2019)), TriviaQA
(TQA, Joshi et al. (2017)), TempLAMA22 19 and an entity translation task. In particular, TempLAMA
requires probing knowledge with temporal updates, motivated by Dhingra et al. (2022) and Jang et al.
(2022). The entity translation task involves a translation of an entity from English to other, non-Latin languages, requiring the model to predict extremely rare (if not unseen) characters. See Appendix B for details and statistics of all datasets.
## 6.2 Baselines
We compare with T5 (Raffel et al., 2020) as the encoder-decoder, and GPT-3 (Brown et al., 2020) and OPT (Zhang et al., 2022) as the decoder-only models. The encoder-only models are not applicable for open-set tasks since the number of tokens to predict is unknown.
Prior work found that a "retrieve-and-generate" approach that concatenates the input and passages from an off-the-shelf retrieval system is often helpful in knowledge-dependent tasks (Kandpal et al.,
2022). We add them as baselines, using up to five passages from BM25 (Robertson et al., 2009).
![7_image_0.png](7_image_0.png)
## 6.3 Setup
For all datasets, we report Exact Match (EM). The LAMA test data is biased toward frequent entities because they are filtered to only include answers that are single tokens based on BERT (Devlin et al.,
2019). Since we do not want our evaluation to be biased toward overly frequent entities, we report a micro-averaged accuracy over the data whose answers are 1, 2, 3 and 4+ grams, respectively.
Other datasets do not have such filtering, therefore we report average EM.
As a reference corpus, we use the English Wikipedia from 08/01/2019, consisting of 810M
tokens. For TempLAMA22 19 , we use the English Wikipedia from 08/01/2022, consisting of 858M
tokens.
For NPM, we find combining with sparse retrieval significantly helps, likely because dense retrieval and sparse retrieval capture complementary features (Karpukhin et al., 2020; Seo et al., 2019).
In particular, we reduce the search space to the top 3 passages based on BM25 and perform dense search as done in Kassner and Schütze (2020).
## 6.4 Results
Figure 6 show results on five knowledge tasks.
First, performance of parametric models largely depends on the number of parameters, as it has been claimed in much of prior work (Brown et al., 2020; Kandpal et al., 2022). The retrieve-and-generate approach that combines parametric models with BM25 significantly improves performance.
NPM outperforms or is on par with significantly larger baselines across all datasets. It substantially outperforms all models on two LAMA datasets, including 500x larger GPT-3 either with or without BM25. On KML, TQA and NQ, NPM consistently outperforms 37x larger models with or
| Model | #Params | Unchanged | Changed | AVG |
|-----------------|-----------|-------------|-----------|-------|
| Baselines T5 | 2.2x | 1.9 | 0.4 | 1.1 |
| T5 3B | 8.5x | 1.8 | 0.4 | 1.1 |
| OPT 6.7B | 19x | 2.5 | 1.0 | 1.7 |
| OPT 13B | 37x | 4.9 | 2.1 | 3.5 |
| BM25 + T5 | 2.2x | 13.7→14.9 | 3.0→20.1 | 17.5 |
| BM25 + T5 3B | 8.5x | 11.9→12.0 | 2.2→17.8 | 14.9 |
| BM25 + OPT 6.7B | 19x | 10.2→8.2 | 1.7→11.3 | 9.7 |
| BM25 + OPT 13B | 37x | 14.8→14.4 | 2.8→16.6 | 15.5 |
| Ours NPM | 1.0x | 18.9→19.5 | 2.9→17.5 | 18.5 |
without BM25. This is impressive given that NPM
is not trained on data with questions.
It is also worth noting that sparse retrieval is critical in NPM, e.g., without sparse retrieval, performance on LAMA-TREx drops from 34.5 to 16.1.
We think this is because (1) sparse retrieval and dense retrieval capture complementary features, and (2) the removal of approximation in search improves search quality. We think future work can explore completely removing sparse retrieval, as has been done in Lee et al. (2021) to improve Seo et al. (2019).
Impact of the reference corpus size. Figure 7 reports the impact of the size of the reference corpus, from 41M tokens (5%) to 810M tokens (100%).
Performance of NPM is highly correlated with the size of the reference corpus, strongly suggesting that using a larger reference corpus is important.
## Results On Temporal Knowledge Tasks. Table 2
reports results on TempLAMA. NPM retains its performance on the unchanged set (18.9 →19.5)
and successfully updates its answers on the changed set (2.9 → 17.5). Its performance is significantly better than the performance of parametric models with up to 13B parameters, and is on par with a larger model with the retrieve-and-generate approach, which also successfully updates its answer by leveraging the updated corpus. This is in agreement with prior work that shows the model with a nonparametric component adapts to temporal updates by replacing the reference corpus at test time (Izacard et al., 2022). Nonetheless, the retrieve-and-generate approach is still significantly worse than NPM when the target entities are rare, which we show in the next paragraph.
![8_image_0.png](8_image_0.png)
| Model | #Params | #L | w/o BM25 | w/ BM25 |
|---------------------------------------|-----------|------|------------|-----------|
| Baselines, English-only T5 2.2x | 0.2 | 1.9 | | |
| T5 3B | 8.5x | 0.5 | 4.4 | |
| OPT 6.7B | 19x | 0.4 | 22.3 | |
| OPT 13B | 37x | 1.0 | 24.6 | |
| Ours, English-only NPM 1.0x | 52.4 | | | |
| References, Multilingual mT5 3.4x 101 | 1.3 | 19.0 | | |
| mT5 XL | 11x | 101 | 4.1 | 56.6 |
| BLOOM 3B | 8.5x | 46 | 0.0 | 17.4 |
| BLOOM 7.1B | 20x | 46 | 0.1 | 26.0 |
Performance on rare entities. We break down the instances on LAMA and TempLAMA based on the number of BPE splits of the target entity, e.g.,
Thessaloniki is one word that is split into 4 BPE
tokens, thus the number of splits is 3. Since BPE
splits a word if they are rare, the number of BPE
splits indicates the rarity of the entity. We compare NPM with GPT-3 and BM25+GPT-3 on LAMA,
and BM25+T5 (770M and 3B) on TempLAMA, the two most competitive baselines on each dataset.
Figure 8 reports results. On LAMA, NPM outperforms GPT-3 fairly consistently, with larger gains as the number of BPE splits increases. On TempLAMA, while BM25+T5 is competitive on frequent entities with zero BPE split, it consistently lags behind NPM with ≥ 1 BPE splits. This suggests that NPM is particularly good at addressing rare entities, compared to not only parametric models without retrieval but also the retrieve-andgenerate approach.
Results in Entity Translation. Results on the entity translation task are shown in Table 3 (perlanguage results are reported in Table 10 of Appendix C). T5 and OPT struggle to perform the task, both with and without BM25 retrieval. In contrast, NPM performs well across all languages.
In order to better calibrate performance of NPM,
we provide reference performance of models that are purposely trained on the multilingual datamT5 (Xue et al., 2021) and BLOOM (Scao et al.,
2022). NPM outperforms 3.4x larger mT5 and 20x larger BLOOM, and approaches 11x larger mT5, even though it is trained on English. We think strong cross-lingual transferability of NPM
is likely because it can retrieve a phrase based on its surrounding context, even if it has not seen the exact word during training.
## 7 Conclusion
We introduced NPM, a nonparametric masked language model that replaces a softmax over the output vocabulary with a nonparametric distribution over a reference corpus. NPM can be efficiently trained using a contrastive objective and an in-batch approximation to a full corpus. Zero-shot evaluation on 16 tasks shows that NPM outperforms significantly larger parametric models. NPM is particularly good at rare patterns (word senses or facts),
scaling and updating at test time, and predicting extremely rare if not unseen characters.
## Limitation
Scaling through the inference corpus. The size of the reference corpus is an additional dimension for model scale in nonparametric models. In this paper, we scale the corpus up to nearly 1B tokens, which is still smaller than the training data of very large language models (Brown et al., 2020; Rae et al., 2021). We think future work can scale it further using tools such as Distributed FAISS (Johnson et al., 2019) or ScaNN (Guo et al., 2020).
Significant memory usage. Using NPM saves GPU compute and memory compared to using models with more parameters. However, NPM requires more RAM and disk memory due to embeddings of a reference corpus. For instance, the largest corpus in our experiments (full English Wikipedia)
requires 70GB of RAM and 1.4TB of disk memory. Future work can build more efficient NPM as done in prior work in nearest neighbor search (Jegou et al., 2010; Norouzi et al., 2012; Ge et al., 2014; Izacard et al., 2020; Yamada et al., 2021).
Exploration of larger vocabulary. Large vocabulary is known to lead performance gains (Conneau et al., 2020) but is bounded in memory costs.
Previous work explored more efficient softmax approximations (Morin and Bengio, 2005; Chen et al.,
2016; Grave et al., 2017). Our nonparametric training offers an alternative by removing the softmax over the vocabulary. With the RoBERTa architecture, increasing the vocab size by 2x makes the baseline training 50% more memory expensive, but does not increase the memory in training NPM.
However, this paper does not include more systematic evaluation on the effect of large vocabulary. Future work can explore training NPM with a significantly larger vocabulary to further boost performance.
Extension for generation. Our paper evaluates NPM only on prediction tasks. It is currently nontrivial to use NPM for generation, since it is the encoder-only model. Future work can explore autoregressive generation as done in Patel et al. (2022)
or use NPM for editing (Schick et al., 2022; Gao et al., 2022).
Extension to few-shot learning and fine-tuning.
Our paper focuses on zero-shot evaluation only. Future work can extend NPM to a few-shot learning setup. In fact, fine-tuning NPM is significantly easier than fine-tuning larger models such as T5, OPT
and GPT-3 which we compare NPM with, and can be explored in future work.
Better cross-lingual transfer. Our work explored cross-lingual transfer in a limited setup where the model is trained on monolingual data.
We think future work can train multilingual NPM,
and explore more comprehensive cross-lingual evaluation. In fact, nonparametric training may alleviate the burden of collecting large-scale multilingual
| Model | #Params | FS | SP | Acc | #Q/sec |
|------------------|-----------|------|-------|-------|----------|
| RoBERTa | 1.0x | 67.6 | 36.36 | | |
| NPM‡ | 1.0x | ✓ | 75.5 | 7.63 | |
| OPT 2.7B | 7.6x | 2.1 | 0.71 | | |
| OPT 2.7B + BM25‡ | 7.6x | ✓ | 8.3 | 0.28 | |
| OPT 6.7B | 19x | 4.2 | 0.18 | | |
| OPT 6.7B + BM25‡ | 19x | ✓ | 10.7 | 0.12 | |
| NPM‡ | 1.0x | ✓ | 10.8 | 4.52 | |
corpora since it makes the model less sensitive to the language coverage in the training data, and may lead to significantly better cross-lingual transfer, as we demonstrate in the entity translation task.
Limitation in speed. We find that search makes inference considerably slower than the counterpart without search. We think that (1) search can significantly be faster with better engineering (we use the default hyperparameters of the FAISS index with no tuning) or better index, and (2) the speed of NPM is still on par with the speed of significantly larger parametric models that NPM outperforms
(see Table 4). Moreover, while not explored in this work, there has been work that improves inference speed (He et al., 2021; Alon et al., 2022) that can be applied to NPM. We leave improving inference speed to future work.
## Acknowledgements
We thank Ari Holtzman, Eric Wallace, Iz Beltagy, Jinhyuk Lee, Jungsoo Park, Mark Johnson, Noah Smith, Ofir Press, Patrick Lewis, Xiang Deng, Xinxi Lyu, Zexuan Zhong, UW-NLP members and anonymous reviewers for discussion and comments on the paper. This research was supported by NSF IIS-2044660, ONR N00014-18-1-2826, ONR
MURI N00014- 18-1-2670, an Allen Distinguished Award and gifts from AI2. SM is supported by a J.P. Morgan fellowship.
## References
Uri Alon, Frank Xu, Junxian He, Sudipta Sengupta, Dan Roth, and Graham Neubig. 2022. Neuro-symbolic language modeling with automaton-augmented retrieval. In *Proceedings of the International Conference of Machine Learning*.
Mikel Artetxe, Jingfei Du, Naman Goyal, Luke Zettlemoyer, and Ves Stoyanov. 2022. On the role of bidirectionality in language model pre-training. In Proceedings of Empirical Methods in Natural Language Processing.
Akari Asai, Jungo Kasai, Jonathan H. Clark, Kenton Lee, Eunsol Choi, and Hannaneh Hajishirzi. 2021.
XOR QA: Cross-lingual open-retrieval question answering. In *Conference of the North American Chapter of the Association for Computational Linguistics*.
Bogdan Babych and Anthony F. Hartley. 2003. Improving machine translation quality with automatic named entity recognition. Proceedings of the International EAMT workshop on MT and other Language Technology Tools, Improving MT through other Language Technology Tools Resources and Tools for Building MT.
Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. 2022.
Improving language models by retrieving from trillions of tokens. In Proceedings of the International Conference of Machine Learning.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. In Proceedings of Advances in Neural Information Processing Systems.
Wenlin Chen, David Grangier, and Michael Auli. 2016.
Strategies for training large vocabulary neural language models. In Proceedings of the Association for Computational Linguistics.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.
Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020. Tydi qa: A benchmark for information-seeking question answering in typologically diverse languages. Transactions of the Association for Computational Linguistics.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the Association for Computational Linguistics*.
Ido Dagan, Oren Glickman, and Bernardo Magnini.
2005. The pascal recognising textual entailment challenge. In *Machine learning challenges workshop*.
Xiang Deng, Yu Su, Alyssa Lees, You Wu, Cong Yu, and Huan Sun. 2021. ReasonBERT: Pre-trained to reason with distant supervision. In *Proceedings of* Empirical Methods in Natural Language Processing.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Conference of the North American Chapter of the Association for Computational Linguistics*.
Bhuwan Dhingra, Jeremy R Cole, Julian Martin Eisenschlos, Daniel Gillick, Jacob Eisenstein, and William W Cohen. 2022. Time-aware language models as temporal knowledge bases. *Transactions of the* Association for Computational Linguistics.
Hady Elsahar, Pavlos Vougiouklis, Arslen Remaci, Christophe Gravier, Jonathon Hare, Frederique Laforest, and Elena Simperl. 2018. T-rex: A large scale alignment of natural language with knowledge base triples. In *Proceedings of the International Conference on Language Resources and Evaluation*.
William T Freeman, Thouis R Jones, and Egon C Pasztor. 2002. Example-based super-resolution. *IEEE*
Computer graphics and Applications.
Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Y Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, et al. 2022. Attributed text generation via post-hoc research and revision. *arXiv preprint* arXiv:2210.08726.
Tianyu Gao, Adam Fisch, and Danqi Chen. 2021.
Making pre-trained language models better few-shot learners. In *Proceedings of the Association for Computational Linguistics*.
Tiezheng Ge, Kaiming He, Qifa Ke, and Jian Sun. 2014.
Optimized product quantization. *IEEE Transactions* on Pattern Analysis and Machine Intelligence.
Édouard Grave, Armand Joulin, Moustapha Cissé, David Grangier, and Hervé Jégou. 2017. Efficient softmax approximation for GPUs. In *Proceedings of* the International Conference of Machine Learning.
Ruiqi Guo, Philip Sun, Erik Lindgren, Quan Geng, David Simcha, Felix Chern, and Sanjiv Kumar. 2020.
Accelerating large-scale inference with anisotropic vector quantization. In *Proceedings of the International Conference of Machine Learning*.
Ahmed Hassan, Haytham Fahmy, and Hany Hassan.
2007. Improving named entity translation by exploiting comparable and parallel corpora. In Proceedings of the International Workshop on Acquisition and Management of Multilingual Lexicons.
Junxian He, Graham Neubig, and Taylor BergKirkpatrick. 2021. Efficient nearest neighbor language models. In Proceedings of Empirical Methods in Natural Language Processing.
Myles Hollander, Douglas A Wolfe, and Eric Chicken.
2013. *Nonparametric statistical methods*. John Wiley & Sons.
Ari Holtzman, Peter West, Vered Schwartz, Yejin Choi, and Luke Zettlemoyer. 2021. Surface form competition: Why the highest probability answer isn't always right. In *Proceedings of Empirical Methods in Natural Language Processing*.
Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In *Knowledge Discovery* and Data Mining.
Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022. Few-shot learning with retrieval augmented language models. arXiv preprint arXiv:2208.03299.
Gautier Izacard, Fabio Petroni, Lucas Hosseini, Nicola De Cao, Sebastian Riedel, and Edouard Grave. 2020.
A memory efficient baseline for open domain question answering. *arXiv preprint arXiv:2012.15156*.
Joel Jang, Seonghyeon Ye, Changho Lee, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, and Minjoon Seo. 2022. Temporalwiki: A lifelong benchmark for training and evaluating ever-evolving language models. In *Proceedings of Empirical Methods* in Natural Language Processing.
Herve Jegou, Matthijs Douze, and Cordelia Schmid.
2010. Product quantization for nearest neighbor search. *IEEE transactions on pattern analysis and* machine intelligence, 33(1):117–128.
Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019.
Billion-scale similarity search with gpus. IEEE
Transactions on Big Data.
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2020. Spanbert:
Improving pre-training by representing and predicting spans. *Transactions of the Association for Computational Linguistics*.
Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In *Proceedings of the Association for Computational Linguistics*.
Jan-Christoph Kalo and Leandra Fichtel. 2022. Kamel:
Knowledge analysis with multitoken entities in language models. In *Proceedings of the Conference on* Automated Knowledge Base Construction.
Nikhil Kandpal, Haikang Deng, Adam Roberts, Eric Wallace, and Colin Raffel. 2022. Large language models struggle to learn long-tail knowledge. *arXiv* preprint arXiv:2211.08411.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick ˘
Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for opendomain question answering. In *Proceedings of Empirical Methods in Natural Language Processing*.
Nora Kassner and Hinrich Schütze. 2020. BERT-kNN:
Adfding a kNN search component to pretrained language models for better QA. In *Findings of the Association for Computational Linguistics: EMNLP*
2020.
Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. Generalization through memorization: Nearest neighbor language models. In *Proceedings of the International Conference on Learning Representations*.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A
method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Mayank Kulkarni, Debanjan Mahata, Ravneet Arora, and Rajarshi Bhowmik. 2022. Learning rich representation of keyphrases from text. In Findings of the Association for Computational Linguistics: NAACL
2022.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: a benchmark for question answering research. *Transactions of the* Association for Computational Linguistics.
Tian Lan, Deng Cai, Yan Wang, Heyan Huang, and Xian-Ling Mao. 2023. Copy is all you need. In *Proceedings of the International Conference on Learning* Representations.
Jinhyuk Lee, Mujeen Sung, Jaewoo Kang, and Danqi Chen. 2021. Learning dense representations of phrases at scale. In *Proceedings of the Association* for Computational Linguistics.
Kenton Lee, Ming-Wei Chang, and Kristina Toutanova.
2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the Association for Computational Linguistics.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. In Proceedings of Advances in Neural Information Processing Systems.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Joel Mackenzie, Rodger Benham, Matthias Petri, Johanne R Trippas, J Shane Culpepper, and Alistair Moffat. 2020. Cc-news-en: A large english news corpus. In *Proceedings of the ACM International* Conference on Information and Knowledge Management.
Julian McAuley and Jure Leskovec. 2013. Hidden factors and hidden topics: understanding rating dimensions with review text. In *Proceedings of the ACM*
conference on Recommender systems.
Robert C Moore. 2003. Learning translations of namedentity phrases from parallel corpora. In Proceedings of the European Chapter of the Association for Computational Linguistics.
Frederic Morin and Yoshua Bengio. 2005. Hierarchical probabilistic neural network language model. In Proceedings of the International workshop on artificial intelligence and statistics.
Mohammad Norouzi, Ali Punjani, and David J Fleet.
2012. Fast search in hamming space with multiindex hashing. In 2012 IEEE conference on computer vision and pattern recognition, pages 3108–3115.
IEEE.
Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018.
Representation learning with contrastive predictive coding. *arXiv preprint arXiv:1807.03748*.
Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the Association for Computational Linguistics.
Nikolaos Pappas, Phoebe Mulcaire, and Noah A. Smith.
2020. Grounded compositional outputs for adaptive language modeling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1252–1267, Online. Association for Computational Linguistics.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. In *Proceedings of Advances in Neural Information Processing* Systems.
Ajay Patel, Bryan Li, Mohammad Sadegh Rasooli, Noah Constant, Colin Raffel, and Chris CallisonBurch. 2022. Bidirectional language models are also few-shot learners. *arXiv preprint arXiv:2209.14500*.
Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In Proceedings of Empirical Methods in Natural Language Processing.
Nina Poerner, Ulli Waltinger, and Hinrich Schütze. 2019.
Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa.
arXiv preprint arXiv:1911.03681.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. *OpenAI*
blog.
Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. 2021. Scaling language models:
Methods, analysis & insights from training gopher.
arXiv preprint arXiv:2112.11446.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*.
Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, and Omer Levy. 2021. Few-shot question answering by pretraining span selection. In *Proceedings of the Association for Computational Linguistics*.
Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends® *in Information Retrieval*.
Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman ´
Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. 2022. Bloom: A 176bparameter open-access multilingual language model.
arXiv preprint arXiv:2211.05100.
Timo Schick, Jane Dwivedi-Yu, Zhengbao Jiang, Fabio Petroni, Patrick Lewis, Gautier Izacard, Qingfei You, Christoforos Nalmpantis, Edouard Grave, and Sebastian Riedel. 2022. Peer: A collaborative language model. *arXiv preprint arXiv:2208.11663*.
Minjoon Seo, Tom Kwiatkowski, Ankur P Parikh, Ali Farhadi, and Hannaneh Hajishirzi. 2018. Phraseindexed question answering: A new challenge for scalable document comprehension. In *Proceedings* of Empirical Methods in Natural Language Processing.
Minjoon Seo, Jinhyuk Lee, Tom Kwiatkowski, Ankur P
Parikh, Ali Farhadi, and Hannaneh Hajishirzi. 2019.
Real-time open-domain question answering with dense-sparse phrase index. In *Proceedings of the* Association for Computational Linguistics.
Weijia Shi, Julian Michael, Suchin Gururangan, and Luke Zettlemoyer. 2022. Nearest neighbor zero-shot inference. In Proceedings of Empirical Methods in Natural Language Processing.
Sidney Siegel. 1957. Nonparametric statistics. The American Statistician.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank.
In *Proceedings of Empirical Methods in Natural Language Processing*.
Zequn Sun, Wei Hu, and Chengkai Li. 2017. Crosslingual entity alignment via joint attribute-preserving embedding. In *Proceedings of the International Semantic Web Conference*.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers:
State-of-the-art natural language processing. In Proceedings of Empirical Methods in Natural Language Processing: System Demonstrations.
Ledell Wu, Fabio Petroni, Martin Josifoski, Sebastian Riedel, and Luke Zettlemoyer. 2020. Scalable zeroshot entity linking with dense entity retrieval. In Proceedings of Empirical Methods in Natural Language Processing.
Frank F. Xu, Junxian He, Graham Neubig, and Vincent Josua Hellendoorn. 2022. Capturing structural locality in non-parametric language models. In *Proceedings of the International Conference on Learning* Representations.
Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mt5: A massively multilingual pre-trained text-to-text transformer. In Conference of the North American Chapter of the Association for Computational Linguistics.
Ikuya Yamada, Akari Asai, and Hannaneh Hajishirzi.
2021. Efficient passage retrieval with hashing for open-domain question answering. In *Proceedings of* the Association for Computational Linguistics.
Jinghui Yan, Jiajun Zhang, JinAn Xu, and Chengqing Zong. 2018. The impact of named entity translation for neural machine translation. In Proceedings of the China Workshop on Machine Translation.
Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W Cohen. 2018. Breaking the softmax bottleneck: A high-rank rnn language model. In *Proceedings of the International Conference on Learning* Representations.
Dani Yogatama, Cyprien de Masson d'Autume, and Lingpeng Kong. 2021. Adaptive semiparametric language models. Proceedings of the Association for Computational Linguistics.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022.
Opt: Open pre-trained transformer language models.
arXiv preprint arXiv:2205.01068.
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015.
Character-level convolutional networks for text classification. In *Proceedings of Advances in Neural* Information Processing Systems.
Tony Z Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. In Proceedings of the International Conference of Machine Learning.
Ruiqi Zhong, Charlie Snell, Dan Klein, and Jacob Steinhardt. 2022a. Describing differences between text distributions with natural language. In Proceedings of the International Conference of Machine Learning.
Zexuan Zhong, Dan Friedman, and Danqi Chen. 2021.
Factual probing is [mask]: Learning vs. learning to recall. In Conference of the North American Chapter of the Association for Computational Linguistics.
Zexuan Zhong, Tao Lei, and Danqi Chen. 2022b. Training language models with memory augmentation. In Proceedings of Empirical Methods in Natural Language Processing.
## A Model Details A.1 Details Of Npm
Approximation at inference. Given q start and q end, we take the top k tokens with the highest similarity scores with each of them, and compute scores over spans composed by these tokens. Let c∗
i:j be a span in C from the i-th token to the j-th token, and E(c) ∈ Rh be a vector corresponding to a token c ∈ C. We find the top k tokens for the start and the end:
$$\begin{array}{r c l}{{c_{\mathrm{s_{1}}},c_{\mathrm{s_{2}}},\cdots,c_{\mathrm{s_{k}}}}}&{{=}}&{{\operatorname*{argTopk}\operatorname*{sim}(\mathbf{q}^{\mathrm{start}},\mathrm{E(c)}),}}\\ {{c_{\mathrm{e_{1}}},c_{\mathrm{e_{2}}},\cdots,c_{\mathrm{e_{k}}}}}&{{=}}&{{\operatorname*{argTopk}\operatorname*{sim}(\mathbf{q}^{\mathrm{end}},\mathrm{E(c)})}}\end{array}$$
using a fast nearest neighbor search. We then define
a set of candidate phrases C˜∗as:
$$\left(\bigcup_{i=1}^{k}\bigcup_{j=1}^{l_{\mathrm{max}}}c_{\mathrm{s}_{i}:\mathrm{s}_{i}+j-1}^{*}\right)\cup\left(\bigcup_{i=1}^{k}\bigcup_{j=1}^{l_{\mathrm{max}}}c_{\mathrm{e}_{i}-j+1:\mathrm{e}_{i}}^{*}\right),$$ and predict:
$$\operatorname{argmax}_{v^{*}\in{\mathcal{V}}^{*}}\sum_{c^{*}\in{\tilde{c}}^{*}}\mathbb{I}[v^{*}=c^{*}]\mathrm{expsim}(\mathbf{q},\operatorname{E}(c^{*})),$$
where E(c∗) ∈ R2his a vector corresponding to c∗, and V∗is a set of any possible n-grams defined by the vocabulary V.
## A.2 Training Details
All implementation was done with PyTorch (Paszke et al., 2019), PyTorch Lightning4and Huggingface Transformers (Wolf et al., 2020).
Masking. We use a masking ratio of 15% for all models, following the standard in prior work (Devlin et al., 2019; Liu et al., 2019; Joshi et al., 2020).
We implement masking as follows: (1) we first identify all possible candidate spans (spans that positives are found from other sequences in the batch),
(2) sample the length of spans to mask from a geometric distribution with a hyperparameter p = 0.5, and (3) mask the spans with respect to the sampled length until the masking budget has been spent.
We do not mask more than 128 spans from one sequence, and do not mask the span if the same span has been masked for more than ten times within the batch in order to prevent repeatedly masking overly frequent spans.
4https://github.com/Lightning-AI/
lightning For [MASKs] and [MASKe], we use the
[MASK] vocab from the RoBERTa tokenizer. Note that it is not necessary to use different tokens for
[MASKs] and [MASKe] since the Transformer can handle positional information.
## A.3 A Special Case: Npm **Single**
Along with NPM, we introduce NPM **SINGLE**,
which outputs a nonparametric distribution over every single *token* in C, instead of a *phrase*. To some extent, NPM is a strict generalization of NPM SIN-GLE, and NPM SINGLE still has a problem that existing encoder-only models have, e.g., can only fill in the [MASK] with a single token. We however think NPM SINGLE can be useful for some applications, e.g., for fine-tuning, as existing encoder-only models are used for.
Inference. Given a reference corpus C =
{c1, · · · , cN }, we construct N number of hdimensional vectors c1, *· · ·* , cN ∈ R
h by feeding the text into the encoder. At inference time, given a query whose t-th token is [MASK], we feed it into the encoder:
q1..qL = Encoder(q1..qt−1, [MASK], qt+1..qL).
We take qt as a vector that represents the [MASK]
token in the query. Finally, the prediction is made by aggregating the similarity scores to the tokens in C:
$$\operatorname*{argmax}_{v\in\mathcal{V}}\sum_{c\in\mathcal{C}}\mathbb{I}[c=v]\mathrm{exp}(\mathrm{sim}(\mathbf{q}_{t},\operatorname{E}(c))),$$
where E(c) ∈ Rhis a vector corresponding to c, and V is the vocabulary set.
In practice, since computing scores over all tokens in C is infeasible, an approximation is made by computing scores for the top k nearest neighbors only, and treating other tokens to have a similarity score of −Inf. More precisely:
$$c^{1},c^{2},\cdots,c^{k}=\operatorname*{argTopk}\operatorname*{sim}(\mathbf{q}_{t},\operatorname{E}(c))$$
are obtained by using an index (e.g., FAISS (Johnson et al., 2019)), and the following is returned as a prediction:
$$\operatorname{argmax}\sum_{v\in\mathcal{V}}\ \ \Pi[c^{i}=v]\mathrm{exp}(\mathrm{sim}(\mathbf{q}_{t},\mathrm{E}(c^{i}))).$$
Training. Let x i1
...xiL
be the i-th sequence in the batch, whose subset is replaced with [MASK] and converted to xˆ
i1
...xˆ
i L
. Both the unmasked sequence and the masked sequence are fed into the encoder, and each token is mapped into an h-dimensional vector:
$$\begin{array}{r c l}{{\mathbf{x}_{1}^{i}\cdots\mathbf{x}_{L}^{i}}}&{{=}}&{{\mathrm{Encoder}(x_{1}^{i}\cdots x_{L}^{i}),}}\\ {{}}&{{}}&{{\hat{\mathbf{x}}_{1}^{i}\cdots\hat{\mathbf{x}}_{L}^{i}}}&{{=}}&{{\mathrm{Encoder}(\hat{x}_{1}^{i}\cdots\hat{x}_{L}^{i}).}}\end{array}$$ The training objective is then defined as:
$$\sum_{t=1}^{L}\mathbb{I}[\hat{x}_{t}=\,[\texttt{MASK}\,]\,]l(x_{t}^{i},\hat{x}_{t}^{i}),$$ where $l(x_{t}^{i},\hat{x}_{t}^{i})$ is
where $i(x_{t}^{i},x_{t}^{i})$ is $$-\log\frac{\sum_{\mathbf{y}\in\mathcal{Y}^{+}(x_{t}^{i})}\exp(\sin(\hat{\mathbf{x}}_{t}^{i},\mathbf{y}))}{\sum_{\mathbf{y}\in\mathcal{Y}^{+}(x_{t}^{i})\cup\mathcal{Y}^{-}(x_{t}^{i})}\exp(\sin(\hat{\mathbf{x}}_{t}^{i},\mathbf{y}))}.$$ Here, $\sin(\cdot,\cdot)$ is a similarity function defined in
Here, sim(·, ·) is a similarity function defined in Section 3.1, and Y
+(x it) and Y−(x it) are *positives* and *negatives* of x it—tokens from *other* sequences in the batch that share and do not the vocab, respectively.
$$\begin{array}{r c l}{{{\mathcal{Y}}^{+}(x_{t}^{i})}}&{{=}}&{{\left\{x_{m}^{j}|x_{t}^{i}=x_{m}^{j}\;\mathrm{and}\;i\neq j\right\},}}\\ {{{\mathcal{Y}}^{-}(x_{t}^{i})}}&{{=}}&{{\left\{x_{m}^{j}|x_{t}^{i}\neq x_{m}^{j}\;\mathrm{and}\;i\neq j\right\}.}}\end{array}$$
## A.4 Inference On Closed-Set Tasks
When applying NPM and NPM SINGLE on closedsetk tasks, we closely follow Shi et al. (2022) who adapts kNN-LM for zero-shot inference on classification tasks. We assume a fuzzy verbalizer:
f : Y → V˜, where Y is a set of labels in the task and *V ∈ V* ˜ is a subset of the vocabulary V. The fuzzy verbalizer maps a label to a set of tokens that express the label, e.g., in a sentiment classification task, f(Positive) includes awesome or *great*,
and f(Negative) includes terrible or *broken*.
NPM **SINGLE** is given a query vector q ∈ R
h and predicts:
$\underset{y\in\mathcal{Y}}{\operatorname{argmax}}\sum\mathbb{I}[c\in f(y)]\text{exp}\left(\frac{\text{sim}(\mathbf{q},\operatorname{E}(c))}{\tau}\right),$ where $\operatorname{E}(c)\in\mathbf{R}^h$ is a vector corresponding to $c$.
where E(c) ∈ Rhis a vector corresponding to c, and τ is a hyperparameter.
NPM is given a query vector q ∈ R
2hand predicts:
dicts: $$\operatorname*{argmax}_{y\in\mathcal{Y}}\sum_{c^{*}\in C^{*}}\mathbb{I}[c^{*}\in f(y)]\text{exp}\left(\frac{\text{sim}(\mathbf{q},\text{E}(c^{*}))}{\tau}\right),\quad\frac{\text{sim}}{\tau}$$
where E(c∗) ∈ R2his a vector corresponding to c∗. Note that this is essentially equivalent to
$$\begin{array}{c}{{\operatorname{argmax}_{y\in\mathcal{Y}}\sum_{c\in\mathcal{C}}\mathbb{I}[c\in f(y)]\mathrm{exp}\Bigg(}}\\ {{\qquad\qquad\frac{\mathrm{{sim}}(\mathbf{q}^{\mathrm{start}},\mathrm{E}(c))}{\tau}+\frac{\mathrm{{sim}}(\mathbf{q}^{\mathrm{end}},\mathrm{E}(c))}{\tau}\Bigg).}}\end{array}$$ We use $\tau=5.0$ for both NPM single and NPM.
## B Evaluation Details
Table 5 reports statistics and templates on each downstream task, and Table 6 reports statistics of the retrieval corpus used in experiments.
For closed-set tasks, we use templates and verbalizers provided by Shi et al. (2022) for most datasets, except two datasets. For RTE, we use the template from Artetxe et al. (2022). For Subj, we write our own template, motivated by Zhong et al. (2022a) that found Subj is mainly about differentiating a review and a summary. For open-set tasks, we use templates provided by the original authors, except NQ and TQA for which we use the templates from GPT-3 (Brown et al., 2020). Due to limited computation resource, we subsample the data to include up to 3,000 examples, following the standard from prior work (Zhao et al., 2021; Shi et al., 2022). For closed-set tasks, we use exactly the same set of data as Shi et al. (2022), and for open-set tasks, we use the same script to subsample the data. For LAMA T-REx and Google RE, we subsample up to 1,000 examples for each of 1, 2, 3 and 4+ grams. For the entity translation task, we subsample up to 1,000 examples per language.
The following is a more detailed description of open-set tasks used in Section 6.
LAMA (Petroni et al., **2019)** is a factual probing benchmark that is designed to quantify the amount of factual knowledge in the model. It requires the model to predict the object given a subjectrelation tuple in a cloze format. We use two versions of LAMA (Petroni et al., 2019): (1) LAMA
T-REx, derived from Elsahar et al. (2018) and (2)
LAMA Google-RE, derived from the Google-RE
corpus.5 For each version, we additionally consider the UHN (UnHelpfulNames) subset (Poerner et al.,
2019)) where instances whose subject strongly hints the object by names (e.g., Apple Watch 5https://code.google.com/archive/p/
relation-extraction-corpus
| Dataset | |D| | |Ds| # labels | Example | |
|---------------------------|---------|-----------------|-----------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Closed-set tasks AGN | 120,000 | 3,000 | 4 | Indiana defends its NCAA mens's soccer title by edging UC Santa Barbara in penalty kicks. The text topic is about [MASK]. ([MASK]={politics, sports, business, technology}) Company for cemecal at espaniea? Answer: Can you give us more info? The text topic is |
| Yahoo | 60,000 | 3,000 | 10 | about [MASK]. ([MASK]={society, science, health, education, computer, sports, business, entertainment, family, politics}) |
| Subj | 2,000 | 2,000 | 2 | He tells mitchell that he is now in debt. This is a [MASK]. ([MASK]={review, summary}) |
| SST-2 | 2,210 | 2,210 | 2 | It was [MASK]. ([MASK]={great, terrible}) |
| MR | 2,000 | 2,000 | 2 | Simplistic, silly and tedious. It was [MASK]. ([MASK]={great, terrible}) |
| RT | 1,066 | 1,066 | 2 | weird. rewarding. It was [MASK]. ([MASK]={great, terrible}) |
| CR | 2,000 | 2,000 | 2 | I am very pleased so far. It was [MASK]. ([MASK]={great, terrible}) |
| Amz | 400,000 | 3,000 | 2 | It was [MASK]. ([MASK]={great, terrible}) |
| RTE | 277 | 277 | 2 | Most commercial logwood is grown in Honduras, right? [MASK], plants are grown in water or in substance other than soil. ([MASK]={Yes, No}) |
| Open-set tasks LAMA T-REx | 34,039 | 2,983 | - | AVCDH is owned by [MASK]. |
| LAMA Google RE | 5,200 | 1,856 | - | Joshua Mathiot died in [MASK]. |
| KAMEL | 46,800 | 3,000 | - | What is followed by So-Lo? Answer: [MASK]. |
| NQ | 3,610 | 3,000 | - | who sang i ran all the way home? The answer is: [MASK]. |
| TQA | 11,313 | 3,000 | - | Who wrote the opera Carmen? The answer is: [MASK]. |
| TempLAMA22 19 - changed | 3,360 | 3,000 | - | Contributor Covenant is developed by [MASK]. |
| - unchanged | 3,360 | 3,000 | - | Atari 8-bit family is developed by [MASK]. |
| Entity translation | 10,452 | 6,622 | - | The Korean translation of Banpo Bridge is: [MASK]. |
Table 5: Statistics of downstream datasets. |D| and |Ds| indicate the number of test examples on the original data and the subsampled data, respectively. See Appendix B for details.
| Corpus name | Source | |C| | Datasets used |
|---------------------|-----------------------------------------|-------|------------------------|
| En-Wiki+CCNews | Subset of En-Wiki 08/01/2019 and CCNews | 126M | AGN, Yahoo, RTE |
| Subjectivity corpus | Raw IMDB | 15M | Subj |
| Review corpus | Amazon and IMDB | 62M | SST-2, MR, RT, CR, Amz |
| En-Wiki 2019 | En-Wiki 08/01/2019 | 810M | All open-set tasks |
| En-Wiki 2022 | En-Wiki 08/01/2022 | 858M | TempLAMA22 19 |
and Apple) are excluded. We also consider the hard subset of T-Rex from Zhong et al. (2021).
Note that Petroni et al. (2019) only include triples whose object is one token based on BERT (Devlin et al., 2019); however, with a different pretrained model like RoBERTa, entities could be multiple BPE tokens. Entities that are splitted into multiple BPE tokens are more rare entities.
KAMEL (Kalo and Fichtel, **2022)** is another factual probing task as LAMA but with a few key differences to make it more general and broad: (1)
it includes a broader coverage of triples, (2) it removes the constraint that the object is one token based on BERT, (3) it includes objects with literal values, and (4) it has a question answering format.
Natural Questions (NQ, Kwiatkowski et al.
(2019)) and **TriviaQA** (TQA, Joshi et al. (2017))
are two welll-studied open-domain question answering datasets. We use the open-version of NQ (Lee et al., 2019) and TQA where the question is the only input and the model should use its knowledge to answer the question.
TempLAMA22 19 is a task that requires probing knowledge with temporal updates. The task is first introduced by Dhingra et al. (2022) and Jang et al.
(2022); however, we could not use either of existing data as their time split do not match our training.
We therefore create the data by using a script provided by Dhingra et al. (2022) but using the 2019 and the 2022 dumps. We take Wikipedia triples whose relations are available for a template from either Petroni et al. (2019) or Dhingra et al. (2022).
We then include triples whose object entities differ between the 2019 dump and the 2022 dump (due to the entity being updated), or only appear in the 2022 dump (due to the subject or the relation being
| ISO Code | Language | |D| | |Ds| |
|------------|------------|-------|--------|
| zh | Chinese | 3,199 | 1,000 |
| ar | Arabic | 2,013 | 1,000 |
| el | Greek | 1,618 | 1,000 |
| iw | Hebrew | 841 | 841 |
| ru | Russian | 758 | 758 |
| jp | Japanese | 471 | 471 |
| hi | Hindi | 427 | 427 |
| ko | Korean | 418 | 418 |
| pl | Polish | 177 | 177 |
| tr | Turkish | 150 | 150 |
| cs | Czech | 109 | 109 |
| ta | Tamil | 80 | 80 |
| th | Thai | 74 | 74 |
| mn | Mongolian | 64 | 64 |
| ml | Malayalam | 53 | 53 |
| TOTAL | 10,452 | 6,622 | |
added) to the *changed* set. Otherwise, triples are included in the *unchanged* set. We additionally find that many triples are overly difficult because the fact is extremely niche and not really known. We thus filter the data to only include facts that appear in Wikipedia. Specifically, we include triples if the subject has a corresponding Wikipedia page and the object entity appears in that Wikipedia page.
Entity translation requires translating an entity from English to other languages that are not Latin based. While this is mainly to evaluate if the model can generate rare or unseen characters that are not in English, the entity translation task itself is a vital and challenging task in real applications such as machine translation (Babych and Hartley, 2003; Yan et al., 2018) and cross-lingual question answering (Clark et al., 2020; Asai et al., 2021). It is often beyond a series of simple translations of each word, or spelling out its pronunciation (Moore, 2003; Hassan et al., 2007; Sun et al., 2017). For instance, the Korean translation of *Banpo Bridge* in Figure 1 (반 포대교) is not the concatenation of the translations of *Banpo* and *Bridge* (반포 다리).
We first identify a list of 15 non-Latin languages: Arabic (ar), Czech (cs), Greek (el),
Hindi (hi), Hebrew (iw), Japanese (jp), Korean
(ko), Malayalam (ml), Mongolian (mn), Polish
(pl), Russian (ru), Tamil (ta), Thai (th), Turkish (tr), and Chinese (zh). We then implement heuristics to identify entities and their translations from English Wikipedia. Specifically, we parse the first paragraph of each Wikipedia article and pair the found translation with a topic entity of the article. For instance, a Korean translation of Banpo Bridge is found from the first sentence of https://en.wikipedia.org/
wiki/Banpo_Bridge. Per-language statistics are reported in Table 7.
## C Additional Results
Full results on knowledge tasks. Table 8 reports full results on five knowledge tasks. See Figure 6 for an illustration, and Section 6.4 for discussion.
Comparison to few-shot GPT-3. Table 9 compares zero-shot NPM SINGLE and NPM with zeroand four-shot GPT-3. Our zero-shot models outperform 500x larger zero-shot GPT-3 and 7.6x larger 4-shot GPT-3, but lag behind 4-shot GPT-3 that is 19x or larger. We think future work can explore extending our models to a few-shot setup.
Additional qualitative results. Figure 9 depicts predictions from RoBERTa and NPM in topic classification, choosing a label between four candidates: health, computer, *travel* and *politics*. All three examples contain the word *torch*, but with different meanings, e.g., an infectious diseases, a tool, and a computer library. RoBERTa predicts *health* for all of them, while NPM predicts health, *travel* and *computer*, which are all correct predictions.
As in Figure 5, we find that representations from NPM enable better word sense disambiguation:
the pairwise similarities between between different meanings of *torch* are significantly lower than the pairwise similarities between other tokens that share the meaning.
Entity translation given an oracle passage. We evaluate models on the entity translation task where an oracle passage—a passage that is guaranteed to contain the translation information—is provided to the model. Baselines prepend oracle passages to the input, as it does with the retrieve-and-generate approach. NPM uses oracle passages to restrict the search space.
Table 11 reports results. While performance overall increases compared to when the oracle passage is not provided, the overall comparison between models does not change from Table 10: (1)
all monolingual models significantly suffer, except for a couple of languages that are derived from Latin; (2) NPM significantly outperforms all monolingual models; (3) NPM even outperforms 3.4x larger mT5 and 20x larger BLOOM, and approaches 11x larger mT5.
| Model | #Params | C | T-REx | Google RE | KML | TQA | NQ | | | |
|-----------------------------------------------|-----------|------|---------|-------------|-------|-------|------|------|------|------|
| All | UHN | Hard | All | UHN | | | | | | |
| Baselines (encoder-decoder) T5 2.2x | 13.3 | 5.5 | 10.7 | 1.1 | 0.4 | 1.6 | 4.2 | 0.5 | | |
| T5 3B | 8.5x | 12.1 | 8.2 | 11.5 | 2.1 | 0.7 | 3.6 | 9.0 | 2.0 | |
| BM25 + T5 | 2.2x | ✓ | 22.2 | 20.3 | 22.4 | 16.4 | 16.6 | 13.9 | 31.4 | 5.2 |
| BM25 + T5 3B | 8.5x | ✓ | 21.6 | 19.0 | 21.8 | 18.5 | 15.5 | 16.2 | 39.6 | 10.8 |
| Baselines (decoder-only) OPT 2.7B 7.6x | 9.8 | 6.7 | 8.3 | 0.0 | 0.0 | 1.6 | 9.9 | 2.1 | | |
| GPT-3 2.7B | 7.6x | 4.4 | 2.6 | 3.8 | 0.0 | 0.0 | 2.1 | 5.2 | 1.1 | |
| OPT 6.7B | 19x | 11.6 | 9.9 | 10.7 | 0.6 | 0.3 | 3.2 | 20.9 | 4.2 | |
| GPT-3 6.7B | 19x | 8.1 | 5.0 | 6.7 | 0.0 | 0.0 | 2.1 | 12.4 | 3.1 | |
| OPT 13B | 37x | 15.0 | 12.7 | 12.7 | 0.3 | 0.3 | 2.5 | 22.5 | 4.2 | |
| GPT-3 13B | 37x | 16.4 | 13.7 | 15.5 | 0.8 | 0.4 | 2.2 | 25.5 | 5.2 | |
| GPT-3 175B | 500x | 25.7 | 24.1 | 24.7 | 1.1 | 1.0 | 6.5 | 49.0 | 11.4 | |
| BM25 + OPT 2.7B | 7.6x | ✓ | 14.8 | 14.1 | 13.8 | 4.4 | 3.7 | 11.3 | 28.5 | 8.3 |
| BM25 + GPT-3 2.7B | 7.6x | ✓ | 3.5 | 3.4 | 3.6 | 0.1 | 0.1 | 5.2 | 14.5 | 6.1 |
| BM25 + OPT 6.7B | 19x | ✓ | 14.8 | 14.3 | 14.9 | 4.1 | 3.3 | 8.2 | 29.9 | 10.7 |
| BM25 + GPT-3 6.7B | 19x | ✓ | 14.9 | 15.3 | 15.1 | 4.4 | 3.5 | 7.0 | 21.1 | 8.8 |
| BM25 + OPT 13B | 37x | ✓ | 18.9 | 19.1 | 19.3 | 3.8 | 3.1 | 10.6 | 34.0 | 10.7 |
| BM25 + GPT-3 13B | 37x | ✓ | 22.2 | 22.7 | 22.4 | 11.8 | 11.2 | 8.9 | 32.4 | 11.2 |
| BM25 + GPT-3 175B | 500x | ✓ | 32.0 | 31.6 | 31.3 | 11.4 | 11.9 | 12.2 | 44.9 | 6.4 |
| Ours (encoder-only, nonparametric) NPM 1.0x ✓ | 34.5 | 29.0 | 32.1 | 27.9 | 23.0 | 15.6 | 32.2 | 10.8 | | |
| Model | #Params | AGN | SST-2 | | |
|------------------------------------|-----------|--------|---------|------|------|
| 0-shot | 4-shot | 0-shot | 4-shot | | |
| Baselines (Parametric) RoBERTa | x1.0 | 71.3 | - | 84.5 | - |
| GPT-3 2.7B (Zhao et al., 2021) | x7.6 | 44.7 | 43.3 | 57.2 | 59.1 |
| + CC (Zhao et al., 2021) | x7.6 | 63.2 | 71.1 | 71.4 | 79.9 |
| GPT-3 2.7B (Holtzman et al., 2021) | x7.6 | 69.0 | - | 53.8 | 88.1 |
| + PMI (Holtzman et al., 2021) | x7.6 | 67.9 | - | 72.3 | 87.7 |
| GPT-3 6.7B (Holtzman et al., 2021) | x19 | 64.2 | - | 54.5 | 92.9 |
| + PMI (Holtzman et al., 2021) | x19 | 57.4 | - | 80.0 | 79.8 |
| GPT-3 13B (Holtzman et al., 2021) | x37 | 69.8 | - | 69.0 | 85.4 |
| + PMI (Holtzman et al., 2021) | x37 | 70.3 | - | 81.0 | 86.9 |
| GPT-3 175B (Zhao et al., 2021) | x500 | 43.9 | 61.0 | 71.6 | 93.6 |
| + CC (Zhao et al., 2021) | x500 | 73.9 | 85.9 | 75.8 | 94.3 |
| GPT-3 175B (Holtzman et al., 2021) | x500 | 75.4 | - | 63.6 | 89.9 |
| + PMI (Holtzman et al., 2021) | x500 | 74.7 | - | 71.4 | 95.5 |
| Ours (Nonparametric) NPM SINGLE | x1.0 | 74.2 | - | 86.8 | - |
| NPM | x1.0 | 74.5 | - | 87.2 | - |
| RoBERTa A torch infection in pregnancy. The topic is about <mask>. Health Is a torch permitted on board? The topic is about <mask>. Health Health The version of torch is 1.12.0. The topic is about <mask>. | Sim(torch, <m>) Sim(torch, <m>) Sim(torch, <m>) Sim(torch, torch, torch) Sim(<m>, <m>, <m>) = 30.8 = 30.9 = 30.1 = 29.4–30.4 = 29.7–30.9 | |
|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------|--------|
| NPM SINGLE | Sim(torch, <m>) | = 22.7 |
| Sim(torch, <m>) | = 22.1 | |
| Sim(torch, <m>) | = 24.2 | |
| Sim(torch, torch, torch) = 12.3—16.2 Sim(<m>, <m>, <m>) = 15.8—18.0 | | |
| A torch infection in pregnancy. The topic is about <mask>. Health Is a torch permitted on board? The topic is about <mask>. Travel The version of torch is 1.12.0. The topic is about <mask> . Computer Retrieved context for <mask>: ".. But it is still unclear what these findings mean for infant health, especially since early infancy is such an important developmental time" Retrieved context for <mask>: Devices running Windows 8.1 or above support the last version of the app as well. (…) is one of the most popular computer and video game. Retrieved context for <mask>: Travel with dogs airplane travel, Dog travel, road trips, Trip preparation. 4 comments on "Dog friendly travel tips - comfort for both of you!" | | |
Figure 9: Predictions from RoBERTa (baseline) and NPM on a topic classification task (classes={health, computer, travel, politics}). The bottom indicates the context NPM retrieves to fill in [MASK]. On the right, we indicate the token-wise similarity scores. NPM assigns significantly lower scores to the token pairs with distinct meanings than to the token pairs with the similar meaning, e.g., torch (*a disease*) and torch (*a tool*).
| Topic classication: {health, computer, travel, politics} | | | | | | | | | | | | | | | | | | |
|------------------------------------------------------------|---------|--------------------------------------------------------------------------------|------------------------------------------------------------|--------------------|-----------------------------------|--------------------|------|-----|----------|-----|-----|----------|-----|----------|-----|-----|--------|-----|
| Model | #Params | #L | ar | cs | el | hi | iw | jp | ko | ml | mn | pl | ru | ta | th | tr | zh AVG | |
| Baselines, English-only T5 | 2.2x | 0.0 | 0.0 | 0.0 | 0.0 | 0.1 | 0.2 | 0.0 | 0.0 | 0.0 | 1.1 | 0.9 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2 | |
| T5 3B | 8.5x | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 5.6 | 1.3 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5 | |
| OPT 6.7B | 19x | 0.0 | 0.0 | 0.3 | 0.0 | 0.0 | 0.0 | 3.1 | 0.0 | 0.0 | 0.0 | 2.9 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4 | |
| OPT 13B | 37x | 1.5 | 0.0 | 1.2 | 0.7 | 0.0 | 0.0 | 1.4 | 0.0 | 0.0 | 1.1 | 7.4 | 0.0 | 0.0 | 1.3 | 0.1 | 1.0 | |
| BM25 + T5 | 2.2x | 0.0 | 5.5 | 0.3 | 0.2 | 0.5 | 0.0 | 0.2 | 1.9 | 0.0 | 6.8 | 0.8 | 1.2 | 0.0 11.3 | 0.0 | 1.9 | | |
| BM25 + T5 3B | 8.5x | 0.0 12.8 | 0.1 | 0.7 | 0.2 | 0.8 | 0.0 | 0.0 | 1.6 28.8 | 1.7 | 0.0 | 0.0 20.0 | 0.0 | 4.4 | | | | |
| BM25 + OPT 6.7B | 19x | 26.4 54.1 15.5 11.2 11.8 14.4 19.6 | 5.7 | 3.1 47.5 52.5 | 6.2 12.2 32.0 22.7 | 22.3 | | | | | | | | | | | | |
| BM25 + OPT 13B | 37x | 17.3 51.4 24.9 15.5 27.8 12.3 22.0 11.3 | 7.8 45.8 48.2 | 8.8 18.9 34.0 23.3 | 24.6 | | | | | | | | | | | | | |
| Ours, English-only NPM | 1.0x | 51.9 33.0 60.9 63.2 63.7 59.0 60.5 50.9 46.9 33.3 61.2 51.2 60.8 32.7 56.9 | 52.4 | | | | | | | | | | | | | | | |
| References, Multilingual mT5 3.4x | 101 | 0.3 | 1.8 | 1.5 | 0.0 | 0.4 | 1.9 | 0.7 | 0.0 | 0.0 | 1.1 | 4.6 | 2.5 | 1.4 | 3.3 | 0.7 | 1.3 | |
| mT5 XL | 11x | 101 | 4.4 | 3.7 | 4.9 | 6.8 | 0.7 | 2.3 | 4.1 | 1.9 | 4.7 | 5.6 | 8.0 | 5.0 | 0.0 | 6.7 | 2.8 | 4.1 |
| BLOOM 3B | 8.5x | 46 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3 | 0.0 |
| BLOOM 7.1B | 20x | 46 | 0.0 | 0.9 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5 | 0.1 |
| BM25 + mT5 | 3.4x | 101 12.4 22.9 21.6 | 9.8 12.5 28.9 19.1 11.3 18.8 15.8 16.0 17.5 28.4 16.7 33.4 | 19.0 | | | | | | | | | | | | | | |
| BM25 + mT5 XL | 11x | 101 64.4 64.2 54.3 65.6 62.7 55.4 69.4 43.4 62.5 52.0 53.7 37.5 50.0 48.7 65.0 | 56.6 | | | | | | | | | | | | | | | |
| BM25 + BLOOM 3B | 8.5x | 46 24.2 25.7 | 1.7 13.3 15.1 18.5 17.9 | 5.7 | 6.2 21.5 11.1 10.0 27.0 18.0 44.5 | 17.4 | | | | | | | | | | | | |
| BM25 + BLOOM 7.1B | 20x | 46 19.0 49.5 11.4 20.8 | 8.1 30.1 25.4 | 5.7 | 6.2 54.2 29.0 | 6.2 37.8 33.3 53.7 | 26.0 | | | | | | | | | | | |
Table 10: Results on the entity translation task. \#L indicates the number of languages multilingual models are trained on. **Bold** and **Bold** indicate the best among monolingual models and the best including multilingual models, respectively. NPM significantly outperforms all existing monolingual models, and approaches or outperforms larger multilingual models.
| Model | #Params | #L | ar | cs | el | hi | iw | jp | ko | ml | mn | pl | ru | ta | th | tr | zh AVG |
|------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------|--------------------------------------------------------------------------------|----------------------------------------|-----------------------------------|------|------|------|----------|----------|------|----------|----------|------|------|------|------|----------|
| Baselines, English-only T5 2.2x | 0.0 13.8 | 0.7 | 0.9 | 0.6 | 1.1 | 0.5 | 3.8 | 1.6 15.8 | 1.3 | 7.5 | 0.0 16.7 | 0.4 | 4.0 | | | | |
| T5 3B | 8.5x | 0.2 21.1 | 1.0 | 0.7 | 0.7 | 2.3 | 1.2 | 3.8 | 4.7 37.3 | 2.9 | 8.8 | 1.4 30.7 | 0.4 | 7.3 | | | |
| OPT 6.7B | 19x | 24.4 56.9 22.9 15.5 19.7 19.1 32.5 24.5 | 3.1 56.5 60.9 22.5 23.0 46.0 30.2 | 30.5 | | | | | | | | | | | | | |
| OPT 13B | 37x | 20.7 62.4 22.7 15.7 30.9 17.6 36.1 18.9 15.6 56.5 52.2 22.5 35.1 48.7 40.0 | 33.0 | | | | | | | | | | | | | | |
| Ours, English-only NPM 1.0x | 70.3 44.0 76.8 74.0 82.4 71.3 73.2 58.5 59.4 45.2 71.5 68.8 66.2 45.3 74.5 | 65.4 | | | | | | | | | | | | | | | |
| References, Multilingual mT5 3.4x 101 19.4 25.7 30.8 19.0 20.6 33.8 28.2 28.3 40.6 18.6 23.1 30.0 29.7 26.7 37.4 | 27.5 | | | | | | | | | | | | | | | | |
| mT5 XL | 11x | 101 83.2 76.1 69.6 81.5 77.4 68.2 85.2 49.1 67.2 65.5 62.7 51.2 68.9 64.0 79.0 | 69.9 | | | | | | | | | | | | | | |
| BLOOM 3B | 8.5x | 46 51.2 27.5 | 3.1 30.2 34.1 34.0 30.9 11.3 | 7.8 28.2 23.0 17.5 37.8 22.0 70.1 | 28.6 | | | | | | | | | | | | |
| BLOOM 7.1B | 20x | 46 29.6 43.1 12.0 27.6 12.2 32.5 30.9 | 9.4 15.6 59.3 38.1 13.8 43.2 32.0 65.5 | 31.0 | | | | | | | | | | | | | |
Table 11: Results on the entity translation task given an oracle passage. \#L indicates the number of languages multilingual models are trained on. **Bold** and **Bold** indicate the best excluding multilingual models and the best including multilingual models, respectively.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section following Section 6 (No section number).
✓ A2. Did you discuss any potential risks of your work?
Appendix E.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 And 4.
✓ B1. Did you cite the creators of artifacts you used?
We include citations to the pretraining data and pretrained models (Section 3.3 and Appendix B) and evaluation datasets (Section 4.1, 5.1 and Appendix C).
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
The license and terms for use will be released in the open-sourced repo, which we do not include in the submission in order to keep anonymity.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
We discuss how we use the evaluation datasets in detail in Section 4, 5 and Appendix C.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We use standardalized, research data/benchmarks, and checked that the authors of the original papers tried their best to ensure the data does not contain names or uniquely identifies individual people or offtensive content. Nonetheless, we've discussed them in "Potential Risk" (Appendix E).
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
We reported the language of the model/data and their domains in Section 3.3, 4, 5, Appendix B and Appendix C.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
We included statistics of datasets in Appendix C.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
## C ✓ **Did You Run Computational Experiments?** Left Blank.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
We include training details and related information in Section 3.3 and Appendix B.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
We discuss detailed experimental setups in Section 3.3, 4 and 5, Appendix B and C. We evaluated models in a zero-shot setup, thus there is no hyperparameter tuning involved.
✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Since all experiments are about zero-shot evaluation, all results are based on a single run and are deterministic.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
We described and cited packages that we use in the paper. For complete reproducibility, we will also open-source the code, which we do not include in the submission in order to keep anonymity.
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
cao-etal-2023-pay | Pay More Attention to Relation Exploration for Knowledge Base Question Answering | https://aclanthology.org/2023.findings-acl.133 | Knowledge base question answering (KBQA) is a challenging task that aims to retrieve correct answers from large-scale knowledge bases. Existing attempts primarily focus on entity representation and final answer reasoning, which results in limited supervision for this task. Moreover, the relations, which empirically determine the reasoning path selection, are not fully considered in recent advancements. In this study, we propose a novel framework, RE-KBQA, that utilizes relations in the knowledge base to enhance entity representation and introduce additional supervision. We explore guidance from relations in three aspects, including (1) distinguishing similar entities by employing a variational graph auto-encoder to learn relation importance; (2) exploring extra supervision by predicting relation distributions as soft labels with a multi-task scheme; (3) designing a relation-guided re-ranking algorithm for post-processing. Experimental results on two benchmark datasets demonstrate the effectiveness and superiority of our framework, improving the F1 score by 5.8{\%} from 40.5 to 46.3 on CWQ and 5.7{\%} from 62.8 to 68.5 on WebQSP, better or on par with state-of-the-art methods. | # Pay More Attention To Relation Exploration For Knowledge Base Question Answering
Yong Cao1**, Xianzhi Li**1†
, Huiwen Liu2, Wen Dai2**, Shuai Chen**2, Bin Wang2, Min Chen3**, and Daniel Hershcovich**4 1 Huazhong University of Science and Technology 2Xiaomi AI Lab, China.
3School of Computer Science and Engineering, South China University of Technology 4Department of Computer Science, University of Copenhagen
{yongcao_epic,xzli}@hust.edu.cn, [email protected], [email protected]
{liuhuiwen, daiwen, chenshuai3, wangbin11}@xiaomi.com
## Abstract
Knowledge base question answering (KBQA)
is a challenging task that aims to retrieve correct answers from large-scale knowledge bases.
Existing attempts primarily focus on entity representation and final answer reasoning, which results in limited supervision for this task.
Moreover, the relations, which empirically determine the reasoning path selection, are not fully considered in recent advancements. In this study, we propose a novel framework, REKBQA, that utilizes relations in the knowledge base to enhance entity representation and introduce additional supervision. We explore guidance from relations in three aspects, including
(1) distinguishing similar entities by employing a variational graph auto-encoder to learn relation importance; (2) exploring extra supervision by predicting relation distributions as soft labels with a multi-task scheme; (3) designing a relation-guided re-ranking algorithm for post-processing. Experimental results on two benchmark datasets demonstrate the effectiveness and superiority of our framework, improving the F1 score by 5.8% from 40.5 to 46.3 on CWQ and 5.7% from 62.8 to 68.5 on WebQSP,
better or on par with state-of-the-art methods.
## 1 Introduction
Given a question expressed in natural language, knowledge base question answering (KBQA) aims to find the correct answers from a large-scale knowledge base (KB), such as Freebase (Bollacker et al., 2008), Wikipedia (Vrandeciˇ c and Krötzsch ´ ,
2014), DBpeidia (Auer et al., 2007), etc. For example, the question "*Who is Emma Stone's father?*"
can be answered by the fact of "(*Jeff Stone, person.parents, Emma Stone*)". The deployment of KBQA can significantly enhance a system's knowledge, improving performance for applications such as dialogue systems and search engines.
![0_image_0.png](0_image_0.png)
Figure 1: An example of KBQA process. The reasoning begins with the red node and passes through similar entities, which are defined as entities that have similar relations as shown in the upper right box. Besides, key reasoning relations whose tokens (xi) hold overlap between given questions (ti) are important for reasoning.
Early attempts on KBQA (Min et al., 2013; Zhang et al., 2018; Xu et al., 2019) mostly focus on transferring given questions into structured logic forms, which are strictly constrained by the consistent structure of parsed query and KB. To overcome the limitation of the incompleteness of KB,
many approaches (Xiong et al., 2019; Deng et al., 2019; Lan et al., 2021) have been developed that aim to map questions and their related KB entities and relations into embeddings, and define the reasoning process as a similarity retrieval problem, which is called IR-based method. Additionally, some studies (Gao et al., 2022; Liu et al., 2023; Ge et al., 2022) have attempted to learn relation embeddings and then incorporate surrounding relations to represent entities, which successfully reduces the number of parameters needed for the model.
However, most of these works (Han et al., 2021)
primarily focus on final answer reasoning and the representation of entities, while few explore the full utilization of relations in KB. Additionally, for answer reasoning, the supervision signal provided is also only from entities, while we believe that the relations also play an important role in determining the reasoning path and the answer choosing.
We propose a new framework, called RelationEnhanced KBQA (RE-KBQA), to investigate the potential use of relations in KBQA by utilizing an embedding-fused framework. The proposed framework aims to study the role of relations in KBQA in the following three aspects:
Relations for entity representation. We find that similar entities with similar surrounding relations (e.g., the three green circles in the upper right of Figure 1) play an important role in reasoning. To distinguish them, we introduce QAVGAE, a question-answering-oriented variational graph auto-encoder, which learns relation weights through global structure features and represents entities by integrating surrounding relations.
Relations for extra supervision. Multi-hop reasoning is often hindered by weak supervision, as models can only receive feedback from final answers (He et al., 2021). To overcome this limitation, we propose a multi-task scheme by predicting the relation distribution of the final answers as additional guidance, using the same reasoning architecture and mostly shared parameters. As illustrated in Figure 1, the proposed scheme requires the prediction of both the answer "Haitian Creole" and its surrounding relation distribution.
Relations for post-processing. We propose a stem-extraction re-ranking (SERR) algorithm to modify the confidence of candidates, motivated by the fact that relations parsed from given questions are empirically associated with strong reasoning paths. As depicted in the bottom of Figure 1, relations that overlap with a given question will be marked as key reasoning relations, and their confidence will be increased empirically. This allows for re-ranking and correction of the final answers.
In general, our contributions can be summarized as follows. (1) We propose a novel method named Relation Enhanced KBQA (RE-KBQA) by first presenting QA-VGAE for enhanced relation embedding. (2) We are the first to devise a multi-task scheme to implicitly exploit more supervised signals. (3) We design a simple yet effective postprocessing algorithm to correct the final answers, which can be applied to any IR-based method. (4)
Lastly, we conduct extensive experiments on two challenging benchmarks, WebQSP and CWQ to show the superiority of our RE-KBQA over other competitive methods. Our code and datasets are publicly available on Github1.
## 2 Related Work
Knowledge Base Question Answering. Most existing research on KBQA can be categorized into two groups: a). Semantic Parsing (SP)-based methods (Abdelaziz et al., 2021; We et al., 2021; Cui et al., 2022), which transfer questions into logical form, e.g., SPARQL queries, by entity extraction, KB grounding, and structured query generation.
b). Information Retrieval (IR)-based method (Ding et al., 2019; Chen et al., 2019; Wang et al., 2021; Feng et al., 2021; Zhang et al., 2022b), which applies retrieve-and-rank mechanism to reason and score all candidates of the subgraph with advancements in representation learning and ranking algorithms. Apart from the above approaches, recent studies (Xiong et al., 2019; Deng et al., 2019; Lan et al., 2021) also propose several alterations over the reasoning process, such as extra corpus exploration (Xiong et al., 2019), better semantic representation (Zhu et al., 2020; Ge et al., 2021),
dynamic representation (Han et al., 2021), and intermediate supervised signals mining (Qiu et al.,
2020; He et al., 2021). Aiming to tackle limited corpus, some works are devoted to utilizing external resources, such as using pre-trained language models (Unik-QA) (Oguz et al., 2022), retrieving similar documents (CBR-KBQA) (Das et al., 2021), extra corpus (KQA-Pro) (Cao et al., 2022), etc.
Multi-task Learning for KBQA. Multitask learning can boost the generalization capability on a primary task by learning additional auxiliary tasks (Liu et al., 2019) and sharing the learned parameters among tasks (Hwang et al., 2021; Xu et al., 2021). Many recent works have shown impressive results with the help of multi-task learning in many weak supervised tasks such as visual question answering (Liang et al., 2020; Rajani and Mooney, 2018), sequence labeling (Rei, 2017; Yu et al., 2021), text classification (Liu et al., 2017; Yu et al., 2019) and semantic parsing (Hershcovich et al., 2018). In KBQA, auxiliary information is often introduced in the form of artificial "tasks" relying on the same data as the main task (Hershcovich et al., 2018; Ansari et al., 2019; Gu et al., 2021),
rather than independent tasks. This assists the reasoning process and proves to be more effective for the main task. To the best of our knowledge, we are the first to propose a multi-task to assist KBQA
by using mostly shared parameters among tasks, for a balance of effectiveness and efficiency.
![2_image_0.png](2_image_0.png)
## 3 Problem Formulation
Knowledge Base (KB). A knowledge base usually consists of a huge amount of triples: G =
{⟨e, r, e′⟩|(e, e′) ∈ ξ, r *∈ R}*, where ⟨*e, r, e*′⟩ denotes a triple with head entity e, relation r and tail entity e′. ξ and R mean the sets of all entities and relations, respectively. To apply the triples to downstream task, the entities and relations should be firstly embedded as d-dimensional vectors: V =
{⟨Ve, Vr, Ve′⟩|(Ve, Ve′) ∈ Vξ, Vr ∈ VR}.
Knowledge Base Question Answering (KBQA).
Our dataset is formed as question-answer pairs.
Let Q represents the set of given questions and each question q is composed of separated tokens, where Q = {q ∈ Q|q = x1, x2*, ..., x*n}.
Let A (⊆ ξ) represents the correct answers of Q. Thus, the dataset is formulated as D =
{(Q, A)|(q1, a1),(q2, a2), ...,(qm, am)}. To reduce the complexity of reasoning process, we extract question-related head entities eh from q and generate an associated subgraph gsub (∈ Gsub)
within multi-hops walking from eh. Thus, the goal of KBQA is transformed to reason the candidates c
(⊆ ξ) of the highest confidence from gsub, which can be formalized as:
$$c=\arg\operatorname*{max}_{\theta,\phi}r_{\phi}(f_{\theta}(q,g_{s u b})),$$
where fθ(·) and rϕ(·) denote the representation and reasoning network, respectively.
## 4 Our Approach
As discussed in Section 1, we consider three aspects to further boost the performance of KBQA,
including (i) the enhancement of the representation capability, especially for similar entities; (ii) a strategy of mining more supervision signals to guide the training; and (iii) a reasoning path correction algorithm to adjust the ranking results. Below, we shall elaborate on our network architecture (RE-KBQA)
with our solutions to the above issues.
## 4.1 Architecture Overview
Inspired by the neighborhood aggregation strategy, we employ Neural State Machine (NSM) (He et al., 2021) as our backbone model, where entities are denoted by surrounding relations. We assume that the topic entities and the related subgraph are already achieved by preprocessing; see Section 5.2 for the details. Figure 2 shows the main pipeline of our RE-KBQA. Specifically, given a question q, we first employ a question embedding module to encode it into semantic vector.
Here, for a fair comparison with NSM baseline, we follow (He et al., 2021) to adopt Glove (Pennington et al., 2014) to encode q into embeddings
{V
j q }
n j=1 = Glove(x1, x2*, ..., x*n), which is then mapped to hidden states by LSTM:
$$\{h^{\prime},\{h_{j}\}_{j=1}^{n}\}=\mathrm{LSTM}(V_{q}^{1},V_{q}^{2},...,V_{q}^{n}),\quad(2)$$
$$(1)$$
where we set h′as the last hidden state of LSTM
to denote question vector and {hj}
n j=1 denotes the vector of tokens. After obtaining h′and {hj}
n j=1, then we can calculate :
$$q^{(t)}=\psi(s^{(t-1)},h^{\prime}),$$
$$(3)^{\frac{1}{2}}$$
where ψ(·) denotes multi-layer percetron function.
Then, the semantic vector s
(t)at the t-th reasoning step of question q is obtained by:
$$s^{(t)}=\sum_{j=1}^{n}p(\psi(q^{(t)},h_{j}))\cdot h_{j},\qquad\quad(4)$$
where p(·) denotes score function, and s
(0) (∈
R
(|d|)) is initialized randomly.
Next, a *QA-VGAE enhanced representation* module is designed to represent KB elements under the guidance of s
(t). Then, unlike previous works that directly predict final answer via a score function, we introduce a *multi-task learning-fused reasoning* module to further predict an auxiliary signal (i.e., relation distribution). Note that, though we adopt NSM framework to conduct KBQA task, we concentrate on the representation capability enhancement by identifying similar entities, as well as the multi-task learning via supervision signal mining.
At last, to avoid ignoring strong reasoning paths, we further propose a *stem-extraction re-ranking* algorithm to post-process the predictions of our network. Below, we will present the details of three of our proposed contributed modules.
## 4.2 Qa-Vgae Enhanced Representation
Similar entities are defined as entities that are connected mostly by the same edges, and only a small portion of edges are different. For example, as shown in Figure 1, the three nodes marked by dashed circles share almost the same edges, and only the node of "Haiti" holds the relation of
"*Person.Spoken_language*" that is quite important for answering the question. Hence, distinguishing similar entities and identifying key reasoning paths are essential for embedding-fused information retrieval-based methods. Traditional methods like TransE (Bordes et al., 2013) can grasp local information from independent triples within a KB,
but fail to capture the inter-relations between adjacent triple facts. Consequently, they tend to have difficulties in distinguishing similar entities.
To alleviate the above problem, we introduce Question Answering-oriented Variational Graph Auto-Encoder (QA-VGAE) module, as is shown in Figure 3, by assigning different weights to reasoning relations, where the weights are learned by VGAE (Kipf and Welling, 2016). Note that, compared with traditional methods like TransE (Bordes et al., 2013), TransR (Lin et al., 2015), and ComplEx (Trouillon et al., 2016), VGAE achieves superior performance in link prediction task. We thus adopt VGAE in our module to learn weights.
The key insight of this module is to fully learn global structure features by executing graph reconstruction task and constraining the representation as normal distribution, thus promoting the relation
![3_image_0.png](3_image_0.png)
representation to be more discriminating. Finally, by similarity evaluation of the learned representation, we can obtain the prior probability of relation
(PPR) matrix, whose elements denote the conditional probability of relations.
In detail, we first transfer the KB from ⟨*e, r, e*′⟩
(entity-oriented) to ⟨*r, e, r*′⟩ (relation-oriented). In this way, we can then learn PPR matrix via a link prediction task by unsupervised learning.
Specifically, given the connection degrees X (∈
R|nr|×|nr|) of a relation and the adjacency A (∈
R|nr|×|nr|) between relation nodes, where nr denotes the number of relations, we adopt two-layers GCN to learn the mean σ and variance µ of the relation importance distribution, and further compound the relation representation Z as :
$$Z=\mathrm{GCN}_{\mu}(X,A)\oplus\mathrm{GCN}_{\sigma}(X,A),$$
where ⊕ is compound function. Then, PPR matrix Pr is obtained by distribution similarity evaluation:
$\square$
$${\mathcal{P}}_{r}=\operatorname{Softmax}(Z\cdot Z^{\top}),$$
$$(6)$$
$\in\;\;\mathbb{R}^{|n_r|\times|n_r|}.\;\;$ Please refer to Ap
⊤, (6)
where Pr ∈ R|nr|×|nr|. Please refer to Appendix A.1 for loss function LP of QA-VGAE.
Next, we denote KB elements as d-dim vectors, Vξ(∈R|ne|×|d|) as entity vectors and VR(∈
R|nr|×|d|) as relation vectors, where ne is the number of enities. We denote candidate vectors VC as:
$$V_{\mathcal{C}}=W_{\mathcal{C}}\cdot{\mathcal{P}}_{r}\cdot V_{\mathcal{R}},\qquad\qquad(7)$$
where WC ∈ R|nc|×|nr| denotes the surrounding relation matrix of entities and nc denotes number of candidates.
Then, to integrate semantic vectors s
(t) of given question and the history vector, we update Vc as:
$$\hat{V}_{c}^{(t)}=\sigma([V_{c}^{(t-1)};s^{(t)}\odot W_{r}\odot V_{c}]),\quad\quad(8)$$
where V
(t)
c (∈ VC) is candidate vector at time step t, σ(·) is the linear layer, [; ] is the concatenation operation, ⊙ is element-wise multiplication, and Wr (∈ R|d|) is the matrix of learnable parameter.
## 4.3 Multi-Task Learning-Fused Reasoning
The purpose of this module is to conduct answer reasoning from candidate vector Vˆ
(t)
c . To this end, we jointly combine the reasoning paths implicitly among candidates by utilizing the Transformer
(Vaswani et al., 2017), formalized as:
$$V_{c}^{(t)}=\mathrm{Transformer}\left(\left[\hat{V}_{c_{1}}^{(t)};\hat{V}_{c_{2}}^{(t)};...;\hat{V}_{c_{l}}^{(t)}\right]\right),\tag{9}$$
where {Vˆ
ci }
i=1 denotes all the candidate vectors.
However, like most existing works (Deng et al.,
2019; Lange and Riedmiller, 2010), learning from
the final answers as the feedback tends to make the model hard to train, due to the limited supervision.
How to introduce extra supervision signals into
network model is still an open question. In our
method, we introduce a new multi-task to learn the
distribution of candidates' surrounding relations,
$$\{{\hat{V}}^{(t)}\}^{l}$$
namely surrounding relations reasoning. The key
idea is to leverage relations around final answer as
extra supervisions to promote the performance, and also modify reasoning paths implicitly.
Specifically, motivated by weakly-supervised
learning methods, we assume the reasoning process starts from topic entity's surrounding relations
S
(0)
R(initialized along with subgraph generation),
and during reasoning, we can easily obtain next
surrounding relations' distribution by:
$$S_{R}^{(t)}=\sigma\left(\left[(s^{(t)}\cdot V_{\mathcal{R}}^{\top(t)};S_{R}^{(t-1)}\right]\right),\tag{10}$$
where S
(t)
Rdenotes the surrounding relations of candidates at step t and V
⊤(t)
R is the transpose of VR at step t. Note that, introducing the multi-task will not increase the complexity of our method obviously, since the number of relations is far fewer than that of entities in most cases, and the multitask shares most parameters with the main task.
In this way, there are two optimization goals of KBQA task, i.e., correct answer retrieving and
![4_image_0.png](4_image_0.png)
surrounding relations prediction. We predict the final answers' possibilities by:
$$p_{c}^{(t)}=\mathrm{Softmax}\left(V_{c}^{(t)}\cdot W_{c}^{(t)}\right),$$
$$(11)$$
, (11)
where p
(t)
c is the confidence of predicted answers.
Also, the relation distribution confidence p
(t)
r is:
$$p_{r}^{(t)}=\mathrm{Softmax}\left(S_{R}^{(t)}\cdot W_{r}^{(t)}\right),$$
$$(12)$$
$\left(t\right)$ ?
, (12)
where W
(t)
c and W
(t)
r are learnable parameters.
Then, the answer retriving loss Lc and the relation prediction loss Lr can be calculated by:
$$\begin{array}{l}{{{\mathcal{L}}_{c}=}}\mathrm{KL}(p_{c}^{(t)},p_{c}^{(*)})}\\ {{{\mathcal{L}}_{r}=}}\mathrm{KL}(p_{r}^{(t)},p_{r}^{(*)}),}\end{array}$$
$$(13)$$
where pc
(∗)and pr
(∗) denote the ground truths, KL
is the KL divergence. Thus, the final total loss is:
$${\mathcal{L}}=\lambda{\mathcal{L}}_{c}+(1-\lambda){\mathcal{L}}_{r},$$
$$(14)$$
where λ denotes a hyper-parameter.
## 4.4 Stem-Extraction Re-Ranking
A limitation of embedding-fused KBQA methods is that the reasoning path is uncontrollable as the complete reasoning path is a blackbox in information retrieval-based methods. For example, in the question "*What is the Milwaukee Brewers mascot?*",
the strongly related path "*education.mascot*" may be missed due to limited representation capability.
However, this weakness can be easily addressed by semantic parsing-based methods by analyzing the semantic similarity of key elements of questions and relations and constraining the reasoning path. Inspired by this observation, we propose a stem-extraction re-ranking (SERR) algorithm for post-processing. The key idea is to stem-match and re-rank the candidates after obtaining candidates and their confidence from our network.
In detail, we design three operators to execute the re-ranking as shown in Algorithm 1: stemmer F(·), modifier M(·), and re-ranker R(·). These operators are used to extract stems from relations or given questions, modify candidates' confidence, and then re-rank the candidates. As shown in Figure 4, given question and candidate predictions, we first use F(·) to process all the relations of freebase relations and questions. Then, we generate a relation candidates pool by matching the stem pool of the question with the relation stems. This allows us to compare the subgraph of the given question with pseudo-facts produced by given topic entities and candidates, respectively. Finally, according to the comparison, M(·) and R(·) are employed to conduct the re-ranking process.
It is worth noting that, in our work, we directly use stem extraction method rather than similarity calculation to re-rank. The insight behind this choice is that, it is unnecessary to consider semantic features again, since we have already injected the question semantic information into our encoded semantic vector s
(t), which means that the model is already equipped with semantic clustering capability. And obviously, stem extraction costs fewer computation resources, as proved in Appendix A.2.
Also, our SERR can be migrated to other models as a plug-in and independent module.
## 5 Experiments And Results 5.1 Datasets
We conduct experiments on two popular benchmark datasets, including WebQuestionSP (Yih et al., 2015) and ComplexWebQuestions (Talmor and Berant, 2018). Specifically, WebQuestionSP
(abbr. WebQSP) is composed of simple questions that can be answered within two hops reasoning, which is constructed based on Freebase (Bollacker et al., 2008). In contrast, ComplexWebQuestions
(abbr. CWQ) is larger and more complicated, where the answers require multi-hop reasoning over several KB facts. The detailed statistics of the two datasets are summarized in Table 1.
## Algorithm 1 Stem Extraction Re-Ranking
Input: natural language question Q, candidates C,
confidence pC, relation set R.
Output: updated candidates C′and confidence p′C
.
1: <* Step 1: Build Relation Trie Ps *>
2: *∅ → P*s 3: **for all** r in R do 4: index i,stem s = F(r) 5: Ps.update(⟨*i, s*⟩)
6: **end for**
7: **for all** {*q, c, p*c} in {Q, C, pC} do 8: <* Step 2: Extract Stem of Q*>
9: tokenize q → Pq 10: F(Pq) → Pe stem 11: <* Step 3: Re-Ranking c and pc*>
12: rc = match(P
e stem,Ps)
13: generate P = ⟨e,Ps(rc), e′⟩ 14: generate P′ = ⟨e,Ps(rc)*⟩ ∪ ⟨*e′,Ps(rc)⟩
15: **for all** p in P ∪ P′ do 16: if p in gsub and p in P **then**
17: M(pc, h1)
18: **end if**
19: if p in gsub and p in P′**then**
20: M(pc, h2)
21: **end if**
22: **end for**
23: c′ = R(c) and p′c = R(pc)
24: **end for**
| Dataset | Train | Valid | Test | Entities | Relations |
|-----------|---------|---------|--------|------------|-------------|
| WebQSP | 2,848 | 250 | 1,639 | 259,862 | 6,105 |
| CWQ | 27,639 | 3,519 | 3,531 | 598,564 | 6,649 |
Table 1: Statistics of WebQSP and CWQ datasets. Note that, *Entities* and *Relations* denote all the entities and relations covered in the subgraph respectively.
## 5.2 Experimental Setting
Basic setting. To make a fair comparison with other methods, we follow existing works (Sun et al.,
2019, 2018; He et al., 2021) to process datasets, including candidates generation by PageRank-Nibble algorithm and subgraph construction within threehops by retrieving from topic entities. We set the learning rate as 8e−4 and decay it linearly throughout iterations on both datasets. We set the number of training epoch on WebQSP and CWQ as 200 and 100, respectively. For better reproducibility, we give all the parameter settings in Appendix A.3.
Baselines. We compare our method with multiple representative methods, including semantic pars-
| Models | WebQSP | CWQ | | | | | |
|--------------------------------|----------|--------|-------|------|-----------------|--------|-----|
| Hits@1 | F1 | Hits@1 | F1 | | | | |
| SP-Based Method | | | | | | | |
| SPARQA* (Sun et al., 2020) | - | - | 31.6 | - | | | |
| QGG* (Lan and Jiang, 2020) | - | 74.0 | 44.1 | 40.4 | | | |
| GNN-KBQA* (Hou et al., 2022) | 68.5 | 68.9 | - | - | | | |
| IR-Based Method | | | | | | | |
| KV-Mem† (Miller et al., 2016) | 46.6 | 34.5 | 18.4 | 15.7 | | | |
| EmbKGQA† (Saxena et al. 2020) | 66.6 | - | 32.0 | - | | | |
| GraftNet† (Sun et al., 2018) | 66.4 | 60.4 | 36.8 | 32.7 | | | |
| PullNet* (Sun et al. 2019) | 68.1 | - | 45.9 | - | | | |
| ReTraCk∗ (Chen et al., 2021) | 71.6 | 71.0 | - | - | | | |
| NSM† (He et al., 2021) | 68.5 | 62.8 | 46.3 | 42.4 | | | |
| BiNSM* (He et al., 2021) | 74.3 | 67.4 | 48.8 | 44.0 | | | |
| SR-KBQA* (Zhang et al., 2022a) | 69.5 | 64.1 | 50.2 | 47.1 | | | |
| RNG-KBQA* (Ye et al., 2022) | - | 75.6 | - | - | | | |
| Ours | | | | | | | |
| RE-KBQAb | 68.7 | 62.8 | 46.8 | 40.5 | | | |
| RE-KBQA | 74.6 | 68.5 | 50.3 | 46.3 | Different cases | WebQSP | CWQ |
| Hits@1 | F1 | Hits@1 | F1 | | | | |
| RE-KBQAb | 68.7 | 62.8 | 46.8 | 40.5 | | | |
| with QA-VGAE | 73.4 | 67.7 | 48.2 | 45.0 | | | |
| 4.7 ↑ | 4.9 ↑ | 1.4 ↑ | 4.5 ↑ | | | | |
| with AxLr | 72.4 | 68.4 | 47.7 | 42.5 | | | |
| 3.7 ↑ | 5.6 ↑ | 0.9 ↑ | 2.0 ↑ | | | | |
| with SERR | 72.0 | 65.5 | 47.3 | 41.5 | | | |
| 3.3 ↑ | 2.7 ↑ | 0.5 ↑ | 1.0 ↑ | | | | |
| RE-KBQA | 74.6 | 68.5 | 50.3 | 46.3 | | | |
| 5.9 ↑ | 5.7 ↑ | 3.5 ↑ | 5.8 ↑ | | | | |
| Table 3: Comparing our full pipeline (bottom row) with various cases in the ablation study. The cells with different background colors reveal the improvement over our backbone network RE-KBQAb. | | | | | | | |
ing (SP)-based methods and information retrieval
(IR)-based methods. SPARQA (Sun et al., 2020)
and QGG (Lan and Jiang, 2020) belong to the former category, which focuses on generating optimal query structures. Besides, KV-Mem (Miller et al.,
2016), EmbedKGQA (Saxena et al., 2020), GraftNet (Sun et al., 2018), PullNet (Sun et al., 2019),
ReTraCk (Chen et al., 2021) and BiNSM (He et al.,
2021) are all IR-based methods, which are also the focus of our comparison.
Evaluation metrics. To fully evaluate KBQA
performance, we should compare both the retrieved and ranked candidates with correct answers. To this end, we employ the commonly-used F1 score and Hit@1. F1 score measures whether the retrieved candidates are correct, while Hit@1 evaluates whether the ranked candidate of the highest confidence is in answer sets.
## 5.3 Comparison With Others
We first compare our RE-KBQA against the aforementioned baselines on two datasets and the results are reported in Table 2. Note that, REKBQAb indicates our backbone network without three modules, i.e., QA-VGAE, multi-task learning and SERR. Clearly, even using our backbone network, it already outperforms most baselines on two datasets, which is benefited from the semantic guidance of given questions and the reasoning mechanisms. Further, as shown in the bottom row, our full pipeline achieves the highest values on both datasets over both evaluation metrics.
Particularly, compared with the results produced by RE-KBQAb, our full method improves more on CWQ dataset, which has increased by 3.5 and 5.8 in terms of Hit@1 and F1, showing that our contributions can indeed boost the multi-hop reasoning process. Besides, RE-KBQA also obtains good results on simple questions (i.e., WebQSP dataset),
especially a 5.7 increase in F1 score, which reveals that the model can recall more effective candidates.
As shown in Table 2, we can observe that the SP-based methods (i.e., SPARQA and QGG) show a good performance in WebQSP, but perform worse in complicated questions, which reveals that SPbased methods are still weak in multi-hop reasoning. Similarly, traditional embedding methods, i.e.,
KV-Mem, EmbedKGQA, and GraftNet, also perform better in simple questions than in complex ones. Though PullNet and BiNSM show good multi-hop reasoning capacity, the extra corpora analysis and bi-directional reasoning mechanism inevitably increase the complexity of these networks.
Apart from above methods, some attempts are conducted on utilizing additional resources for task enhancement recently. As shown in Table 2 *reference*, CBR-KBQA relies on expensive largescale extra human annotations and Roberta pretrained model (PLM), Unik-QA tries to retrieve one-hundred extra context passages for relations in KB and T5-base (PLM), and KQA-Pro uses a large-scale dataset for pre-training with the help of explicit reasoning path annotation. While promising performance has been achieved through these methods, expensive human annotation costs and model efficiency also need to be concerned.
## 5.4 Network Component Analysis
To evaluate the effectiveness of each major component in our method, we conducted a comprehensive ablation study. In detail, similar to Section 5.3, we remove all three components and denote the backbone network as RE-KBQAb. Then, we add QA-VGAE (Section 4.2), multi-task learning (Section 4.3), and SERR (Section 4.4) back on REKBQAb, respectively. In this way, we constructed totally four network models and re-trained each model separately using the same settings of our RE-KBQA model. Table 3 shows the results. By comparing different cases with the bottom-most row (our full pipeline), we can see that each component contributes to improving the performance on both datasets. More ablation experiments can be found in Appendix. Below, we shall discuss the effect of each module separately.
Effect of QA-VGAE. From the results of Table 3, we can observe that the improvements of using QA-VGAE are more remarkable than using the other two modules, demonstrating that the QA-VGAE is more helpful to boost the reasoning process for both simple and complex questions.
Besides quantitative comparison, we also tried to reveal its effect in a visual manner. Here, we adopt T-SNE to visualize the relation vectors. Figure 5 shows a typical embedding distribution before and after QA-VGAE training. For a clear visualization, we randomly select some relations related to a case "*What is the capital of Austria?*".
The orange nodes represent relations close to "*location*", such as "*location.country.capital*", "*location.country.first_level_divisions*", etc., and the blue nodes denote the relations that are not covered by the question subgraph, which we call far relations. Obviously, after using QA-VGAE, the related relations (orange nodes in (b)) tend to get closer and the other nodes get farther.
Effect of multi-task learning. As shown in Table 3, the multi-learning module shows better performance in simple questions (see WebQSP
dataset), since the relation distribution is denser than candidates distribution, thus causing the prediction to be more complicated along with the increase of reasoning steps. To fully explore the effect of this module, we study different loss fusion weights and the results are shown in Figure 6, where a larger λ (range from 0.1 to 1.0, and we discard the setting of 0.0 for its bad performance)
![7_image_0.png](7_image_0.png)
![7_image_1.png](7_image_1.png)
denotes a more weighted loss of main task. Clearly, only designing the primary task or auxiliary task is not optimal for KBQA, and the best setting of λ is 0.1 and 0.5 for the two datasets. An interesting observation is that the best Hit@1 is obtained with lower lambda while the best F1 score is obtained with higher lambda in each dataset. We claim that it is caused by the different goals of Hit@1 and F1 metrics, that is, Hit@1 shows whether the top one candidate is found while F1 score evaluates whether most candidates are found.
Effect of SERR. This module is lightweight
(see Appendix A.2 for inference time) yet effective, especially for simple questions; see Table 3. Intuitively, the stem extraction for key paths is quite effective for questions that rely on directconnected facts. In contrast, stem extraction for complex questions relies more on the startpoint and endpoint. Figure 7(a) further shows an example result of SERR module, which proves that it can effectively identify close connected facts of a given question and re-rank the candidates.
## 5.5 Case Study
At last, we show a case result produced by our REKBQA; see Figure 7(b). Given the question "What are the movies that had Tupac in them and which were filmed in New York City?", our method first
![8_image_0.png](8_image_0.png)
![8_image_1.png](8_image_1.png)
Figure 7: Case analysis of multi-hop reasoning process.
embedded the question into vectors and retrieve related subgraphs. Then, by utilizing the promotion of our proposed QA-VGAE and multi-task learning, we can use the trained model and obtain the candidates of "*Murder Was the Case*", "Nothing but Trouble", etc, and thanks to the SERR algorithm, our reasoning process can have a chance to re-rank the candidates, thus boosting its performance. Finally, we output *Juice* and *Above the Rim* as the correct answers. For similarity entity identification, SERR in other methods as a plug-in and more case results, please refer to Appendix A.2 and A.5.
## 6 Conclusion
In this paper, we proposed a novel framework, namely RE-KBQA, with three novel modules for knowledge base question answering, which are QAVGAE to explore the relation promotion for entity representation, multi-task learning to exploit relations for more supervisions, and SERR to postprocess relations to re-rank candidates. Extensive experiments validate the superior performance of our method compared with state-of-the-art IRbased approaches.
## 7 Limitations
While good performance has been achieved, there are still limitations in our work. First, though QAVGAE extracts enhanced features and are fast to train, it is an independent module from the main framework. Second, as a post-processing step, the performance of SERR module on simple question is better than that of complex questions.
In the future, we would like to explore the possibility of fusing relation constraints into the representation module directly and inject strong facts identification mechanism as guidance signal of multi-hop reasoning process, aiming to integrate QA-VGAE and SERR into the main framework.
## Acknowledgments
Thanks to the anonymous reviewers for their helpful feedback. We gratefully acknowledge the insightful suggestions from Zeqi Tan. This work is supported by the China National Natural Science Foundation No. 62202182. Yong Cao is supported by China Scholarship Council (No.
202206160052) and the Zhejiang Lab's International Talent Fund for Young Professionals.
## References
Ibrahim Abdelaziz, Srinivas Ravishankar, Pavan Kapanipathi, Salim Roukos, and Alexander Gray. 2021.
A semantic parsing and reasoning-based approach to knowledge base question answering. In *Proceedings of the AAAI Conference on Artificial Intelligence*,
volume 35, pages 15985–15987.
Ghulam Ahmed Ansari, Amrita Saha, Vishwajeet Kumar, Mohan Bhambhani, Karthik Sankaranarayanan, and Soumen Chakrabarti. 2019. Neural program induction for kbqa without gold programs or query annotations. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence(IJCAI), pages 4890–4896.
Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007.
Dbpedia: A nucleus for a web of open data. In The semantic web, pages 722–735. Springer.
Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human
knowledge. In *Proceedings of the 2008 ACM SIGMOD international conference on Management of* data, pages 1247–1250.
Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko.
2013. Translating embeddings for modeling multirelational data. *Advances in neural information processing systems*, 26.
Shulin Cao, Jiaxin Shi, Zijun Yao, Xin Lv, Jifan Yu, Lei Hou, Juanzi Li, Zhiyuan Liu, and Jinghui Xiao. 2022.
Program transfer for answering complex questions over knowledge bases. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8128–
8140, Dublin, Ireland. Association for Computational Linguistics.
Shuang Chen, Qian Liu, Zhiwei Yu, Chin-Yew Lin, JianGuang Lou, and Feng Jiang. 2021. ReTraCk: A flexible and efficient framework for knowledge base question answering. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations, pages 325–336, Online. Association for Computational Linguistics.
Zi-Yuan Chen, Chih-Hung Chang, Yi-Pei Chen, Jijnasa Nayak, and Lun-Wei Ku. 2019. UHop: An unrestricted-hop relation extraction framework for knowledge-based question answering. In *Proceedings of the 2019 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 345–356, Minneapolis, Minnesota. Association for Computational Linguistics.
Ruixiang Cui, Rahul Aralikatte, Heather Lent, and Daniel Hershcovich. 2022. Compositional generalization in multilingual semantic parsing over Wikidata. *Transactions of the Association for Computational Linguistics*, 10:937–955.
Rajarshi Das, Manzil Zaheer, Dung Thai, Ameya Godbole, Ethan Perez, Jay Yoon Lee, Lizhen Tan, Lazaros Polymenakos, and Andrew McCallum. 2021. Casebased reasoning for natural language queries over knowledge bases. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 9594–9611, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Yang Deng, Yuexiang Xie, Yaliang Li, Min Yang, Nan Du, Wei Fan, Kai Lei, and Ying Shen. 2019. Multitask learning with multi-view attention for answer selection and knowledge base question answering. In *Proceedings of the AAAI Conference on Artificial* Intelligence, volume 33, pages 6318–6325.
Jiwei Ding, Wei Hu, Qixin Xu, and Yuzhong Qu. 2019.
Leveraging frequent query substructures to generate
formal queries for complex question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2614–
2622, Hong Kong, China. Association for Computational Linguistics.
Yu Feng, Jing Zhang, Gaole He, Wayne Xin Zhao, Lemao Liu, Quan Liu, Cuiping Li, and Hong Chen.
2021. A pretraining numerical reasoning model for ordinal constrained question answering on knowledge base. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 1852–
1861, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Yunjun Gao, Xiaoze Liu, Junyang Wu, Tianyi Li, Pengfei Wang, and Lu Chen. 2022. Clusterea: Scalable entity alignment with stochastic training and normalized mini-batch similarities. In KDD, pages 421–431.
Congcong Ge, Xiaoze Liu, Lu Chen, Baihua Zheng, and Yunjun Gao. 2021. Make it easy: An effective end-to-end entity alignment framework. In *SIGIR*,
pages 777–786.
Congcong Ge, Xiaoze Liu, Lu Chen, Baihua Zheng, and Yunjun Gao. 2022. Largeea: Aligning entities for large-scale knowledge graphs. *PVLDB*, 15(2):237–
245.
Yu Gu, Sue Kase, Michelle Vanni, Brian Sadler, Percy Liang, Xifeng Yan, and Yu Su. 2021. Beyond iid:
three levels of generalization for question answering on knowledge bases. In Proceedings of the Web Conference 2021, pages 3477–3488.
Jiale Han, Bo Cheng, and Xu Wang. 2021. Twophase hypergraph based reasoning with dynamic relations for multi-hop kbqa. In Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence, pages 3615–3621.
Gaole He, Yunshi Lan, Jing Jiang, Wayne Xin Zhao, and Ji-Rong Wen. 2021. Improving multi-hop knowledge base question answering by learning intermediate supervision signals. In Proceedings of the 14th ACM
International Conference on Web Search and Data Mining, pages 553–561.
Daniel Hershcovich, Omri Abend, and Ari Rappoport.
2018. Multitask parsing across semantic representations. In *Proceedings of the 56th Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 373–385, Melbourne, Australia. Association for Computational Linguistics.
Xia Hou, Jintao Luo, Junzhe Li, Liangguo Wang, and Hongbo Yang. 2022. A novel knowledge base question answering method based on graph convolutional network and optimized search space. *Electronics*,
11(23):3897.
Dasol Hwang, Jinyoung Park, Sunyoung Kwon, KyungMin Kim, Jung-Woo Ha, and Hyunwoo J Kim.
2021. Self-supervised auxiliary learning for graph neural networks via meta-learning. *arXiv preprint* arXiv:2103.00771.
Thomas N Kipf and Max Welling. 2016. Variational graph auto-encoders. arXiv preprint arXiv:1611.07308.
Yunshi Lan, Gaole He, Jinhao Jiang, Jing Jiang, Wayne Xin Zhao, and Ji-Rong Wen. 2021. A survey on complex knowledge base question answering:
Methods, challenges and solutions. arXiv preprint arXiv:2105.11644.
Yunshi Lan and Jing Jiang. 2020. Query graph generation for answering multi-hop complex questions from knowledge bases. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 969–974, Online. Association for Computational Linguistics.
Sascha Lange and Martin Riedmiller. 2010. Deep autoencoder neural networks in reinforcement learning.
In The 2010 international joint conference on neural networks (IJCNN), pages 1–8. IEEE.
Zujie Liang, Weitao Jiang, Haifeng Hu, and Jiaying Zhu. 2020. Learning to contrast the counterfactual samples for robust visual question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 3285–3292, Online. Association for Computational Linguistics.
Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In *Twentyninth AAAI conference on artificial intelligence*.
Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2017.
Adversarial multi-task learning for text classification.
arXiv preprint arXiv:1704.05742.
Shikun Liu, Andrew Davison, and Edward Johns. 2019.
Self-supervised generalisation with meta auxiliary learning. Advances in Neural Information Processing Systems, 32.
Xiaoze Liu, Junyang Wu, Tianyi Li, Lu Chen, and Yunjun Gao. 2023. Unsupervised entity alignment for temporal knowledge graphs. In WWW.
Alexander Miller, Adam Fisch, Jesse Dodge, AmirHossein Karimi, Antoine Bordes, and Jason Weston.
2016. Key-value memory networks for directly reading documents. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1400–1409, Austin, Texas. Association for Computational Linguistics.
Bonan Min, Ralph Grishman, Li Wan, Chang Wang, and David Gondek. 2013. Distant supervision for relation extraction with an incomplete knowledge base.
In *Proceedings of the 2013 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 777–782, Atlanta, Georgia. Association for Computational Linguistics.
Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Schlichtkrull, Sonal Gupta, Yashar Mehdad, and Scott Yih. 2022.
UniK-QA: Unified representations of structured and unstructured knowledge for open-domain question answering. In *Findings of the Association for Computational Linguistics: NAACL 2022*, pages 1535–1546, Seattle, United States. Association for Computational Linguistics.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In *Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 1532–1543, Doha, Qatar.
Association for Computational Linguistics.
Yunqi Qiu, Yuanzhuo Wang, Xiaolong Jin, and Kun Zhang. 2020. Stepwise reasoning for multi-relation question answering over knowledge graph with weak supervision. In *Proceedings of the 13th International* Conference on Web Search and Data Mining, pages 474–482.
Nazneen Fatema Rajani and Raymond Mooney. 2018.
Stacking with auxiliary features for visual question answering. In *Proceedings of the 2018 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2217–2226, New Orleans, Louisiana. Association for Computational Linguistics.
Marek Rei. 2017. Semi-supervised multitask learning for sequence labeling. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2121–
2130, Vancouver, Canada. Association for Computational Linguistics.
Apoorv Saxena, Aditay Tripathi, and Partha Talukdar.
2020. Improving multi-hop question answering over knowledge graphs using knowledge base embeddings.
In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 4498–
4507, Online. Association for Computational Linguistics.
Haitian Sun, Tania Bedrax-Weiss, and William Cohen.
2019. PullNet: Open domain question answering with iterative retrieval on knowledge bases and text.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2380–
2390, Hong Kong, China. Association for Computational Linguistics.
Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, and William Cohen.
2018. Open domain question answering using early fusion of knowledge bases and text. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4231–4242, Brussels, Belgium. Association for Computational Linguistics.
Yawei Sun, Lingling Zhang, Gong Cheng, and Yuzhong Qu. 2020. Sparqa: skeleton-based semantic parsing for complex questions over knowledge bases. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8952–8959.
Alon Talmor and Jonathan Berant. 2018. The web as a knowledge-base for answering complex questions. arXiv preprint arXiv:1803.06643.
Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In *International conference on machine learning*, pages 2071–
2080. PMLR.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing* systems, 30.
Denny Vrandeciˇ c and Markus Krötzsch. 2014. Wiki- ´
data: a free collaborative knowledgebase. *Communications of the ACM*, 57(10):78–85.
Zhiguo Wang, Patrick Ng, Ramesh Nallapati, and Bing Xiang. 2021. Retrieval, re-ranking and multi-task learning for knowledge-base question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 347–357, Online. Association for Computational Linguistics.
Peiyun We, Yunjie Wu, Linjuan Wu, Xiaowang Zhang, and Zhiyong Feng. 2021. Modeling global semantics for question answering over knowledge bases.
In 2021 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE.
Wenhan Xiong, Mo Yu, Shiyu Chang, Xiaoxiao Guo, and William Yang Wang. 2019. Improving question answering over incomplete KBs with knowledgeaware reader. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4258–4264, Florence, Italy. Association for Computational Linguistics.
Kun Xu, Yuxuan Lai, Yansong Feng, and Zhiguo Wang.
2019. Enhancing key-value memory neural networks for knowledge based question answering. In *Proceedings of the 2019 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1
(Long and Short Papers), pages 2937–2947, Minneapolis, Minnesota. Association for Computational Linguistics.
Lian Xu, Wanli Ouyang, Mohammed Bennamoun, Farid Boussaid, Ferdous Sohel, and Dan Xu. 2021. Leveraging auxiliary tasks with affinity learning for weakly supervised semantic segmentation. In *Proceedings* of the IEEE/CVF International Conference on Computer Vision, pages 6984–6993.
Xi Ye, Semih Yavuz, Kazuma Hashimoto, Yingbo Zhou, and Caiming Xiong. 2022. RNG-KBQA: Generation augmented iterative ranking for knowledge base question answering. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6032–6043, Dublin, Ireland. Association for Computational Linguistics.
Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In *Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics* and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 1321–1331, Beijing, China. Association for Computational Linguistics.
Jiaxin Yu, Wenyuan Liu, Yongjun He, and Chunyue Zhang. 2021. A mutually auxiliary multitask model with self-distillation for emotion-cause pair extraction. *IEEE Access*, 9:26811–26821.
Shanshan Yu, Jindian Su, and Da Luo. 2019. Improving bert-based text classification with auxiliary sentence and domain knowledge. *IEEE Access*, 7:176600–
176612.
Jing Zhang, Xiaokang Zhang, Jifan Yu, Jian Tang, Jie Tang, Cuiping Li, and Hong Chen. 2022a. Subgraph retrieval enhanced model for multi-hop knowledge base question answering. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5773–
5784, Dublin, Ireland. Association for Computational Linguistics.
Jinhao Zhang, Lizong Zhang, Bei Hui, and Ling Tian.
2022b. Improving complex knowledge base question answering via structural information learning.
Knowledge-Based Systems, page 108252.
Yuyu Zhang, Hanjun Dai, Zornitsa Kozareva, Alexander J Smola, and Le Song. 2018. Variational reasoning for question answering with knowledge graph. In Thirty-second AAAI conference on artificial intelligence.
Shuguang Zhu, Xiang Cheng, and Sen Su. 2020.
Knowledge-based question answering by tree-tosequence learning. *Neurocomputing*, 372:64–72.
## A Appendix A.1 Qa-Vgae Training
In this section, we introduce the details of the QAVGAE training procedure and demonstrate its effectiveness.
Training Goal. We adopt encoder-decoder models to conduct relation reconstruction tasks. Given the prepared adjacent matrix A, and feature matrix X, we use a two-layer GCN as a distribution learning model to estimate its mean and variance. The training loss function is formalized as:
$$\mathcal{L}_{P}=\mathbb{E}_{q(Z|X,A)}[\log p(A\,|\,Z)]-\text{KL}(q(X,A),p(Z))\tag{15}$$ where $Z$ is calculated by Equation 5, KL is the
Kullback-Leibler divergence, q(·) and p(·) denotes the encoder and decoder respectively, please refer to Kipf and Welling (2016) for more details.
Settings. Specifically, A is defined as the matrix of neighborhood relations between nodes, where we set A(*i, j*) as 1 if there is a connection between relation ri and rj , and 0 for no connection. X is the feature matrix defined as the connectivity, which is accumulated as the number of edges between two nodes, aiming to show the importance of a relation.
We set an empirical thresh of each element in the feature matrix to avoid extremely large values to hurt the model's training, such as the degrees of
"*Common.type_of* " is quite huge, defined as:
$$X[i,j]=\begin{cases}\tau,&c\geq\tau,\\ c,&c<\tau.\end{cases}\qquad\qquad(16)$$
where c is connectivity, τ is an emprical hyperparameter, and we set τ as 2000 in our work.
## A.2 Serr Algorithm
Complexity Analysis. Definitely, applying semantic similarity between relations and given questions is a more straightforward method to identify strong relations. However, the process of such a method is more complicated and time-consuming.
To prove the efficiency of our method, we conduct a comparison experiment to reflect the complexity of the two methods. As is shown in Table 4, the top two rows denote semantic similarity method, and the last row denotes our method. Obviously, our method is more lightweight without extra pre-trained models and the dependence on GPU resources. For comparison, we adopt *Bertbase-uncased* model to conduct the semantic similarity process in this experiment, which can be downloaded in https://huggingface.co/
bert-base-uncased.
| Module | WebQSP | CWQ | | | | |
|--------------------|----------|-------|--------|--------|------|----|
| Params | Time | GPU | Params | Time | GPU | |
| Cosine Distance | 420.10 | 28.8 | √ | 420.10 | 53.5 | √ |
| Euclidean Distance | 420.10 | 30.5 | √ | 420.10 | 55.5 | √ |
| Stem Extraction | - | 4.9 | × | - | 18.3 | × |
Table 4: Comparing SERR module with semantic similarity method, i.e., cosine distance and euclidean distance in terms of model parameters and computing resources. *Time* row denotes total handling time (*minutes*).
Params row denotes model size (MB)
Performance Analysis. Besides, to demonstrate it can be plug-in and infer cases quickly, we further validate its accuracy and inference time, as is shown in Table 5, Note that, since SERR relies on traditional stem extraction rather than semantic understanding to identify the key paths, there is no training period for SERR, and it can be applied to any information-retrieval(IR)-based methods.
Finally, to demonstrate the plug-in attributes of the SERR module, we integrate this module into BiNSM network (He et al., 2021) and the results are shown in Table 6. The results show that SERR can indeed increase the Hit@1/F1 score from 74.3/67.4 to 74.8/68.0 in the WebQSP dataset, and from 48.8/44.0 to 49.5/45.3 in the CWQ dataset.
$$\frac{2}{5}$$
| Factor | Webqsp | CWQ |
|----------------|----------|-------|
| Accuracy (%) | 63.2 | 75.5 |
| Infer Time (s) | 0.18 | 0.32 |
Table 5: Performance of SERR algorithm in terms of accuracy score and inference time in two benchmark datasets. The accuracy score is calculated among recalled cases where close facts lie in its subgraph.
## A.3 Hyper-Parameter Setting.
In order to help reproduce RE-KBQA and its reasoning performance, as shown in Table 7, we list the hyper-parameters of the best results on two benchmark datasets. For the WebQSP dataset, the
| Different cases | WebQSP | CWQ | | |
|-------------------|----------|--------|-------|------|
| Hits@1 | F1 | Hits@1 | F1 | |
| BiNSM | 74.3 | 67.4 | 48.8 | 44.0 |
| with SERR | 74.8 | 68.0 | 49.5 | 45.3 |
| 0.5 ↑ | 0.6 ↑ | 0.7 ↑ | 1.3 ↑ | |
best results are obtained by using the initial learning rate of 0.0008, training batch size of 40, dropout rate of 0.30, reasoning step of 3, and max epoch size of 100. For the CWQ dataset, the best results are obtained by using the initial learning rate of 0.0008, training batch size of 100, dropout rate of 0.30, reasoning step of 3, and max epoch size of 200. For more experiment details, please refer to our code which will be published upon the publication of this work.
| Parameter | WebQSP | CWQ |
|----------------|----------|-------|
| Learning rate | 8e−4 | 8e−4 |
| Batch size | 40 | 100 |
| Eps | 0.95 | 0.95 |
| Dropout | 0.30 | 0.30 |
| Num_step | 3 | 3 |
| Entity_dim | 50 | 50 |
| Word_dim | 300 | 300 |
| Num_epoch | 200 | 100 |
| Relations | 6105 | 6649 |
| Num_candidates | 2000 | 2000 |
## A.4 More Ablation Study
Reasoning Network. One minor modification of our work is that we adopt Transformer Encoder as a reasoning network, of its self-attention mechanism and superior capability of encoding information. As is shown in Table 8, compared with the backbone model (Linear layer), LSTM can acquire slight performance but with obviously longer training time, and Transformer Encoder can obtain promotion for KBQA task with tolerable extra training time. Therefore, different reasoning layers also affect the performance, and adopting Transformer Encoder can benefit a lot with three modules.
![13_image_0.png](13_image_0.png)
![13_image_1.png](13_image_1.png)
Training Settings. From Figure 8(a) and 8(b),
we further study that 5e−4and 3 is the best hyperparameter setting for the learning rate and reasoning step. It is worth noting that, for embeddingfused methods, the more reasoning steps are not the determinant for network performance. We conduct our experiments on 2* V100 GPUs.
| Models | WebQSP | CWQ | | | | |
|-------------|----------|-------|--------|-------|-------|------|
| Hits@1 | F1 | Train | Hits@1 | F1 | Train | |
| RE-KBQAb | 68.7 | 62.8 | 4.3 | 46.8 | 40.5 | 21.1 |
| LSTM | 70.9 | 66.6 | 6.5 | 47.6 | 41.5 | 27.0 |
| 2.2 ↑ | 3.8 ↑ | 2.2 ↑ | 0.8 ↑ | 1.0 ↑ | 5.9 ↑ | |
| Transformer | 71.0 | 66.1 | 4.5 | 47.1 | 42.7 | 21.5 |
| 2.3 ↑ | 3.3 ↑ | 0.2 ↑ | 0.3 ↑ | 1.2 ↑ | 0.4 ↑ | |
## A.5 More Case Analysis
In this section, we deliver more case analysis on simple questions, similarity entity identification, and the intuitive reasoning process of our method.
![14_image_0.png](14_image_0.png)
Simple questions. As shown in Figure 9 , we show a case of one-hop reasoning on the WebQSP
dataset, which proved that RE-KBQA performs well in simple question answering, as the main network can recall correct candidates and the SERR
module can effectively re-rank the candidates.
Similarity entity identification.
To demonstrate our method can indeed distinguish similar entities, we choose a case that needs to reason across similar entities as is shown in Figure10(a). While most of the surrounding edges are the same among candidates of the first step, our method can still select the correct node as the final answer.
RE-KBQA reasoning process.
Figure 10(b)
shows a three-hop reasoning case of our method, to intuitively demonstrate that our method can effectively conduct a multi-hop reasoning process. Note that, the reasoning process of our method can be illustrated as the status transfer of the relation V ( t )
and candidate vectors V ( t ) from one distribution into another, which is not strictly consistent along the reasoning path, thus in some degree solve the problem of knowledge base incompleteness.
![15_image_0.png](15_image_0.png)
.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 7
✓ A2. Did you discuss any potential risks of your work?
Section 7
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and section 1
✓ A4. Have you used AI writing assistants when working on this paper?
ChatGPT for some writing checking.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Section 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 5.2 and appendix D
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 5.2 and appendix D
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 5 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
haemmerl-etal-2023-speaking | Speaking Multiple Languages Affects the Moral Bias of Language Models | https://aclanthology.org/2023.findings-acl.134 | Pre-trained multilingual language models (PMLMs) are commonly used when dealing with data from multiple languages and cross-lingual transfer. However, PMLMs are trained on varying amounts of data for each language. In practice this means their performance is often much better on English than many other languages. We explore to what extent this also applies to moral norms. Do the models capture moral norms from English and impose them on other languages? Do the models exhibit random and thus potentially harmful beliefs in certain languages? Both these issues could negatively impact cross-lingual transfer and potentially lead to harmful outcomes. In this paper, we (1) apply the MORALDIRECTION framework to multilingual models, comparing results in German, Czech, Arabic, Chinese, and English, (2) analyse model behaviour on filtered parallel subtitles corpora, and (3) apply the models to a Moral Foundations Questionnaire, comparing with human responses from different countries. Our experiments demonstrate that, indeed, PMLMs encode differing moral biases, but these do not necessarily correspond to cultural differences or commonalities in human opinions. We release our code and models. | # Speaking Multiple Languages Affects The Moral Bias Of Language Models Katharina Hämmerl1,2And **Björn Deiseroth**3,4And **Patrick Schramowski**4,5,9
Jindrich Libovický ˇ
6and **Constantin A. Rothkopf**5,7,8 Alexander Fraser1,2and **Kristian Kersting**4,5,8,9 1Center for Information and Language Processing, LMU Munich, Germany
{lastname}@cis.lmu.de 2Munich Centre for Machine Learning (MCML), Germany 3Aleph Alpha GmbH, Heidelberg, Germany 4Artificial Intelligence and Machine Learning Lab, TU Darmstadt, Germany 5Hessian Center for Artificial Intelligence (hessian.AI), Darmstadt, Germany 6Faculty of Mathematics and Physics, Charles University, Czech Republic 7Institute of Psychology, TU Darmstadt, Germany 8Centre for Cognitive Science, TU Darmstadt, Germany 9German Center for Artificial Intelligence (DFKI)
## Abstract
Pre-trained multilingual language models
(PMLMs) are commonly used when dealing with data from multiple languages and crosslingual transfer. However, PMLMs are trained on varying amounts of data for each language.
In practice this means their performance is often much better on English than many other languages. We explore to what extent this also applies to moral norms. Do the models capture moral norms from English and impose them on other languages? Do the models exhibit random and thus potentially harmful beliefs in certain languages? Both these issues could negatively impact cross-lingual transfer and potentially lead to harmful outcomes. In this paper, we (1) apply the MORALDIRECTION framework to multilingual models, comparing results in German, Czech, Arabic, Chinese, and English, (2) analyse model behaviour on filtered parallel subtitles corpora, and (3) apply the models to a Moral Foundations Questionnaire, comparing with human responses from different countries. Our experiments demonstrate that PMLMs do encode differing moral biases, but these do not necessarily correspond to cultural differences or commonalities in human opinions. We release our code and models.1
## 1 Introduction
Recent work demonstrated large pre-trained language models capture some symbolic, relational
![0_image_0.png](0_image_0.png)
(Petroni et al., 2019), but also commonsense (Davison et al., 2019) knowledge. The undesirable side of this property is seen in models reproducing biases and stereotypes (e.g., Caliskan et al., 2017; Choenni et al., 2021). However, in neutral terms, language models trained on large data from particular contexts will reflect cultural "knowledge" from those contexts. We wonder whether multilingual models will also reflect cultural knowledge from multiple contexts, so we study moral intuitions and norms that the models might capture.
Recent studies investigated the extent to which language models reflect human values (Schramowski et al., 2022; Fraser et al., 2022).
These works addressed monolingual English mod1https://github.com/kathyhaem/
multiling-moral-bias els. Like them, we probe what the models encode, but we study multilingual models in comparison to monolingual models. Given constantly evolving social norms and differences between cultures and languages, we ask: Can a PMLM capture cultural differences, or does it impose a Western-centric view regardless of context? This is a broad question which we cannot answer definitively. However, we propose to analyse different aspects of monoand multilingual model behaviour in order to come closer to an answer. In this paper, we pose three research questions, and present a series of experiments that address these questions qualitatively:
1. If we apply the MORALDIRECTION framework (Schramowski et al., 2022) to pretrained multilingual language models (PMLMs), how does this behave compared to monolingual models and to humans? (§ 3)
2. How does the framework behave when applied to parallel statements from a different data source? To this end, we analyse model behaviour on Czech-English and GermanEnglish OpenSubtitles data (§ 4).
3. Can the mono- and multi-lingual models make similar inferences to humans on a Moral Foundations Questionnaire (Graham et al., 2011)?
Do they behave in ways that appropriately reflect cultural differences? (§ 5)
The three experiments reinforce each other in finding that our models grasp the moral dimension to some extent in all tested languages. There are differences between the models in different languages, which sometimes line up between multiand mono-lingual models. This does not necessarily correspond with differences in human judgements. As an illustration, Figure 1 shows examples of the MORALDIRECTION score for several verbs in our monolingual and multilingual models.
We also find that the models are very reliant on lexical cues, leading to problems like misunderstanding negation, and disambiguation failures.
This unfortunately makes it difficult to capture nuances. In this work we compare the behaviour of the PMLM both to human data and to the behaviour of monolingual models in our target languages Arabic, Czech, German, Chinese, and English.
## 2 Background 2.1 Pre-Trained Multilingual Lms
PMLMs, such as XLM-R (Conneau et al., 2020), are trained on large corpora of uncurated data, with an imbalanced proportion of language data included in the training. Although sentences with the same semantics in different languages should theoretically have the same or similar embeddings, this language neutrality is hard to achieve in practice (Libovický et al., 2020). Techniques for improving the model's internal semantic alignment
(e.g., Zhao et al., 2021; Cao et al., 2020; Alqahtani et al., 2021; Chi et al., 2021; Hämmerl et al., 2022)
have been developed, but these only partially mitigate the issue. Here, we are interested in a more complex type of semantics and how well they are cross-lingually aligned.
## 2.2 Cultural Differences In Nlp
Several recent studies deal with the question of how cultural differences affect NLP. A recent comprehensive survey (Hershcovich et al., 2022) highlights challenges along the cultural axes of aboutness, values, *linguistic form*, and *common ground*.
Some years earlier, Lin et al. (2018) mined crosscultural differences from Twitter data, focusing on named entities and slang terms from English and Chinese. Yin et al. (2022) probed PMLMs for "geo-diverse commonsense", concluding that for this task, the models are not particularly biased towards knowledge about Western countries. However, in their work the knowledge in question is often quite simple. We are interested in whether this holds for more complex cultural values. In the present study, we assume that using a country's primary language is the simplest way to probe for values from the target cultural context. Our work analyses the extent to which one kind of cultural difference, moral norms, is captured in PMLMs.
## 2.3 Moral Norms In Pre-Trained Lms
Multiple recent studies have investigated the extent to which language models reflect human values
(Schramowski et al., 2022; Fraser et al., 2022).
Further, benchmark datasets (Hendrycks et al.,
2021; Emelin et al., 2021; Ziems et al., 2022) aiming to align machine values with human labelled data have been introduced. Several such datasets
(Forbes et al., 2020; Hendrycks et al., 2021; Alhassan et al., 2022) include scenarios from the "Am I the Asshole?" subreddit, an online community where users ask for an outside perspective on personal disagreements. Some datasets use the community judgements as labels directly, others involve crowdworkers in the dataset creation process.
Other works have trained models specifically to interpret moral scenarios, using such datasets. A
well-known example is Jiang et al. (2021), who propose a fine-tuned UNICORN model they call DELPHI. This work has drawn significant criticism, such as from Talat et al. (2021), who argue "that a model that generates moral judgments cannot avoid creating and reinforcing norms, i.e., being normative". They further point out that the training sets sometimes conflate moral questions with other issues such as medical advice or sentiments.
Hulpus, et al. (2020) explore a different direction. They project the Moral Foundations Dictionary, a set of lexical items related to foundations in Moral Foundations Theory (§ 2.4), onto a knowledge graph. By scoring all entities in the graph for their relevance to moral foundations, they hope to detect moral values expressed in a text. Solaiman and Dennison (2021) aim to adjust a pre-trained model to specific cultural values as defined in a targeted dataset. For instance, they assert "the model should oppose unhealthy beauty [...] standards".
A very interesting and largely unexplored area of research is to consider whether *multilingual* language models capture differing moral norms. For instance, moral norms in the Chinese space in a PMLM might systematically differ from those in the Czech space. Arora et al. (2022) attempt to probe pre-trained models for cultural value differences using Hofstede's cultural dimensions theory (Hofstede, 1984) and the World Values Survey
(Haerpfer et al., 2022). They convert the survey questions to cloze-style question probes, obtaining score values by subtracting the output distribution logits for two possible completions from each other.
However, they find mostly very low correlations of model answers with human references. Only a few of their results show statistically significant correlations. They conclude that the models differ between languages, but that these differences do not map well onto human cultural differences.
Due to the observation that the output distributions themselves do not reflect moral values well, we choose the MORALDIRECTION framework for our studies. In previous work, this approach identified a subspace of the model weights relating to a sense of "right" and "wrong" in English.
## 2.4 Moral Foundations Theory
Moral Foundations Theory (Haidt and Joseph, 2004) is a comparative theory describing what it calls *foundational moral principles*, whose relative importance can be measured to describe a given person's or culture's moral attitudes. Graham et al. (2009) name the five factors "Care/Harm",
"Fairness/Reciprocity", "Authority/Respect", "Ingroup/Loyalty", and "Purity/Sanctity". Their importance varies both across international cultures
(Graham et al., 2011) and the (US-American) political spectrum (Graham et al., 2009). The theory has been criticised by some for its claim of innateness and its choice of factors, which has been described as "contrived" (Suhler and Churchland, 2011). Nevertheless, the associated Moral Foundations Questionnaire (Graham et al., 2011) has been translated into many languages and the theory used in many different studies (such as Joeckel et al., 2012; Piazza et al., 2019; Dogruyol et al. ˘ , 2019).
An updated version of the MFQ is being developed by Atari et al. (2022). In § 5, we score these questions using our models and compare with human responses from previous studies on the MFQ.
## 2.5 Sentence Transformers
By default, BERT-like models output embeddings at a subword-token level. However, for many applications, including ours, sentence-level representations are necessary. In our case, inducing the moral direction does not work well for mean-pooled token representations, leading to near-random scores in many cases (see § 3.1). Reimers and Gurevych
(2019) proposed Sentence-Transformers as a way to obtain meaningful, constant sized, sentence representations from BERT-like models. The first Sentence-BERT (S-BERT) models were trained by tuning a pre-trained model on a sentence pair classification task. By encoding each sentence separately and using a classification loss, the model learns more meaningful representations.
To obtain multilingual sentence representations, they proposed a student-teacher training approach using parallel corpora (Reimers and Gurevych, 2020), where a monolingual S-BERT model acts as a teacher and a pre-trained multilingual model as a student model. Such an approach forces the parallel sentences much closer together than in the original PMLM, which is not always desirable. In our case, we might be unable to distinguish the effects of the S-BERT training from the original model, which would interfere with probing the original model.
Unlike their work, we train a multilingual sentence transformer by translating the initial training data into our target languages (§ 3.2), and show that this is effective. With this contribution, we show that multilingual S-BERT models can be trained in the same way as monolingual ones. Our approach does not require a teacher-student training setup.
Note that we do require comparable datasets in size and ideally topics for each language. While we do not explicitly align the data, we solve this by using machine translated versions of existing datasets, which means we have implicitly parallel data.
## 3 **Inducing Moral Dimensions In Pmlms**
We choose five languages to evaluate the behaviour of the multilingual models: Modern Standard Arabic (ar), Czech (cs), German (de), English (en),
and Mandarin Chinese (zh). These are all relatively high-resource languages, so we hope the model will be able to reliably detect cultural knowledge in each language. Since we rely on machine translation, using high-resource languages also ensures good translation quality. We note here that languages and cultures or countries are at best approximately equivalent (cf. Lin et al., 2018; Sahlgren et al.,
2021). For instance, Arabic, English, and Chinese are standardised varieties that are written in a wide range of different contexts or cultures, which are likely merged together in the model to some degree.
However, separation by language is the best way we have to distinguish cultural contexts within a multilingual model. As a point of comparison, we choose a monolingual language model for each of our target languages (see App. A for details).
## 3.1 Moraldirection **Framework**
We use the MORALDIRECTION framework by Schramowski et al. (2022). In the first step, this method encodes a set of positively and negatively connotated action verbs with a sentence embedding model. In Schramowski et al. (2022), this is an SBERT model. Each action verb is inserted into a set of ten template questions, such as "Should I
[verb]?", "Is it examplary to [verb]?", and the output embedding for a verb is the mean over the embeddings of these questions. Next, PCA is applied to the outputs, to obtain the "moral direction" subspace of the model. Since the inputs are templates with only individual verbs changing, they are linguistically homogeneous, and the most salient differences for the PCA are the value judgements. Ideally, a high amount of variance should be explained by the first principal component. The scores of these initial verbs are then normalised to lie within
[−1, 1]. Subsequent scores can sometimes lie outside this range despite applying the normalisation.
The scores are then read as a value estimation along one axis, with scores around 0 being "neutral",
scores close to -1 being very "bad", and scores close to 1 very "good". However, note that the results we list in Tables 1-3 and 8 are correlations of model scores with user study data or correlations of model scores with other model scores.
We choose to use MORALDIRECTION because it is able to work directly with sentence embeddings and extract a reasonably human-correlated moral direction from them, producing a value score along a single axis. This makes it computationally inexpensive to transfer to other languages and datasets.
A drawback is that it is induced on short, unambiguous phrases, and can be expected to work better on such phrases. Deriving a score along a single axis can also be limiting or inappropriate in certain contexts. See also the discussion in Limitations.
For a list of the verbs and questions used to derive the transformation, see the source paper.
Schramowski et al. (2022) also conduct a user study on Amazon MTurk to obtain reference scores for the statements in question.
To test this method on multilingual and nonEnglish monolingual models, we machine translate both the verbs and the filled question templates used in the above study. See Appendix B for the MT systems used, and a discussion of translation quality. We edited some of the questions to ensure good translation.2 Our primary measure is the correlation of resulting model scores with responses from the study in Schramowski et al. (2022).
We initially tested the method on mBERT (Devlin et al., 2019) and XLM-R (Conneau et al.,
2020), as well as a selection of similarly sized monolingual models (Devlin et al., 2019; Antoun et al., 2020; Straka et al., 2021; Chan et al., 2020),
by mean-pooling their token representations. See Appendix Table 5 for a list of the models used. Table 1 shows these initial results with mean-pooling.
However, this generally did not achieve a correlation with the user study. There were exceptions to this rule—i.e., the Chinese monolingual BERT, and the English and Chinese portions of mBERT. This 2e.g. "smile to sb." → "smile at sb."
2140
| Model | en | ar | cs | de | zh |
|---------------------------------|-------------------------|------------|------|------|------|
| mBERT (mean-pooled) | 0.65 -0.10 | 0.12 -0.18 | 0.62 | | |
| XLM-R (mean-pooled) | -0.30 -0.07 -0.03 -0.14 | 0.10 | | | |
| monolingual (mean-pooled) -0.13 | 0.46 | 0.07 | 0.10 | 0.70 | |
| monolingual S-BERT-large | 0.79 | - | - | - | - |
| XLM-R (S-BERT) | 0.85 | 0.82 | 0.85 | 0.83 | 0.81 |
Table 1: Correlation of MORALDIRECTION scores with user study data for different pre-trained monoand multi-lingual models. First three rows used meanpooled sentence embeddings; last two rows used embeddings resulting from sentence-transformers (Reimers and Gurevych, 2019).
may be due to details in how the different models are trained, or how much training data is available for each language in the multilingual models. Table 1 also includes results from the monolingual, large English S-BERT, and an existing S-BERT
version of XLM-R
3(Reimers and Gurevych, 2020).
These two models did show good correlation with the global user study, highlighting that this goal requires semantic sentence representations.
## 3.2 Sentence Representations
The existing S-BERT XLM-R model uses the student-teacher training with explicitly aligned data mentioned in § 2.5. As we discuss there, we aim to change semantic alignment in the PMLM
as little as possible before probing it. We also need S-BERT versions of the monolingual models.
Therefore, we train our own S-BERT models. We use the sentence-transformers library (Reimers and Gurevych, 2019), following their training procedure for training with NLI data.4 Although we do not need explicitly aligned data, we do require comparable corpora in all five languages, so we decide to use MNLI in all five languages. In addition to the original English MultiNLI dataset (Williams et al.,
2018), we take the German, Chinese and Arabic translations from XNLI (Conneau et al., 2018), and provide our own Czech machine translations (cf.
Appendix B). Each monolingual model was tuned with the matching translation, while XLM-R*Base* was tuned with all five dataset translations. Thus, our multilingual S-BERT model was not trained directly to align parallel sentences, but rather trained with similar data in each involved language (without explicit alignment). For more training details,
| Model | en | ar | cs | de | zh |
|----------------------------------------------|------|------|------|------|------|
| XLM-R + MNLI (S-BERT, all 5 langs) | 0.86 | 0.77 | 0.74 | 0.81 | 0.86 |
| monolingual + MNLI (S-BERT, respective lang) | 0.86 | 0.76 | 0.81 | 0.84 | 0.80 |
Table 2: Correlation of MORALDIRECTION scores from our mono- and multi-lingual S-BERT models with user study data.
see Appendix D. We release the resulting S-BERT models to the Huggingface hub.
## 3.3 Results
Table 2 shows the user study correlations of our S-BERT models. Clearly, sentence-level representations work much better for inducing the moral direction, and the method works similarly well across all target languages. Figures 1 and 5 show examples of verb scores across models and languages, further illustrating that this method is a reasonable starting point for our experiments.
For Arabic and the Czech portion of XLM-R, the correlations are slightly lower than the other models. Notably, Arabic and Czech are the smallest of our languages in XLM-R, at 5.4 GB and 4.4 GB of data (Wenzek et al., 2020), while their monolingual models contain 24 GB and 80 GB of data.
Since in the case of Czech, the correlation is higher in the monolingual model, and XLM-R and the monolingual model disagree somewhat (Table 3), the lower correlation seems to point to a flaw of its representation in XLM-R. For Arabic, the correlation of the monolingual model with English is similar to that seen in XLM-R, but the monolingual model also disagrees somewhat with the XLM-R representation (Table 3). This may mean there is actually some difference in attitude
(based on the monolingual models), but XLM-R
also does not capture it well (based on the XLM-R
correlations). Unfortunately, Schramowski et al.
(2022) collected no data specifically from Arabic or Czech speakers to illuminate this.
In Table 3 we compare how much the scores correlate with each other when querying XLM-R
and the monolingual models in different languages.
The diagonal shows correlations between the monolingual model of each language and XLM-R in that language. Above the diagonal, we show how much the monolingual models agree with each other, while below the diagonal is the agreement of different languages within XLM-R. On the diagonal, we compare each monolingual model with the match-
| language | en | ar | cs | de | zh |
|------------|------|------|------|------|------|
| en | 0.93 | 0.86 | 0.92 | 0.89 | 0.91 |
| ar | 0.86 | 0.84 | 0.89 | 0.89 | 0.86 |
| cs | 0.90 | 0.78 | 0.86 | 0.92 | 0.92 |
| de | 0.95 | 0.87 | 0.88 | 0.95 | 0.91 |
| zh | 0.94 | 0.89 | 0.84 | 0.94 | 0.94 |
![5_image_0.png](5_image_0.png)
ing language in XLM-R. For English, German and Chinese, these show high correlations. The lowest correlation overall is between the Czech and Arabic portions of XLM-R, while the respective monolingual models actually agree more. The monolingual S-BERT models are generally at a similar level of correlation with each other as the multilingual model. German and Chinese, however, show a higher correlation with English in the multilingual model than in their respective monolingual models, which may show some interference from English.
We also show the correlations of languages within the pre-existing S-BERT model,5 which was trained with parallel data, in Table 8. Here, the correlations between languages are much higher, showing that parallel data training indeed changes the model behaviour on the *moral dimension*. These correlations are higher than that of any one model with the user study data, so this likely corresponds to an artificial similarity with English, essentially removing cultural differences from this model.
Summarised, the experiments in this section extend Schramowski et al. (2022) to a multilingual setting and indicate that multilingual LMs indeed capture moral norms. The high mutual correlations of scores show that the differences between models and languages are relatively small in this respect.
Note, however, that the tested statements provided by Schramowski et al. (2022) are not explicitly designed to grasp cultural differences. We thus add further experiments to address this question.
## 4 Qualitative Analysis On Parallel Data
To better understand how these models generalise for various types of texts, we conduct a qualita-5sentence-transformers/xlm-r-100langsbert-base-nli-mean-tokens tive study using parallel data. For a parallel sentence pair, the MORALDIRECTION scores should be similar in most cases. Sentence pairs where the scores differ considerably may indicate cultural differences, or issues in the models. In practice, very large score differences appear to be more related to the latter. This type of understanding is important for further experiments with these models.
We conduct our analysis on OpenSubtitles parallel datasets (Lison and Tiedemann, 2016),6 which consist of relatively short sentences. Given that the MORALDIRECTION is induced on short phrases, we believe that short sentences will be easier for the models. The subtitles often concern people's behaviour towards each other, and thus may carry some moral sentiment. We use English-German and English-Czech data for our analysis. To obtain the moral scores, we encode each sentence with the respective S-BERT model, apply the PCA transformation, and divide the first principal component by the normalising parameter.
Our analysis focuses on sentence pairs with very different scores. We take steps to filter out mistranslated sentence pairs—see Appendix H. Below, we discuss examples of where scores differ noticeably even when the translations are adequate. Using Czech-English and German-English data, we compare the monolingual models with XLM-R,
XLM-R with the monolingual models, and the monolingual models with each other. This analysis is based on manual inspection of 500 sentence pairs with the highest score differences for each combination. Note that many of the sentence pairs were minor variations of each other, which significantly sped up the analysis. Relevant examples are listed with their MORALDIRECTION scores in Table 4 and Table 10 in the Appendix.
## 4.1 Reliance On Lexical Items
A common theme for many examples is an overreliance on individual lexical items. For example,
"Traitors ... like you!" receives a positive score in English, while the German equivalent is correctly scored as negative. Most likely, the English models took a shortcut: "like you" is seen as a good thing.
Similarly, XLM-R in English scores "They're dying to meet you." somewhat negatively. The English BERT gives a positive score. However, arguably this is a case where the most correct answer would be neutral, since this is more a positive 6http://www.opensubtitles.org/
monoling XLM-R
| de | en | de | en | de | en |
|------------------------------------|----------------------------|-------|-------|-------|-------|
| Pures Gift. | Pure poison. | -0.61 | -0.71 | 0.65 | -0.69 |
| Ich erwürg dich! | I'll strangle you! | -0.41 | -0.58 | 0.90 | -0.62 |
| Hab jemandem einen Gefallen getan. | I did someone a favour. | 0.39 | 0.28 | -0.41 | 0.73 |
| Verräter ... wie Sie! | Traitors ... like you! | -0.56 | 0.19 | -0.39 | 0.72 |
| Sie brennen darauf, dich kennenzulernen. | They're dying to meet you. | 0.44 | 0.73 | 0.52 | -0.31 |
| Ich vermisse ihn sehr. | I really miss him. | 0.69 | 0.23 | -0.41 | -0.26 |
| Er schätzt mich. | He values me. | 1.12 | 0.31 | 0.04 | 0.88 |
Table 4: Examples from the German-English OpenSubtitles data for which there is a large, spurious contrast between MORALDIRECTION scores. Scores that stand out as unreasonable are *italicised*.
sentiment than any moral concern.
## 4.2 Multilinguality And Polysemy
Continuing the theme of literalness, another dimension is added to this in the multilingual setting.
For instance, XLM-R scores the German "Pures Gift." (*pure poison*) as positive, likely because the key word "Gift" looks like English "gift", as in present. However, the model also makes less explainable mistakes: many sentences with "erwürgen" (*to strangle*) receive a highly positive score.
In the Czech-English data, there are even more obvious mistakes without a straightforward explanation. Some Czech words are clearly not understood by XLM-R: For instance, sentences with "šte-ˇ
drý" (*generous*) are negative, while any sentence with "pácidlo" ( ˇ *crowbar*) in it is very positive in XLM-R. Phrases with "vrah" (*murderer*) get a positive score in XLM-R, possibly because of transliterations of the Russian word for medical doctor.
Most of these obvious mistakes of XLM-R are not present in RobeCzech. However, "Otrávils nás"
(*You poisoned us*) receives a positive score from RobeCzech for unknown reasons.
Confusing one word for another can also be a problem within a single language: For example,
"Gefallen" (a favour) receives a negative score from XLM-R in many sentences. It is possible this model is confusing this with "gefallen" (past participle of
"fallen", to fall), or some other similar word from a different language. "Er schätzt mich" and similar are highly positive in gBERT, as well as English XLM-R, but have a neutral score in German XLM-R. Likely the latter is failing to disambiguate here, and preferring "schätzen" as in *estimate*.
## 5 Moral Foundations Questionnaire
The MFQ has been applied in many different studies on culture and politics, meaning there is human response data from several countries available. We pose the MFQ questions from Graham et al. (2011)
to our models, in order to compare the model scores with data from previous studies. We use the translations provided on the Moral Foundations website.7 Since the first part of the MFQ consists of very complex questions, we rephrase these into simple statements (see Appendix J). Many of the statements in the first half of the questionnaire become reverse-coded by simplifying them, that is, someone who values the aspect in question would be expected to answer in the negative. For these statements, we multiply the model score by -1. Further, we know that language models struggle with negation (Kassner and Schütze, 2020), so we remove
"not" or "never" from two statements and flip the sign accordingly. In the same way, we remove "a lack of" from two statements.
These adjustments already improved the coherence of the resulting aspect scores, but we found further questions being scored by the models as if reverse-coded, i.e., with a negative score when some degree of agreement was expected. These were not simply negated statements, but they did tend to contain lexical items that were strongly negatively associated, and in multiple cases contained a negative moral judgement of the action or circumstance in question. Because the models appear to be so lexically focused (see § 4.1), this combination led to a strong negative score for some of these questions. We decided to rephrase such statements as well, usually flipping their sign while changing the wording as little as possible. Still, we 7https://moralfoundations.org/questionnaires/
![7_image_0.png](7_image_0.png)
![7_image_1.png](7_image_1.png)
note here that this should be considered a type of prompt engineering, and that implicatures of the statements may have changed through this process.
We provide the list of rephrased English statements and multipliers in Appendix Table 11.
We manually apply the same changes to the translations. The full list of English and translated statements, as well as model scores for each question, is available as a CSV file. Finally, we mean-pool the question scores within each aspect to obtain the aspect scores. Most of the model scores for each question will be within [-1, 1]. The results are shown in Figure 2.
## 5.1 Human Response Data
Also in Figure 2, we show German data from Joeckel et al. (2012), Czech data from Beneš
(2021), US data from Graham et al. (2011), and Chinese data from Wang et al. (2019) for comparison. Note that these are not necessarily representative surveys. The majority of the data in question were collected primarily in a university context and the samples skew highly educated and politically left. For Germany, the US and the Czech Republic, the individual variation, or variation between political ideologies seems to be larger than the variation between the countries. The Chinese sample scores more similarly to conservative respondents in the Western countries. Although many individuals score in similar patterns as the average, the difference between individuals in one country can be considerable. As an example, see Figure 7.
None of our models' scores map directly onto average human responses. The model scores do not use the full range of possible values, but even the patterns of relative importance do not match the average human patterns. Scores sometimes vary considerably in different models and different languages within XLM-R, and not necessarily in a way that would follow from cultural differences.
The average scores within XLM-R are somewhat more similar to each other than the scores from the monolingual models are, giving some weak evidence that the languages in the multilingual model assimilate to one another. However, some differences between the monolingual models are also reflected in the multilingual model.
## 5.2 Sanity Check
We compare against scores from the unmodified, mean-pooled XLM-R models, shown in Figure 3.
These models did not have the Sentence-BERT tuning applied to them, but otherwise we used the same procedure to obtain the scores. The inconsistent and very unlike human scores reinforce the finding from § 3 that mean-pooled representations are not useful for our experiments. They also confirm that the results in our main MFQ experiments are not arbitrary.
## 6 Conclusions
We investigated the *moral dimension* of pre-trained language models in a multilingual context. In this section, we discuss our research questions:
(1) Multilingual MORALD**IRECTION**. We applied the MORALDIRECTION framework to XLM-R, as well as monolingual language models in five languages. We were able to induce models that correlate with human data similarly well as their English counterparts in Schramowski et al.
(2022). We analysed differences and similarities across languages.
In the process, we showed that sentence-level representations, rather than mean-pooled tokenlevel representations, are necessary in order to induce a reasonable moral dimension for most of these models. We trained monolingual S-BERT
models for our five target languages Arabic, Czech, German, English, and Mandarin Chinese. As well, we created a multilingual S-BERT model from XLM-R which was trained with MNLI data in all five target languages.
(2) Behaviour on Parallel Subtitles. A limitation of the MORALDIRECTION is that it is induced on individual words, and thus longer sentences are a significant challenge for the models. Still, we were able to test them on parallel subtitles data, which contains slightly longer, but predominantly still short, sentences. Problems that showed up repeatedly in this experiment were an over-reliance on key lexical items and a failure to understand compositional phrases, particularly negation. Additionally, typical problems of PMLMs, such as disambiguation problems across multiple languages, were noticeable within XLM-R. Non-English languages appeared more affected by such issues, despite the fact that all our target languages are relatively high resource.
(3) Moral Foundations Questionnaire. Our experiments with the MFQ reinforce the conclusion that the MORALDIRECTION models capture a general sense of right and wrong, but do not display entirely coherent behaviour. Again, compositional phrases and negation were an issue in multiple cases. We had set out to investigate whether cultural differences are adequately reflected in the models' cross-lingual behaviour. However, our findings indicate that rather, there are other issues with the cross-lingual transfer that mean we cannot make such nuanced statements about the model behaviour. To the extent that model behaviour differs for translated data, this does not seem to match cultural differences between average human responses from different countries.
We had initially wondered whether models would impose values from an English-speaking context on other languages. Based on this evidence, it seems that the models do differentiate between cultures to some extent, but there are caveats: The differences are not necessarily consistent with human value differences, which means the models are not always adequate. The problem appears to be worse when models are trained on smaller data for a given language. Meanwhile, German and Chinese have noticeably high agreement with English in our multilingual model, and all languages are extremely highly correlated in the pre-existing parallel-data S-BERT model (Table 8). This clearly shows that training with parallel data leads to more similar behaviour in this dimension, more or less removing cultural differences, but indeed there may be some transference even without parallel data.
Future Work. This leads to several future research questions: (i) Can we reliably investigate encoded (moral) knowledge reflected by PMLMs on latent representations or neuron activations? Or do we need novel approaches? For instance, Jiang et al. (2021) suggest evaluating the output of generative models and, subsequently, Arora et al. (2022)
apply masked generation using PMLMs to probe cultural differences in values. However, the generation process of LMs is highly dependent, among other things, on the sampling process. Therefore, it is questionable if such approaches provide the required transparency. Nevertheless, Arora et al.
(2022) come to a similar conclusion as indicated by our results: PMLMs encode differences between cultures. However, these are weakly correlated with human surveys, which leads us to the second future research question: (ii) How can we reliably teach large-scale LMs to reflect cultural differences but also commonalities? Investigating PMLMs' moral direction and probing the generation process leads to inconclusive results, i.e., these models encode differences, which, however, do not correlate with human opinions. But correlating with human opinions is a requirement for models to work faithfully in a cross-cultural context. Therefore, we advocate for further research on teaching cultural characteristics to LMs.
## Limitations
The MORALDIRECTION framework works primarily for short, unambiguous phrases. While we show that it is somewhat robust to longer phrases, it does not deal well with negation or certain types of compositional phrases. We showed that in such cases, prompt engineering seems to be necessary in order to get coherent answers. Inducing the MORALDIRECTION was done on a small set of verbs, and the test scenarios in this paper—apart from § 4—are also relatively small.
The scope of our work is specific to our stated target languages, which are all relatively highresource, meaning the method may not hold up for languages with smaller corpora, especially in the context of PMLMs. This work presents primarily an exploratory analysis and qualitative insights.
Another point is that the monolingual models we used may not be precisely comparable. Table 5 lists details of parameter size, training, tokenizers, data size and data domain. The models are all similarly sized, but data size varies considerably. XLM-R
and RobeCzech do not use next sentence prediction as part of their training objective. However, the authors of RoBERTa (Liu et al., 2019) argue this difference does not affect representation quality.
Further, exactly comparable models do not exist for every language we use. We rather choose wellperforming, commonly-used models. Thus, we believe the model differences play a negligible role in the context of our scope.
More broadly speaking, the present work makes the strong assumption that cultural context and language are more or less equivalent, which does not hold up in practice. Furthermore, MORALDIREC-TION, like related methods, only consider a single axis, representing a simplistic model of morality.
In the same vein, these models will output a score for any input sentence, including morally neutral ones, sometimes leading to random answers.
## Broader Impacts
Language models should not decide moral questions in the real world, but research in that direction might suggest that this is in fact possible. Besides undue anthropomorphising of language models, using them to score moral questions could lead to multiple types of issues: The models may reproduce and reinforce questionable moral beliefs. The models may hallucinate beliefs. And particularly in the context of cross-lingual and cross-cultural work, humans might base false, overgeneralising, or stereotyping assumptions about other cultures on the output of the models.
## Acknowledgements
We thank Sven Jöckel for providing us with their raw results from their MFQ studies, and Hashem Sellat and Wen Lai for their help with formulating the MFQ questions in Arabic and Chinese. Thank you to Morteza Dehghani.
This publication was supported by LMUexcellent, funded by the Federal Ministry of Education and Research (BMBF) and the Free State of Bavaria under the Excellence Strategy of the Federal Government and the Länder; and by the German Research Foundation (DFG; grant FR 2829/4-1). The work at CUNI was supported by Charles University project PRIMUS/23/SCI/023 and by the European Commission via its Horizon research and innovation programme (No. 870930 and 101070350). Further, we gratefully acknowledge support by the Federal Ministry of Education and Research (BMBF) under Grant No. 01IS22091.
This work also benefited from the ICT-48 Network of AI Research Excellence Center "TAILOR" (EU
Horizon 2020, GA No 952215), the Hessian research priority program LOEWE within the project WhiteBox, the Hessian Ministry of Higher Education, and the Research and the Arts (HMWK)
cluster projects "The Adaptive Mind" and "The Third Wave of AI".
## References
Areej Alhassan, Jinkai Zhang, and Viktor Schlegel.
2022. 'Am I the bad one'? Predicting the moral judgement of the crowd using pre–trained language models. In Proceedings of the Language Resources and Evaluation Conference, page 267–276, Marseille, France. European Language Resources Association.
Sawsan Alqahtani, Garima Lalwani, Yi Zhang, Salvatore Romeo, and Saab Mansour. 2021. Using optimal transport as alignment objective for fine-tuning multilingual contextualized embeddings. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 3904–3919, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Wissam Antoun, Fady Baly, and Hazem Hajj. 2020.
AraBERT: Transformer-based model for Arabic language understanding. In *Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language*
Detection, pages 9–15, Marseille, France. European Language Resource Association.
Arnav Arora, Lucie-Aimée Kaffee, and Isabelle Augenstein. 2022. Probing pre-trained language models for cross-cultural differences in values. *CoRR*,
abs/2203.13722.
Mohammad Atari, Jonathan Haidt, Jesse Graham, Sena Koleva, Sean T Stevens, and Morteza Dehghani.
2022. Morality beyond the WEIRD: How the nomological network of morality varies across cultures.
PsyArXiv.
Michal Beneš. 2021. Psychometrické hodnocení dotazníku moral foundations questionnaire [online].
Master thesis, Masarykova univerzita, Filozofická fakulta, Brno, Czech Republic. Supervisor: Helena Klimusová.
Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases.
Science, 356(6334):183–186.
Steven Cao, Nikita Kitaev, and Dan Klein. 2020. Multilingual alignment of contextual word representations.
CoRR, abs/2002.03518.
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo LopezGazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1–14, Vancouver, Canada. Association for Computational Linguistics.
Branden Chan, Stefan Schweter, and Timo Möller. 2020.
German's next language model. In *Proceedings of* the 28th International Conference on Computational Linguistics, pages 6788–6796, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Pinzhen Chen, Jindˇrich Helcl, Ulrich Germann, Laurie Burchell, Nikolay Bogoychev, Antonio Valerio Miceli Barone, Jonas Waldendorf, Alexandra Birch, and Kenneth Heafield. 2021. The University of Edinburgh's English-German and English-Hausa submissions to the WMT21 news translation task. In Proceedings of the Sixth Conference on Machine Translation, pages 104–109, Online. Association for Computational Linguistics.
Zewen Chi, Li Dong, Bo Zheng, Shaohan Huang, XianLing Mao, Heyan Huang, and Furu Wei. 2021. Improving pretrained cross-lingual language models via self-labeled word alignment. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3418–3430, Online. Association for Computational Linguistics.
Rochelle Choenni, Ekaterina Shutova, and Robert van Rooij. 2021. Stepmothers are mean and academics are pretentious: What do pretrained language models learn about you? In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 1477–1491, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440–
8451, Online. Association for Computational Linguistics.
Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating crosslingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475–2485, Brussels, Belgium. Association for Computational Linguistics.
Joe Davison, Joshua Feldman, and Alexander Rush.
2019. Commonsense knowledge mining from pretrained models. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference* on Natural Language Processing (EMNLP-IJCNLP),
pages 1173–1178, Hong Kong, China. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Burak Dogruyol, Sinan Alper, and Onurcan Yilmaz. ˘
2019. The five-factor model of the moral foundations theory is stable across weird and non-weird cultures.
Personality and Individual Differences, 151:109547.
Denis Emelin, Ronan Le Bras, Jena D. Hwang, Maxwell Forbes, and Yejin Choi. 2021. Moral stories: Situated reasoning about norms, intents, actions, and their consequences. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 698–718, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Maxwell Forbes, Jena D. Hwang, Vered Shwartz, Maarten Sap, and Yejin Choi. 2020. Social chemistry 101: Learning to reason about social and moral norms. In *Proceedings of the 2020 Conference on*
Empirical Methods in Natural Language Processing
(EMNLP), pages 653–670, Online. Association for Computational Linguistics.
Kathleen C. Fraser, Svetlana Kiritchenko, and Esma Balkir. 2022. Does moral code have a moral code? probing delphi's moral philosophy. In Proceedings of the 2nd Workshop on Trustworthy Natural Language Processing (TrustNLP 2022), pages 26–42, Seattle, U.S.A. Association for Computational Linguistics.
Jesse Graham, Jonathan Haidt, and Brian A. Nosek.
2009. Liberals and conservatives rely on different sets of moral foundations. *Journal of Personality and* Social Psychology, 96(5):1029–46.
Jesse Graham, Brian Nosek, Jonathan Haidt, Ravi Iyer, Sena P Koleva, and Peter H Ditto. 2011. Mapping the moral domain. *Journal of Personality and Social* Psychology, 101 (2):366–385.
C Haerpfer, R Inglehart, A Moreno, C Welzel, K Kizilova, J Diez-Medrano, M Lagos, P Norris, E Ponarin, and B Puranen. 2022. World values survey: Round seven—country-pooled datafile version 3.0. *JD Systems Institute: Madrid, Spain*.
Jonathan Haidt and Craig Joseph. 2004. Intuitive ethics:
How innately prepared intuitions generate culturally variable virtues. *Daedalus*, 133(4):55–66.
Katharina Hämmerl, Jindˇrich Libovický, and Alexander Fraser. 2022. Combining static and contextualised multilingual embeddings. In *Findings of the Association for Computational Linguistics: ACL 2022*,
pages 2316–2329, Dublin, Ireland. Association for Computational Linguistics.
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yunhsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, and Ray Kurzweil. 2017. Efficient natural language response suggestion for smart reply. *CoRR*, abs/1705.00652.
Dan Hendrycks, Collin Burns, Steven Basart, Andrew Critch, Jerry Li, Dawn Song, and Jacob Steinhardt.
2021. Aligning AI with shared human values. In Proceedings of the International Conference on Learning Representations (ICLR). OpenReview.net.
Daniel Hershcovich, Stella Frank, Heather Lent, Miryam de Lhoneux, Mostafa Abdou, Stephanie Brandl, Emanuele Bugliarello, Laura Cabello Piqueras, Ilias Chalkidis, Ruixiang Cui, Constanza Fierro, Katerina Margatina, Phillip Rust, and Anders Søgaard. 2022. Challenges and strategies in crosscultural NLP. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6997–7013, Dublin, Ireland. Association for Computational Linguistics.
Geert Hofstede. 1984. *Culture's Consequences: International Differences in Work-Related Values*. Cross Cultural Research and Methodology. SAGE Publications.
Ioana Hulpus,, Jonathan Kobbe, Heiner Stuckenschmidt, and Graeme Hirst. 2020. Knowledge graphs meet moral values. In *Proceedings of the Ninth Joint Conference on Lexical and Computational Semantics*,
pages 71–80, Barcelona, Spain (Online). Association for Computational Linguistics.
Liwei Jiang, Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Maxwell Forbes, Jon Borchardt, Jenny Liang, Oren Etzioni, Maarten Sap, and Yejin Choi. 2021. Delphi: Towards machine ethics and norms.
CoRR, abs/2110.07574.
Sven Joeckel, Nicholas David Bowman, and Leyla Dogruel. 2012. Gut or game? the influence of moral intuitions on decisions in video games. *Media Psychology*, 15(4):460–485.
Marcin Junczys-Dowmunt. 2018. Dual conditional cross-entropy filtering of noisy parallel corpora. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 888–895, Belgium, Brussels. Association for Computational Linguistics.
Nora Kassner and Hinrich Schütze. 2020. Negated and misprimed probes for pretrained language models:
Birds can talk, but cannot fly. In *Proceedings of the* 58th Annual Meeting of the Association for Computational Linguistics, pages 7811–7818, Online. Association for Computational Linguistics.
Jindˇrich Libovický, Rudolf Rosa, and Alexander Fraser.
2020. On the language neutrality of pre-trained multilingual representations. In *Findings of the Association for Computational Linguistics: EMNLP 2020*,
pages 1663–1674, Online. Association for Computational Linguistics.
Bill Yuchen Lin, Frank F. Xu, Kenny Zhu, and Seungwon Hwang. 2018. Mining cross-cultural differences and similarities in social media. In *Proceedings* of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 709–719, Melbourne, Australia. Association for Computational Linguistics.
Pierre Lison and Jörg Tiedemann. 2016. OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles. In *Proceedings of the Tenth* International Conference on Language Resources and Evaluation (LREC'16), pages 923–929, Portorož, Slovenia. European Language Resources Association
(ELRA).
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *CoRR*.
Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference* on Natural Language Processing (EMNLP-IJCNLP),
pages 2463–2473, Hong Kong, China. Association for Computational Linguistics.
Jared Piazza, Paulo Sousa, Joshua Rottman, and Stylianos Syropoulos. 2019. Which appraisals are foundational to moral judgment? Harm, injustice, and beyond. Social Psychological and Personality Science, 10(7):903–913.
Martin Popel, Marketa Tomkova, Jakub Tomek, Łukasz Kaiser, Jakob Uszkoreit, Ondˇrej Bojar, and Zdenekˇ
Žabokrtsky. 2020. Transforming machine transla- `
tion: a deep learning system reaches news translation quality comparable to human professionals. *Nature* communications, 11(1):1–15.
Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT
evaluation. In *Proceedings of the 2020 Conference* on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics.
Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.
Nils Reimers and Iryna Gurevych. 2020. Making monolingual sentence embeddings multilingual using knowledge distillation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4512–4525, Online. Association for Computational Linguistics.
Magnus Sahlgren, Fredrik Carlsson, Fredrik Olsson, and Love Börjeson. 2021. It's basically the same language anyway: the case for a Nordic language model.
In *Proceedings of the 23rd Nordic Conference on* Computational Linguistics (NoDaLiDa), pages 367–
372, Reykjavik, Iceland (Online). Linköping University Electronic Press, Sweden.
Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin A. Rothkopf, and Kristian Kersting. 2022.
Large pre-trained language models contain humanlike biases of what is right and wrong to do. *Nature* Machine Intelligence.
Irene Solaiman and Christy Dennison. 2021. Process for adapting language models to society (palms) with values-targeted datasets. In *Advances in Neural Information Processing Systems*, volume 34, pages 5861–
5873. Curran Associates, Inc.
Milan Straka, Jakub Náplava, Jana Straková, and David Samuel. 2021. RobeCzech: Czech RoBERTa, a monolingual contextualized language representation model. In *Text, Speech, and Dialogue*, pages 197–
209, Cham. Springer International Publishing.
Christopher Suhler and Pat Churchland. 2011. Can innate, modular "foundations" explain morality? Challenges for Haidt's Moral Foundations Theory. *Journal of Cognitive Neuroscience*, 23:2103–16; discussion 2117.
Zeerak Talat, Hagen Blix, Josef Valvoda, Maya Indira Ganesh, Ryan Cotterell, and Adina Williams. 2021.
A word on machine ethics: A response to Jiang et al. (2021). *CoRR*, abs/2111.04158.
Jörg Tiedemann and Santhosh Thottingal. 2020. OPUSMT - building open translation services for the world.
In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation, pages 479–480, Lisboa, Portugal. European Association for Machine Translation.
Ruile Wang, Qi Yang, Peng Huang, Liyang Sai, and Yue Gong. 2019. The association between disgust sensitivity and negative attitudes toward homosexuality:
The mediating role of moral foundations. Frontiers in Psychology, 10.
Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, and Edouard Grave. 2020. CCNet:
Extracting high quality monolingual datasets from web crawl data. In *Proceedings of the Twelfth Language Resources and Evaluation Conference*, pages 4003–4012, Marseille, France. European Language Resources Association.
Adina Williams, Nikita Nangia, and Samuel Bowman.
2018. A broad-coverage challenge corpus for sentence understanding through inference. In *Proceedings of the 2018 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.
Da Yin, Hritik Bansal, Masoud Monajatipoor, Liunian Harold Li, and Kai-Wei Chang. 2022. GeoMLAMA: Geo-diverse commonsense probing on multilingual pre-trained language models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2039–2055, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Wei Zhao, Steffen Eger, Johannes Bjerva, and Isabelle Augenstein. 2021. Inducing language-agnostic multilingual representations. In Proceedings of *SEM
2021: The Tenth Joint Conference on Lexical and Computational Semantics, pages 229–240, Online.
Association for Computational Linguistics.
Caleb Ziems, Jane Yu, Yi-Chia Wang, Alon Halevy, and Diyi Yang. 2022. The moral integrity corpus: A
benchmark for ethical dialogue systems. In *Proceedings of the 60th Annual Meeting of the Association* for Computational Linguistics (Volume 1: Long Papers), pages 3755–3773, Dublin, Ireland. Association for Computational Linguistics.
## A Details Of Models Used
Table 5 lists the models we tuned and evaluated with their exact names, sizes, objectives and data.
The models are all of a similar size, although data size varies by up to one order of magnitude between monolingual models. XLM-R has much larger data in total, but data size for individual languages is more comparable to the other models.
The data domains vary but overlap (Web, Wiki, News). XLM-R and RobeCzech do not use next sentence prediction as part of their training objective. However, we believe these differences play a negligible role in the context of our work.
## B Machine Translation Quality
Machine translation is used to translate the templated sentences from English into Arabic, Czech, German and Chinese. For Arabic and Chinese, we use Google Translate. The sentences are short and grammatically very simple.
For translation into Czech, we use CUBBITT
(Popel et al., 2020), a machine translation system that scored in the first cluster in WMT evaluation campaigns 2019–2021. For translation into German, we use the WMT21 submission of the University of Edinburgh (Chen et al., 2021). To validate our choice of machine translation systems, we estimate the translation quality using the referencefree version of the COMET score (Rei et al., 2020)
(model wmt21-comet-qe-mqm) on the 2.7k generated questions.
To train the S-BERT models, we use the TRANSLATE-TRAIN part of the XNLI dataset that is distributed with the dataset (without specifying what translation system was used). For translation into Czech, we use CUBBITT again. To ensure the translation quality is comparable, we use the same evaluation metric as in the previous case on 5k randomly sampled sentences.
## C **Correlations In Existing (Parallel Data)** S-Bert
Table 8 shows the correlations of languages within the pre-existing multilingual S-BERT, trained with parallel data. The correlations within this model are extremely high, considerably higher than that of any one model with the user study.
## D Sentence-Bert Tuning Procedure
We follow the training script provided by Reimers and Gurevych (2019) in the sentence-tranformers repository. As training data, we use the complete MNLI (Williams et al., 2018; 433k examples) in the five respective languages. The dev split from the STS benchmark (Cer et al., 2017; 1500 examples) serves as development data. We also machine translate this into the target languages. The loss function is Multiple Negatives Ranking Loss (Henderson et al., 2017), which benefits from larger batch sizes. We use sentence-transformers version 2.2.0. Table 9 lists further training parameters.
## E Computational Resources
In addition to the six models used for further experiments, we trained five XLM-R with singlelanguage portions of data. Each of the monolingual models, as well as the XLM-R versions tuned with one part of the data, took around 0.6 hours to train.
Tuning XLM-R with data in all five languages accordingly took around three hours. S-BERT tuning was done on one Tesla V100-SXM3 GPU, with 32 GB RAM, at a time. We also trained one version of XLM-R on English data with a smaller batch size on an NVIDIA GeForce GTX 1080 GPU with 12 GB RAM. In all other experiments, the language models were used in inference mode only, and they were mostly run on the CPU.
## F Variance In Moraldirection **Scores**
In this section we discuss another aspect of MORALDIRECTION scores in multilingual versus monolingual models: How much they vary between different languages for each statement. For instance, if the variance is smaller in the multilingual model, this would mean that the multilingual model applies more similar judgements across languages.
To quantify this, we calculate the score variance for each of the basic verbs from Schramowski et al.
(2022) over the five monolingual models, as well as over the five portions of the multilingual model.
We furthermore grouped the verbs into "positive" and "negative", depending on whether their mean score from the multilingual model is greater or lower than zero. This results in 35 positive and 29 negative verbs. Figure 4 shows box-plots of
Lng Name Params Objective Tokenizer Data size Domain
ar aubmindlab/bert-base-arabertv02
(Antoun et al., 2020)
110M MLM+NSP SP, 60k 24 GB Wiki, News
cs ufal/robeczech-base (Straka et al., 2021) 125M MLM BPE, 52k 80 GB News, Wiki,
Web
de deepset/gbert-base (Chan et al., 2020) 110M MLM+NSP WP, 31k 136 GB Web, Wiki,
Legal
en bert-base-cased (Devlin et al., 2019) 110M MLM+NSP WP, 30k 16 GB Books, Wiki
zh bert-base-chinese (Devlin et al., 2019) 110M MLM+NSP WP, 21k ? Wiki
- xlm-roberta-base (Conneau et al., 2020) 125M MLM SP, 250k 2.5TB Web
Table 5: The monolingual pre-trained language models used. We tuned each model with the S-BERT framework before using it for our experiments. Objectives: MLM = masked language modelling, NSP = next sentence prediction, Tokenization: WP = WordPiece, SP = SentecePiece, unigram model.
Lng Model COMET
![14_image_1.png](14_image_1.png) ar Google Translate .1163
![14_image_2.png](14_image_2.png)
cs CUBBITT .1212 de UEdin WMT21 .1191 zh Google Translate .1111
![14_image_5.png](14_image_5.png)
![14_image_0.png](14_image_0.png)
Table 8: In-model correlation of scores on the user study
![14_image_3.png](14_image_3.png)
questions, within sentence-transformers/xlm-r100langs-bert-base-nli-mean-tokens.
![14_image_4.png](14_image_4.png)
the variance for those groups. Overall, variances are similar for monolingual and multilingual models. The positive verbs have a lower variance in the multilingual than in the monolingual models.
However, the opposite is true for the group of negative verbs, averaging out to very similar variances overall. Therefore, analysing variances does not lead us to conclusions about differing behaviour of monolingual versus multilingual models.
## G More Examples Morald**Imension** For Verbs
Additional examples to Figure 1 are shown in Figure 5.
## H Opensubtitles Filtering Details
Figure 6 shows the statistical correlation of the MORALDIRECTION scores on the OpenSubtitles dataset, evaluated for the German-English text
![15_image_0.png](15_image_0.png)
pairs. The high Pearson correlation values give further evidence for a strong correlation of the compared scores and the plausibility of this experiment.
As observed before with Section 3, evaluating on the multilingual XLM-R model strengthens the correlation of the MORALDIRECTION.
Initially, the most "controversial" sentence pairs—i.e., ones with extremely different MORALDIRECTION scores—in the OpenSubtitles data seemed to be due to mistranslated or misaligned subtitles. In order to exclude these cases, we applied filtering using dual cross-entropy score
(Junczys-Dowmunt, 2018) based on OpusMT
models (Tiedemann and Thottingal, 2020). For German-English, the correlation of the moral score delta and the translation score pre-filtering was -0.28 for XLM-R, -0.19 for the monolingual models, underlining that translation quality was a frequent issue. After filtering this dropped to
-0.10 and -0.05. The score deltas are normally distributed.
## I Opensubtitles: Further Czech-English Examples
Table 10 shows further examples from the CzechEnglish OpenSubtitles corpus. Several of them contain verbs with prefix o- which seems to confuse the multilingual model.
## J Rephrased Mfq Questions
The first 15 questions of MFQ have the format,
"When you decide whether something is right or wrong, to what extent are the following considerations relevant to your thinking? Please rate each statement using this scale: - not at all relevant (This consideration has nothing to do with my judgments of right and wrong) - not very relevant - slightly relevant - somewhat relevant - very relevant - extremely relevant (This is one of the most important factors when I judge right and wrong): Whether or not someone suffered emotionally ". We rephrase this to "Someone suffered emotionally."
Table 11 lists the English MFQ questions that we rephrased along with the sign of the multiplier applied (either -1 or +1).
## K **Role Of The "Catch" Questions In Mfq**
The MFQ contains two catch questions, which are designed to have an obvious, uncontroversial answer. For human respondents, their purpose is to filter out people who are not paying attention. For the language models, they may indeed be informative as well. In English, these questions are:
"Someone was good at math." and "It is better to do good than to do bad." For the first, we would expect the answer to be 0—this should be a perfectly neutral statement in a moral sense. For the other, we expect an answer at least close to the maximum score, since "doing good" is trivially better than
"doing bad".
The English, Chinese, and Czech models do give scores close to 0 for the maths question. In Arabic, our monolingual model assigns a slight negative score, while XLM-R gives a moderately positive score. In German, both models give a moderately positive score, likely because the chosen translation
"Jemand zeigt in Mathematik gute Leistungen" contains the somewhat positively connotated "Leistungen" (*performance, accomplishments*, etc.). The second catch question gets anything from fairly negative (-0.55), to neutral, to slightly positive scores, which again seems to fit with an overreliance on lexical cues. This behaviour shows again that while the models do capture the "moral dimension" to some degree, they have significant weaknesses, particularly with respect to the compositional meanings of longer phrases. In a real survey, they may not even have been considered
"serious" respondents.
![16_image_0.png](16_image_0.png)
![16_image_1.png](16_image_1.png)
| monoling | XLM-R | | | | |
|-------------------------------|---------------------------------------|-------|-------|-------|-------|
| cs | en | cs | en | cs | en |
| To je opravdu štedré. | It's very generous. | 0.50 | 0.88 | | |
| ˇ | -0.46 | 1.00 | | | |
| Tvá neteˇr me velmi ohromila. | I was very impressed with your niece. | 0.34 | 0.42 | | |
| ˇ | -0.58 | 0.84 | | | |
| Ohrožuje vaši budoucnost. | He threatens your future. | -0.88 | -0.70 | 0.42 | -0.56 |
| Políbila jsi ho. | You kissed him. | 0.52 | 0.21 | 1.01 | -0.36 |
| Jste vrah! | You're a murderer! | -0.97 | -0.99 | 0.38 | -0.77 |
| Otrávils nás. | You poisoned us. | 0.37 | -0.82 | -0.62 | -0.63 |
![16_image_2.png](16_image_2.png)
| Original (after simplifying) | sign | Rephrased | sign |
|-------------------------------------------------------|---------------------------------------------------|---------------------------------------------------------|--------|
| Someone showed a lack of respect for authority. | - | Someone showed respect for authority. | + |
| Someone showed a lack of loyalty. | - | Someone showed loyalty. | + |
| People should not do things that are disgusting, | + | People should do things that are disgusting, if no | - |
| even if no one is harmed. | one is harmed. | | |
| One of the worst things a person could do is hurt a | + | One of the best things a person could do is hurt a | - |
| defenseless animal. | defenseless animal. | | |
| I would call some acts wrong on the grounds that | + | I would call some acts right on the grounds that | + |
| they are unnatural. | they are natural. | | |
| It can never be right to kill a human being. | + | It can be right to kill a human being. | - |
| I think it's morally wrong that rich children inherit | + | I think it's morally right that rich children inherit a | - |
| a lot of money while poor children inherit nothing. | lot of money while poor children inherit nothing. | | |
| If I were a soldier and disagreed with my commanding officer's orders, I would obey anyway because that is my duty. | + | If I were a soldier and disagreed with my commanding officer's orders, I would disobey. | - |
Table 11: Rephrased MFQ statements in English. Unchanged statements are omitted from this table.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Left blank.
A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Left blank. |
nguyen-etal-2023-retrieving | Retrieving Relevant Context to Align Representations for Cross-lingual Event Detection | https://aclanthology.org/2023.findings-acl.135 | We study the problem of cross-lingual transfer learning for event detection (ED) where models trained on a source language are expected to perform well on data for a new target language. Among a few recent works for this problem, the main approaches involve representation matching (e.g., adversarial training) that aims to eliminate language-specific features from the representations to achieve the language-invariant representations. However, due to the mix of language-specific features with event-discriminative context, representation matching methods might also remove important features for event prediction, thus hindering the performance for ED. To address this issue, we introduce a novel approach for cross-lingual ED where representations are augmented with additional context (i.e., not eliminating) to bridge the gap between languages while enriching the contextual information to facilitate ED. At the core of our method involves a retrieval model that retrieves relevant sentences in the target language for an input sentence to compute augmentation representations. Experiments on three languages demonstrate the state-of-the-art performance of our model for cross-lingual ED. | # Retrieving Relevant Context To Align Representations For Cross-Lingual Event Detection
Chien Van Nguyen1, Linh Van Ngo2**, and Thien Huu Nguyen**3 1 VinAI Research, Vietnam 2 Hanoi University of Science and Technology, Hanoi, Vietnam 3 Department of Computer Science, University of Oregon, Eugene, OR, USA
[email protected], [email protected], [email protected]
## Abstract
We study the problem of cross-lingual transfer learning for event detection (ED) where models trained on a source language are expected to perform well on data for a new target language. Among a few recent works for this problem, the main approaches involve representation matching (e.g., adversarial training)
that aims to eliminate language-specific features from the representations to achieve the language-invariant representations. However, due to the mix of language-specific features with event-discriminative context, representation matching methods might also remove important features for event prediction, thus hindering the performance for ED. To address this issue, we introduce a novel approach for cross-lingual ED where representations are augmented with additional context (i.e., not eliminating) to bridge the gap between languages while enriching the contextual information to facilitate ED. At the core of our method involves a retrieval model that retrieves relevant sentences in the target language for an input sentence to compute augmentation representations. Experiments on three languages demonstrate the state-of-the-art performance of our model for cross-lingual ED.
## 1 Introduction
As one of the core tasks in Information Extraction (IE), the goal of Event Detection (ED) is to identify and classify the word(s) that most clearly evoke events in text (called event triggers). For instance, in the sentence "*He was* **fired** *from the* corporation yesterday.", an ED system needs to predict "*fired*" as an event trigger of the type *Attack*.
Due to its applications, ED has been well studied over the last decade, featuring deep learning as the most recent approach with state-of-the-art performance (Nguyen and Grishman, 2015; Chen et al.,
2015; Nguyen et al., 2016; Liu et al., 2017; Lu and Nguyen, 2018; Lin et al., 2020).
However, despite intensive studies, most prior work has focused on monolingual learning settings for ED where models are trained and evaluated on labeled data of the same languages (Nguyen and Grishman, 2018; Wadden et al., 2019; Lai et al.,
2020; Yang et al., 2019; Ngo et al., 2020; Liu et al.,
2020; Nguyen et al., 2021a). As such, to extend current ED models to another language, monolingual learning will require new annotated data to train the models, which can be very expensive to obtain for different languages. To this end, there has been a growing interest in cross-lingual transfer learning for ED where models trained on a source language are directly applied to data of a new target language
(M'hamdi et al., 2019). In this way, labeled data from high-resource languages (e.g., English) can be leveraged to develop effective ED models for other languages (e.g., low-resource ones). In this work, we focus on zero-shot cross-lingual transfer learning for ED to avoid the need for labeled data in the target languages, thus enabling fast adaptation of ED models to multiple languages.
A key strategy for cross-lingual transfer learning is to align input text representations for the source and target languages to facilitate cross-lingual extraction of events. As such, prior work on crosslingual ED has explored multilingual word embeddings (e.g., MUSE) (Joulin et al., 2018; Liu et al.,
2019) or recent multilingual pre-trained language models (e.g., mBERT) (M'hamdi et al., 2019) to represent source- and target-language texts in the same space. Recently, state-of-the-art crosslingual ED methods have leveraged unlabeled data in the target language with representation matching frameworks to further align text representations for the source and target languages (Nguyen et al.,
2021b). Given two sentences in the source and target languages, these methods aim to encode the two sentences to obtain representation vectors for language-universal objects, e.g., sentences, event types, universal dependency relations or parts of speech (Nguyen et al., 2021b). Afterward, the representations of the same language-universal objects
(computed with the source or language data) are regulated to be similar to each other to improve the alignment between the source and target languages for cross-lingual ED (e.g., with adversarial training to fool the language discriminators).
As such, to achieve representation similarity between languages, previous representation matching methods for ED will need to filter information/features that are specific to each language in the representations (Nguyen et al., 2021b). However, as the representations for each language are computed from input sentences, the languagespecific information might involve/mix with important contextual structures, which are necessary to reveal event types in the representations. Consequently, removing language-specific information might also eliminate important discriminative context features, thus limiting the performance for ED
models. To address this issue, our work explores a novel approach for alignment of language representations for cross-lingual ED that avoids direct similarity regularization and removal of important context information from the representation vectors. Instead, our approach seeks to add relevant context information into original input representations for the source and target language texts to make them closer for effective cross-lingual ED. In particular, starting with the representation vectors S and T to perform ED in the source and target languages (respectively), we aim to induce additional context representations A(S) and A(T) that will be added into S and T (i.e., leading to the augmented representations S + A(S) and T + A(T))
to achieve greater similarity between representations for the source and target languages. One the one hand, the additional representations A(S) and A(T) will be obtained over sentences in the target language to bias the prediction representations toward the target space and enhance representation alignment for cross-lingual ED. On the other hand, we will leverage external sentences with relevant/related contexts to the original representations S and T to compute the augmented representations A(S) and A(T). As such, with enriched context information, we expect that the augmented representations S + A(S) and T + A(T) will facilitate the prediction of event types to boost performance for cross-lingual ED. Note that this representation augmentation with relevant context is not possible in previous cross-lingual transfer learning methods for ED, thus highlighting the advantage of our proposed approach for cross-lingual ED in this work.
To implement the representation augmentation idea for cross-lingual ED, we introduce a "retrievethen-classify" framework with two major steps. In the first step of retrieval, given an input sentence, our model first retrieves relevant/related sentences from an unlabeled dataset (e.g., focusing on sentences with similar event types). Next, the retrieved sentences will be encoded and their representations will be injected into the representation for the input sentence to perform ED. In our method, the unlabeled dataset will be taken from the target language
(for input sentences from both the source and target languages) to shift the augmented representations to the target language space, thus implicitly bridging the gap between representations for different languages for ED. In addition, to better customize the context retrieval for our cross-lingual ED task, the retrieval model will be jointly trained with the ED model in an end-to-end fashion to encourage their interactions/feedback for better overall performance. Our framework also introduces a novel regularization mechanism to promote the shared awareness of relevant context sentences for an input sentence between retrieval and ED models to further improve the induced representations for cross-lingual ED. Finally, we conduct extensive experiments on the multilingual ACE 2005 dataset for ED (with three languages: English, Chinese, and Arabic), demonstrating the state-of-the-art performance of the proposed method over different language pairs for cross-lingual transfer learning of ED. To our knowledge, this is the first work on retrieval-based models for cross-lingual ED.
## 2 Model
We follow prior work (M'hamdi et al., 2019) to formalize the cross-lingual transfer learning (CLTL)
task for ED as a sequence labeling problem. Given an input sentence W = w1, w2*, . . . , w*n with n words, we need to assign a label yi for each word wi ∈ W using the BIO annotation schema to capture event triggers and their types in W. In CLTL,
the input sentence W belongs to the source language in the training time while sentences in the new target language are leveraged for evaluation in test time. Similar to recent methods on CLTL
for ED (Nguyen et al., 2021b), our model assumes an unlabeled dataset U = {U1, U2*, . . . , U*m} that 2158 contains m sentences in the target language (Utis the t-th sentence in U).
Given the input sentence W, the first step in our model involves retrieving a set of relevant sentences R in the unlabeled dataset U (i.e., R ⊂ U)
to provide augmented context information for W for cross-lingual ED. Note that the unlabeled set U will be used to retrieve sentences for input texts in both training and testing phases. The representations for the retrieved sentences in R will later be integrated into the representation vectors for the words in W to perform sequence labeling for ED.
The benefit of this representation augmentation approach is twofold. First, as the representations for the retrieved sentences R are computed over the target language sentences, during the training time with the source-language input sentence W,
the representation augmentation will shift the representations for the words wi ∈ W closer to the target language space. This helps to bridge the gap between the source- and target-language representation spaces that enables the training of ED
models over source-language data to better generalize to data in the target language (i.e., cross-lingual generalization). Second, during the test time with the target language, incorporating context information from the retrieved relevant sentences R will enrich/strengthen the representations for the words in the original input sentence W, thus facilitating the predictions of labels to boost performance for cross-lingual ED. Our following sections will describe the retrieval and ED models in our method.
## 2.1 Relevant Context Retrieval
To retrieve relevant sentences for W in U for ED,
our intuition is to identify sentences in U that express the same event types using similar context patterns as in W. We expect that such relevant sentences can strengthen the necessary context to predict event triggers in W, and improve the targetlanguage orientation of the representations to boost cross-lingual performance. To this end, our retrieval model first aims to compute a representation vector for W and each sentence Ut ∈ U to capture their event contexts. For W, we append the special token [CLS] to the beginning and send it into a multilingual pre-trained language model to learn representations for each token. In particular, we leverage miniLM (Wang et al., 2020), a multilingual language model distilled from the pre-trained model XLM-RoBERTa (large version) (Conneau et al., 2020), to obtain representation vectors for the words in W in our retrieval component. Compared to XLM-RoBERTa with 24 transformer layers and 1024 hidden dimensions, the multilingual miniLM version only includes 6 transformer layers with 384 hidden dimensions that can make our retrieval component more efficient for representation computation. As such, the representation vector for [CLS] in the last layer of miniLM will be used as the representation vector W for W. Similarly, we also compute the representation vector Ut for each sentence Utin the unlabeled set U with miniLM. Here, we employ two separate versions of the pre-trained miniLM model to encode the input sentence W and the unlabeled sentences U in the target language (called miniLMW and miniLMU
respectively), thus enabling the flexibility to capture context information for each type of data, i.e.,
W = miniLMW (W) and Ut = miniLMU (Ut).
Given the representation vectors W and Ut, we compute a similarity score between W and each unlabeled sentence Ut ∈ U using the cosine similarity: sim(*W ,U*t) = W · Ut/||W*||||*Ut||.
Afterward, we select the top K sentences in U
that have the highest similarities sim with W to serve as the retrieved set R of relevant sentences:
R = {R1, R2*, . . . , R*K} (i.e., Rk is the k-th sentence in R ⊂ U and K is a hyper-parameter).
Warm-up Training: The computed representation vectors W and Ut so far are generic and not customized for our goals of same event types and similar context. To this end, we propose to fine-tune the language models miniLMW and miniLMU to adapt their encoding mechanisms to the retrieval problem for ED using contrastive learning (Khosla et al., 2020). Given a sentence W in the training dataset L of the source language, let TW be the set of event types that are presented in W.
We focus on the sentences W with at least one event in this contrastive learning process (e.g.,
|TW | > 0). As such, to obtain a positive example, we identify another sentence P ∈ L that involves at least one event type in TW (i.e., containing the same event types). For negative examples, we leverage a set of sentences N(W) in L that do not express any event type in TW . In the implementation, we compute N(W) for each sentence using the other sentences in the same minibatch. As such, our contrastive loss to fine-tune the miniLM models for event retrieval is formed via:
L*const* = − log exp(sim(*W ,P*))
PN∈N(W)
exp(sim(*W ,N*)) where W = miniLMW (W), P = miniLMU (P), and N = miniLMU (N). Note that this contrastive training process is only used as a warm-up step to prepare our retrieval model event types and context in our task; we will later jointly train the retrieval model with the ED model to leverage the training signals for ED to improve the retrieval model.
## 2.2 Event Detection Model
To solve the cross-lingual ED problem for the input sentence W, our model aims to perform sequence labeling over W conditioning on the retrieved relevant sentence R ⊂ U. For convenience, let Rk be the representation vector for Rk ∈ R induced from miniLMU . The similarity score between W and Rk is thus sim(W , Rk).
Also, let Rk = rk,1, rk,2, . . . , rk,Ik be the sequence of words for Rk (i.e., Ik is the length of Rk and rk,j is the j-th word in Rk).
To this end, our ED model first feeds W
(prepended with [CLS]) into the multilingual pretrained language model XLM-RoBERTa (base version) (Conneau et al., 2020), called XLMR, to obtain representations for the words wi ∈ W.
In particular, using the hidden vectors in the last transformer layer of XLMR, we leverage the average of the hidden vectors for the subtokens of wi ∈ W to compute the representation vector hi for wi, denoted by h1, h2*, . . . ,* hn =
XLMR(w1, w2*, . . . , w*n). In a typical sequence labeling model, the representation hi can be sent into a feed-forward network to produce a distribution over possible BIO tags for wi for ED. In our model, to augment the representation hi for wi with the retrieved sentence context in R for cross-lingual ED,
we further seek to incorporate context representations for the words rk,j in the sentences Rk ∈ R to improve hi for cross-lingual prediction. As such, we also feed each sentence Rk into the multilingual model XLMR to generate the representation vectors rk,j for the words rk,j ∈ Rk, following the same procedure for hi: rk,1, rk,2, . . . , rk,Ik =
XLMR(rk,1, rk,2, . . . , rk,Ik
).
In the next step, using the attention mechanism, we quantify the contribution of each representation vector rk,j for the augmentation of wi for W with the attention weight a*i,k,j* . In particular, our motivation for a*i,k,j* is that the attention weight of rk,j for wi needs to capture their context similarity within their corresponding sentences Rk and W. In addition, the attention weight a*i,k,j* should also condition on the retrieval similarity between the corresponding retrieved sentence Rk and the input sentence W (i.e., sim(W , Rk)). The rationale is that the words in a retrieved sentence Rk with higher retrieval similarity score with W
should be more preferable than the words in other sentences in R for the context augmentation of wi (i.e., a retrieval bias). To this end, the attention weight ai,k,j of rk,j for wiis computed via: ai,k,j =b*i,k,j* PK
k′=1 PIk′
j′=1 bi,k′,j′
where b*i,k,j* =
exp(wiArk,j +αsim(W , Rk)). Here, α is a tradeoff parameter between context and retrieval similarities and A is the learnable matrix. Afterward, the augmentation context representation ai from retrieved sentences R for wiis obtained via the weighted sum: ai =PK
k=1 PIk j=1 ai,k,jrk,j .
Finally, the representation vector for event prediction for wiis computed by: vi = wi + ai. vi is then fed into a two-layer feed-forward network F F to compute a score vector to capture the possibilities for wito receive the possible BIO labels for ED: pi = F F(vi). Next, the score vectors pi are sent into a Conditional Random Field (CRF) layer to encode the tag dependencies and compute the conditional probability P(·|*W, R*) for the possible label sequences for W. The negative log-likelihood for the golden label sequence Y∗is then used to train the model: Lseq = − log P(Y∗|*W, R*).
In the test time, given an input sentence W in the target language, we also compute the augmentation representations ai for the words in W using the same unlabeled set U. Viterbi decoding with P(·|*W, R*) is then employed to predict the label sequence for W for ED. As such, the augmentation representations ai are computed over the same unlabeled set U of the target language for both training and testing phases, thus shifting the prediction representations vitoward the target language space to achieve better cross-lingual alignment for ED.
Joint Training: The inclusion of the retrieval similarity score sim(W , Rk)) (computed from miniLMW and miniLMU ) in the attention weight a*i,k,j* for the ED model implies that the training signals for ED in Lseq are also back-propagated to the retrieval model, thus better adapting the retrieval model to our problem of similar event context retrieval. However, this back-propagation also entails updating the miniLMW and miniLMU models in the retrieval component after each mini-batch in the training process. As such, the retrieval model will also need to recompute the representations Ut for each unlabeled sentence Ut ∈ U after each training step, which can be very expensive and slow down the training. To this end, instead of updating the retrieval model after each training step, in the implementation, we only update miniLMW and miniLMU after every Q training steps/mini-batches
(Q is a hyper-parameter). In this way, although we cannot leverage the latest updates for the retrieval component, our model can maintain the synchronization between miniLMW and miniLMU , reduce training time significantly, and still retrieve relevant sentences from U for cross-lingual ED.
## 2.3 Similarity Regularization
In our model, the retrieved sentences Rk ∈ R are expected to be relevant/similar to the input sentence W according to the retrieval model with miniMLW and miniMLU . As such, to achieve a consistency between the retrieval model and the ED model, we argue that the retrieved sentences Rk should also be similar to W according to the ED model with the XLMR model for sentence encoding. Consequently, we propose to explicitly encourage the the similarities between the representations for Rk and W as computed by the XLMR model for ED, serving as a regularization to improve representation learning in our model. In particular, when W and Rk ∈ R are encoded by XLMR for the ED model, we also use the hidden vectors for the [CLS] token in the last transformer layer of XLMR represent these sentences, leading to the representation vectors W
XLMR and Rk XLMR for W and Rk respectively.
Afterward, we enforce the XLMR-based similarity between W and Rk by minimizing the negative cosine similarity between W
XLMR and Rk XLMR:
Lreg = −PK
k=1 sim(W
XLMR, Rk XLMR). The overall loss function to train our model is thus:
L = Lseq + λLreg (λ is a trade-off parameter).
During the training time, as W and Rk belong to the source and target languages respectively, the minimization of Lreg also serves to align the representations for the source and target languages, thus similar to the representation matching frameworks in prior work for cross-lingual ED (Nguyen et al., 2021b). However, a key difference is that previous representation matching methods tend to match randomly chosen sentences in the source and target languages that might involve different event contexts. To align the source- and target-language representations, such previous methods might thus learn to exclude those event contexts from the representations, causing poorer discriminative features for ED. In contrast, our cross-lingual similarity regularization with Lreg is performed over the sentences W and Rk with similar event context (due to the retrieval component). As such, our model might be able to learn to only eliminate language-specific features that do not overlap with the common event context features. The event context information is thus preserved to best perform cross-lingual ED.
## 3 Experiments
Datasets and Hyper-parameters: We evaluate our cross-lingual retrieval-based model for ED
(called CLRED) on the multilingual dataset ACE
2005 (Walker et al., 2006), following previous work (M'hamdi et al., 2019; Nguyen et al., 2021b).
ACE 2005 provides event trigger annotations for 33 event types in documents of three languages:
English (EN), Chinese (ZH) and Arabic (AR). To achieve a fair comparison, we use the exact data split and preprocessing provided by previous work
(Nguyen et al., 2021b). The data split includes training, development, and test data for each of the three languages. To perform cross-lingual transfer learning evaluation, we will consider six possible pairs of languages chosen from English, Chinese, and Arabic. For each language pair, we train the models on the training data of one language (i.e., the source language) and evaluate the models on the test data of the other language (i.e., the target language). Similar to previous work (Nguyen et al.,
2021b), the unlabeled dataset U in our experiments is obtained from the training data of the target language where the labels are completely removed.
To tune the hyper-parameters for our model, we use the performance over the development data of the source languages. In particular, the selected hyper-parameters from our tuning process involve:
1e-5 for the learning rate with the AdamW optimizer, 16 for the mini-batch size, 300 dimensions for the hidden layers of the feed-forward network F F, K = 2 for the number of retrieved sentences in R, Q = 30 for the number of steps to update miniLMW and miniLMU , α = 1 for the trade-off parameter between context and retrieval similarities in the attention weights, and λ = 0.1 for the trade-off parameter in the overall loss function. Finally, we utilize the base version of XLMRoBERTa (Conneau et al., 2020) with 768 dimensions for the hidden vectors for our ED model.
Baselines: We consider two groups of baselines for our cross-lingual model CLRED. The first group concerns previous methods that only leverage training data in the source language for learning (i.e., no unlabeled data in the target language). The stateof-the-art model in this group involves the BERTCRF model in (M'hamdi et al., 2019) that applies a CRF layer on top of multilingual BERT (mBERT)
(Devlin et al., 2019). To make it fair, we also report the performance of XLMR-CRF that replaces mBERT1in BERT-CRF with our XLM-RoBERTa model. Note that XLRM-CRF is equivalant to our CLRED model when the retrieval component and augmentation context are excluded.
The second group of baselines additionally uses the unlabeled dataset U in the target language to train cross-lingual models for ED. A state-of-theart model in this group features the BERT-CRFCCCAR model in (Nguyen et al., 2021b) that utilizes unlabeled data to match representations for universal word categories and event types computed from BERT-CRF. In the experiments, we also provide the performance of XLMR-CRF-CCCAR
that is similar to BERT-CRF-CCCAR, but replaces BERT with XLM-RoBERTa. To make it compatible, we obtain the original code and implementation for BERT-CRF-CCCAR from (Nguyen et al.,
2021b) to perform the replacement and evaluation.
In addition, we explore the language adversarial training (LAT) method to leverage unlabeled target data to induce language-universal representations for cross-lingual ED. In LAT, a base model for cross-lingual ED is also either BERT-CRF or XLMR-CRF. Further, a language discriminator is introduced to classify whether a representation vector is computed over a sentence in the source or target language (Chen et al., 2019; Huang et al., 2019; Keung et al., 2019). We follow the same implementation of LAT for cross-lingual ED in
(Nguyen et al., 2021b) that jointly trains the language discriminator with the sequence labeling model for ED. The Gradient Reversal Layer (GRL)
(Ganin and Lempitsky, 2015) is employed to fool the discriminator and eliminate language-specific features from the representations. To this end, we report the performance of LAT for both BERT-CRF
and XLMR-CRF, leading to BERT-CRF-LAT and XLMR-CRF-LAT in our experiments.
Motivated by prior work on cross-lingual learning (Pfeiffer et al., 2020), we also evaluate the language model fine-tuning (LMFT) method where a multilingual pre-trained model is first fine-tuned on the unlabeled data U of the target language using mask language modeling (Devlin et al., 2019).
The fine-tuned model is then directly employed as the encoder in the base sequence labeling model
(e.g., XLMR-CRF) with CRF for cross-lingual ED.
Considering both mBERT and XLM-RoBERTa, we also have two versions for this LMFT method, i.e., BERT-CRF-LMFT and XLMR-CRF-LMFT.
Here, the *huggingface* library is utilized to finetune mBERT and XLM-RoBERTa on unlabeled target data for 100, 000 steps.
Finally, we report the performance of the recent model OACLED (Guzman et al., 2022) that has the best reported performance for cross-lingual ED so far. OACLED is also based on the idea of LAT;
however, it introduces a a new component to leverage optimal transport and XLMR to perform data selection for the language discriminator.
Comparison: Table 1 presents the cross-lingual performance for six different language pairs.
The first observation is that the XLMR-based models are significantly better than their corresponding BERT-based models across most language pairs and models (e.g., *-CRF and *-CRFCCCAR). This demonstrates the advantages of the multilingual language model XLM-RoBERTa over multilingual BERT for cross-lingual ED. Second, comparing the models with and without unlabeled target-language data, we find that the *-CRFCCCAR and OACLED models substantially outperform the *-CRF models regardless of the multilingual pre-trained models over different language pairs. The *-CRF-LAT and *-CRF-LMFT models are also better than the *-CRF models in most situations (except for some language pairs). As such, it highlights the benefits of using unlabeled data in the target language to improve the languageuniversal representations and cross-lingual performance for ED if introduced appropriately. Most importantly, Table 1 shows that the proposed model CLRED achieves significantly better performance than all the baseline methods (with p < 0.01)
across different language pairs. The state-of-the-art performance of CLRED thus clearly demonstrates the advantages of our new retrieval-based approach with representation augmentation for cross-lingual transfer learning for ED.
Ablation Study: Compared to the base model
Model Langauge Pairs
Source EN EN ZH ZH AR AR
Target ZH AR EN AR EN ZH
BERT-CRF 68.5 30.9 37.5 20.1 40.1 58.8
BERT-CRF-LAT 70.0 33.5 41.2 20.3 37.2 55.6
BERT-CRF-LMFT 69.4 33.4 42.9 20.0 36.5 56.3
BERT-CRF-CCCAR 72.1 42.7 45.8 20.7 40.7 59.8
XLMR-CRF 70.5 43.5 41.7 32.8 45.4 61.8
XLMR-CRF-LAT 70.2 43.4 42.3 33.2 45.2 60.9
XLMR-CRF-LMFT 71.1 43.7 42.1 32.9 45.9 62.1
XLMR-CRF-CCCAR 74.4 44.1 49.5 34.3 46.3 62.9
OACLED 74.6 44.9 45.8 35.1 48.0 63.1
CLRED (ours) 76.6 46.4 50.8 39.2 49.2 **67.3**
# Model
Langauge Pairs
EN EN ZH ZH AR AR
ZH AR EN AR EN ZH
1 CLRED (full) 76.6 46.4 50.8 39.2 48.2 67.3 2 No retrieval 70.5 43.5 41.7 32.8 45.4 61.8
3 No sim(W , Rk) in a*i,k,j* 72.9 43.7 46.8 37.5 45.9 65.4
4 Not update miniLM∗ 74.0 44.6 47.4 37.8 45.6 64.3
5 Not update miniLMW 72.5 45.3 46.3 36.6 45.3 61.5 6 Not update miniLMU 73.5 45.7 47.7 38.0 46.2 66.3
7 No warm up 75.5 44.4 46.9 32.9 47.4 63.6
8 No Lreg 74.1 45.3 48.4 38.3 47.8 66.7
9 With unlabeled source 74.4 43.8 45.5 38.5 46.7 66.2
XLMR-CRF, the key distinction in our model involves the retrieval model. Table 2 studies the performance of the ablated/varied versions of the retrieval model in CLRED over the test sets of different language pairs. In particular, line 2 "No retrieval" completely removes the retrieval component from CLRED (i.e., XLMR-CRF with no augmentation representation ai). As the performance is significantly reduced, it demonstrates the benefit of the retrieval model for our CLRED
model. In line 3 with "No sim(W , Rk) in a*i,k,j*",
we do not include the retrieval similarity between the retrieved and input sentences in the attention weights a*i,k,j* for augmentation representation.
This model also implies that the retrieval and ED
models are disconnected and the retrieval components miniLMW and minLMU are freeze during the training process for the ED model. As such, the poorer performance of "No sim(W , Rk) in a*i,k,j*" in Table 2 clearly confirms the importance of sim(W , Rk) in the attention weights for CLRED.
Next, as we update the two retrieval models miniLMW and minLMU after every Q steps in the joint training, lines 4, 5, and 6 explore the variants where we fix the two models (line 4) or only update one of them (lines 5 and 6) during the training of ED. As can be seen, the degraded performance in lines 4, 5, and 6 highlight the necessity to update and synchronize miniLMW and minLMU to achieve the best performance for CLRED. In addition, line 7 "**No warm up**" and line 8 "No Lreg" demonstrate the benefits of our warm up step and XLMR-computed similarity regularization (respectively) for the retrieval model as removing any of them will lead to significant performance drops. Finally, in line 9, instead of using unlabeled data in the target language, the retrieval component retrieves relevant sentences from unlabeled data of the source language that is obtained by removing labels from the training data of the source language
(i.e., excluding the input sentence W) for our crosslingual learning setting. As can be seen, unlabeled data in the source language cannot guarantee the best cross-lingual performance for ED, thus testifying to the importance of using unlabeled sentences in the target language for cross-lingual ED.
![7_image_0.png](7_image_0.png)
Speed Evaluation: Given the retrieval component with representation computation for U with miniLM, we evaluate the running time for our model CLRED. Using the time for XLMR-CRF
as the baseline, Table 3 presents the training and inference time for the full model CLRED (averaged over six language pairs). For reference, we also report the time for the variant of CLRED where the retrieval model with miniLMW and miniLMU is fixed during the training of the ED model. Overall, the training time of our retrieval-based model is double that for the base model XLMR-CRF; however, our inference time is only increased by 1.18 times. Note that in practice, the FAISS open-source toolkit (Johnson et al., 2021) can be used to precompute and index the representations for the sentences in U. This will allow us to handle larger unlabeled set U and achieve efficient vector search.
| Model | Training | Inference |
|----------------------------|------------|-------------|
| XLMR-CRF | 1.00x | 1.00x |
| CLRED with fixed retrieval | 1.42x | 1.18x |
| CLRED (full) | 2.07x | 1.18x |
Analysis: To better understand the operation of CLRED, we analyze the examples in the test sets for the target languages that can be correctly predicted by CLRED, but cannot be recognized by the non-retrieval baseline XLMR-CRF. A key insight from our analysis is that XLMR-CRF tends to incorrectly recognize event types in the input texts of the target languages due to the ambiguity of context. CLRED can fix the errors in these cases as the retrieval component is able to return relevant sentences that contains the same correct event types as the inputs. As such, the augmentation representation from the retrieved sentences can strengthen the context information to produce correct type prediction. For instance, consider the language pair ZH→EN (i.e., Chinese is the source and English is the target) with the sentence "*Blasphemy is punishable by death under the Pakistan Penal Code.*" in the target language. XLMR-CRF incorrectly predicts "*death*" as an event trigger of type *Life:Die* while CLRED can correctly identify "*punishable*"
as an event trigger of type *Justice:Sentence*. This is understandable given that the two retrieved sentences from CLRED involves: "Big "snake head" Weng Jinshun sentenced to life imprisonment." and
"*Roman was sentenced to seven years in prison.*",
which clearly express *Justice:Sentence* events.
In addition, to illustrate the impact of augmentation representation from the retrieved targetlanguage sentences for CLRED, Figure 1 presents the t-SNE visualization for the representation vectors that are computed by XLMR-CRF and CLRED
to predict event types for the words in the sourceand target-language test data. As can be seen, the representations learned by XLMR-CRF for the source language examples are quite separate from those for the target language. In contrast, with augmentation representation, CLRED can better align representations for the source and target examples of the same event types, thus improving cross-lingual performance for ED.
## 4 Related Work
ED has been studied mostly for monolingual settings, involving feature-based models (Liao and Grishman, 2011; Li et al., 2013; Yang and Mitchell, 2016) and recent deep learning models (Nguyen and Grishman, 2015; Chen et al., 2015; Nguyen and Grishman, 2018; Man Duc Trong et al., 2020; Zhang et al., 2019; Lin et al., 2020; Pouran Ben Veyseh et al., 2021a,b). Cross-lingual transfer learning for ED has gained more interests recently where different resources are leveraged to project the representations for different languages into the same space, including bilingual dictionaries/parallel corpora (Muis et al., 2018; Liu et al.,
2019) and multilingual language models (M'hamdi et al., 2019; Ahmad et al., 2021; Majewska et al.,
2021). To further bridge the gap between the representations for cross-lingual ED, (Nguyen et al.,
2021b) explores adversarial training with language discriminators (Huang et al., 2019; Lange et al., 2020; He et al., 2020; Guzman et al., 2022) and representation matching of similar objects to remove language-specific features. We also note that these methods are motivated from domain adaptation methods that aim to avoid domain-specific features (Ganin and Lempitsky, 2015; Cicek and Soatto, 2019; Tang et al., 2020; Trung et al., 2022; Ngo et al., 2022). In contrast, our model introduces additional augmentation representations from retrieval to achieve language-universal representations.
## 5 Conclusion
We present a novel method for cross-lingual transfer learning for ED. Instead of removing languagespecific features, our model augments the representations for the input sentences with those from relevant sentences in the target language to align the representations for the source and target languages.
Our method involves a retrieval component to obtain relevant sentences that is jointly trained with the ED model. Our proposed method demonstrates the state-of-the-art cross-lingual performance over six different language pairs.
## Limitations
In this work we present a novel method based on representation augmentation to solve cross-lingual transfer learning for event detection (ED). Although our experiments demonstrate the effectiveness of the proposed method, there are still some limitations that can be improved in future work.
First, our current method only leverages sentencelevel context in input document to perform ED
over different languages. This might not be optimal as document-level context has been shown to be helpful for ED (Pouran Ben Veyseh et al.,
2021b) that can be explored in future research to improve our cross-lingual models. Second, the evaluation for our model is limited to only three popular languages (English, Chinese, and Arabic) that are supported by existing pre-trained language models, unlabeled data, and text processing tools. As such, it is unclear whether the method can be adapted to many other languages with limited access to such resources (e.g., low-resource languages). We believe this is an important direction that can be investigated in future work to advance our understanding for ED models. Finally, our method requires joint training with a retrieval model (based on multilingual pre-trained language models) that can impose additional computational costs (as shown in Table 3). Reducing necessary computational costs for our model is an important direction to make it more accessible for different applications and domains.
## Acknowledgement
This research has been supported by the Army Research Office (ARO) grant W911NF-21-1-0112, the NSF grant CNS-1747798 to the IUCRC Center for Big Learning, and the NSF grant \# 2239570.
This research is also supported in part by the Office of the Director of National Intelligence (ODNI),
Intelligence Advanced Research Projects Activity
(IARPA), via the HIATUS Program contract 202222072200003. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S.
Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
## References
Wasi Uddin Ahmad, Nanyun Peng, and Kai-Wei Chang.
2021. Gate: Graph attention transformer encoder for cross-lingual relation and event extraction. In Proceedings of the Association for the Advancement of Artificial Intelligence (AAAI).
Xilun Chen, Ahmed Hassan Awadallah, Hany Hassan, Wei Wang, and Claire Cardie. 2019. Multi-source
cross-lingual model transfer: Learning what to share.
In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 3098–
3112, Florence, Italy. Association for Computational Linguistics.
Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multipooling convolutional neural networks. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL).
Safa Cicek and Stefano Soatto. 2019. Unsupervised domain adaptation via regularized conditional alignment. In Proceedings of the International Conference on Computer Vision (ICCV).
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440–
8451, Online. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of the* North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT).
Yaroslav Ganin and Victor Lempitsky. 2015. Unsupervised domain adaptation by backpropagation. In *Proceedings of the International Conference on Machine* Learning (ICML).
Luis Nateras Guzman, Minh Van Nguyen, and Thien Nguyen. 2022. Cross-lingual event detection via optimized adversarial training. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 5588–5599, Seattle, United States. Association for Computational Linguistics.
Keqing He, Yuanmeng Yan, and Weiran Xu. 2020. Adversarial cross-lingual transfer learning for slot tagging of low-resource languages. In *Proceedings of* the International Joint Conference on Neural Networks (IJCNN).
Lifu Huang, Heng Ji, and Jonathan May. 2019. Crosslingual multi-level adversarial transfer to enhance low-resource name tagging. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3823–3833, Minneapolis, Minnesota. Association for Computational Linguistics.
Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2021.
Billion-scale similarity search with gpus. In *IEEE*
Transactions on Big Data.
Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Hervé Jégou, and Edouard Grave. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2979–2984, Brussels, Belgium.
Association for Computational Linguistics.
Phillip Keung, Yichao Lu, and Vikas Bhardwaj. 2019.
Adversarial learning with contextual embeddings for zero-resource cross-lingual classification and NER.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1355–
1360, Hong Kong, China. Association for Computational Linguistics.
Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. 2020. Supervised contrastive learning. In *Proceedings of the* Conference on Neural Information Processing Systems (NeurIPS).
Viet Dac Lai, Tuan Ngo Nguyen, and Thien Huu Nguyen. 2020. Event detection: Gate diversity and syntactic importance scores for graph convolution neural networks. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5405–5411, Online. Association for Computational Linguistics.
Lukas Lange, Anastasiia Iurshina, Heike Adel, and Jannik Strötgen. 2020. Adversarial alignment of multilingual models for extracting temporal expressions from text. *arXiv preprint arXiv:2005.09392*.
Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features.
In *Proceedings of the 51th Annual Meeting of the* Association for Computational Linguistics (ACL).
Shasha Liao and Ralph Grishman. 2011. Acquiring topic features to improve event extraction: in preselected and balanced collections. In Proceedings of the International Conference Recent Advances in Natural Language Processing (RANLP).
Ying Lin, Heng Ji, Fei Huang, and Lingfei Wu. 2020.
A joint neural model for information extraction with global features. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL).
Jian Liu, Yubo Chen, Kang Liu, Wei Bi, and Xiaojiang Liu. 2020. Event extraction as machine reading comprehension. In *Proceedings of the 2020 Conference* on Empirical Methods in Natural Language Processing (EMNLP).
Jian Liu, Yubo Chen, Kang Liu, and Jun Zhao. 2019.
Neural cross-lingual event detection with minimal parallel resources. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 738–748, Hong Kong, China. Association for Computational Linguistics.
Shulin Liu, Yubo Chen, Kang Liu, and Jun Zhao. 2017.
Exploiting argument information to improve event detection via supervised attention mechanisms. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics.
Weiyi Lu and Thien Huu Nguyen. 2018. Similar but not the same: Word sense disambiguation improves event detection via neural representation matching.
In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 4822–4828, Brussels, Belgium. Association for Computational Linguistics.
Olga Majewska, Ivan Vulic, Goran Glavaš, ´
Edoardo Maria Ponti, and Anna Korhonen.
2021. Verb knowledge injection for multilingual event processing. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP).
Hieu Man Duc Trong, Duc Trong Le, Amir Pouran Ben Veyseh, Thuat Nguyen, and Thien Huu Nguyen.
2020. Introducing a new dataset for event detection in cybersecurity texts. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5381–5390, Online. Association for Computational Linguistics.
Meryem M'hamdi, Marjorie Freedman, and Jonathan May. 2019. Contextualized cross-lingual event trigger extraction with minimal resources. In *Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)*, pages 656–665, Hong Kong, China. Association for Computational Linguistics.
Aldrian Obaja Muis, Naoki Otani, Nidhi Vyas, Ruochen Xu, Yiming Yang, Teruko Mitamura, and Eduard Hovy. 2018. Low-resource cross-lingual event type detection via distant supervision with minimal effort. In Proceedings of the 27th International Conference on Computational Linguistics (COLING).
Nghia Ngo, Bonan Min, and Thien Nguyen. 2022. Unsupervised domain adaptation for joint information extraction. In *Findings of the Association for Computational Linguistics: EMNLP 2022*, pages 5894–
5905, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Nghia Ngo, Tuan Ngo Nguyen, and Thien Huu Nguyen.
2020. Learning to select important context words for event detection. In *Proceedings of the 24th PacificAsia Conference on Knowledge Discovery and Data* Mining (PAKDD).
Minh Van Nguyen, Viet Lai, and Thien Huu Nguyen.
2021a. Cross-task instance representation interactions and label dependencies for joint information extraction with graph convolutional networks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 27–38, Online. Association for Computational Linguistics.
Minh Van Nguyen, Tuan Ngo Nguyen, Bonan Min, and Thien Huu Nguyen. 2021b. Crosslingual transfer learning for relation and event extraction via word category and class alignments. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5414–5426, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman. 2016. Joint event extraction via recurrent neural networks. In *Proceedings of the 2016 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 300–309, San Diego, California.
Association for Computational Linguistics.
Thien Huu Nguyen and Ralph Grishman. 2015. Event detection and domain adaptation with convolutional neural networks. In *Proceedings of the 53rd Annual* Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 365–371, Beijing, China. Association for Computational Linguistics.
Thien Huu Nguyen and Ralph Grishman. 2018. Graph convolutional networks with argument-aware pooling for event detection. In Proceedings of the Association for the Advancement of Artificial Intelligence (AAAI).
Jonas Pfeiffer, Ivan Vulic, Iryna Gurevych, and Se- ´
bastian Ruder. 2020. MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing (EMNLP),
pages 7654–7673, Online. Association for Computational Linguistics.
Amir Pouran Ben Veyseh, Viet Lai, Franck Dernoncourt, and Thien Huu Nguyen. 2021a. Unleash GPT2 power for event detection. In *Proceedings of the* 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6271–6282, Online. Association for Computational Linguistics.
Amir Pouran Ben Veyseh, Minh Van Nguyen, Nghia Ngo Trung, Bonan Min, and Thien Huu Nguyen.
2021b. Modeling document-level context for event detection via important context selection. In *Proceedings of the 2021 Conference on Empirical Methods* in Natural Language Processing, pages 5403–5413, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Hui Tang, Ke Chen, and Kui Jia. 2020. Unsupervised domain adaptation via structurally regularized deep clustering. In *Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR)*.
Nghia Ngo Trung, Linh Ngo Van, and Thien Huu Nguyen. 2022. Unsupervised domain adaptation for text classification via meta self-paced learning. In Proceedings of the 29th International Conference on Computational Linguistics, pages 4741–4752, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. 2019. Entity, relation, and event extraction with contextualized span representations.
In *Proceedings of the 2019 Conference on Empirical* Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).
Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. Ace 2005 multilingual training corpus. In Technical report, Linguistic Data Consortium.
Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. Minilm: Deep selfattention distillation for task-agnostic compression of pre-trained transformers. In *Proceedings of the Conference on Neural Information Processing Systems*
(NeurIPS).
Bishan Yang and Tom M. Mitchell. 2016. Joint extraction of events and entities within a document context.
In *Proceedings of the 2016 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies
(NAACL-HLT).
Sen Yang, Dawei Feng, Linbo Qiao, Zhigang Kan, and Dongsheng Li. 2019. Exploring pre-trained language models for event extraction and generation. In *Proceedings of the Annual Meeting of the Association* for Computational Linguistics (ACL).
Junchi Zhang, Yanxia Qin, Yue Zhang, Mengchi Liu, and Donghong Ji. 2019. Extracting entities and events as a single task using a transition-based neural model. In *Proceedings of the International Joint* Conference on Artificial Intelligence (IJCAI).
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Left blank.
A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Left blank. |
peng-sun-2023-normnet | {N}orm{N}et: Normalize Noun Phrases for More Robust {NLP} | https://aclanthology.org/2023.findings-acl.136 | A critical limitation of deep NLP models is their over-fitting over spurious features. Previous work has proposed several approaches to debunk such features and reduce their impact on the learned models. In this work, a normalization strategy is proposed to eliminate the false features caused by the textual surfaces of noun phrases. The motivation for this strategy is that noun phrases often play the role of slots in textual expressions and their exact forms are often not that important for performing the final task. As an intuitive example, consider the expression {''}$x \text{ like eating } y$''. There are a huge number of suitable instantiations for $x$ and $y$ in the locale. However, humans can already infer the sentiment polarity of $x$ toward $y$ without knowing their exact forms.Based on this intuition, we introduce NormNet, a pretrained language model based network, to implement the normalization strategy. NormNet learns to replace as many noun phrases in the input sentence as possible with pre-defined base forms. The output of NormNet is then fed as input to a prompt-based learning model to perform label prediction. To evaluate the effectiveness of our strategy, we conducted experimental studies on several tasks, including aspect sentiment classification (ASC), semantic text similarity (STS), and natural language inference (NLI). The experimental results confirm the effectiveness of our strategy. | # Normnet: Normalize Noun Phrases For More Robust Nlp
Minlong Peng, Mingming Sun Cognitive Computing Lab, Baidu Research, Beijing, China
{pengminlong, sunmingming01}@baidu.com
## Abstract
A critical limitation of deep NLP models is their over-fitting over spurious features. Previous work has proposed several approaches to debunk such features and reduce their impact on the learned models. In this work, a normalization strategy is proposed to eliminate the false features caused by the textual surfaces of noun phrases. The motivation for this strategy is that noun phrases often play the role of slots in textual expressions and their exact forms are often not that important for performing the final task. As an intuitive example, consider the expression "x like eating y". There are a huge number of suitable instantiations for x and y in the locale. However, humans can already infer the sentiment polarity of x toward y without knowing their exact forms.
Based on this intuition, we introduce NormNet, a pretrained language model based network, to implement the normalization strategy.
NormNet learns to replace as many noun phrases in the input sentence as possible with pre-defined base forms. The output of NormNet is then fed as input to a prompt-based learning model to perform label prediction.
To evaluate the effectiveness of our strategy, we conducted experimental studies on several tasks, including aspect sentiment classification
(ASC), semantic text similarity (STS), and natural language inference (NLI). The experimental results confirm the effectiveness of our strategy.
## 1 Introduction
Deep learning has proven quite effective in many NLP tasks (Collobert et al., 2011; Mikolov et al.,
2013; Devlin et al., 2019). However, despite their great success and various NLP applications they power, critical limitations persist. A typical issue is their tendency to learn spurious features instead of the true signals of the task (Leino et al., 2019; Sagawa et al., 2020; Wang and Culotta, 2021; Yang et al., 2021b). This often leads to corrosive outcomes, from degraded performance on data in which the features no longer present (Kumar et al., 2019; Gui et al., 2021), to pernicious biases in model decisions (Blodgett et al., 2020), and to overall reduced trust in technology (Han and Tsvetkov, 2021).
This work proposes to address the spurious features caused by the textual surfaces of noun phrases. Here, we mainly consider noun phrases because they are often highly variable and replacing their forms would not make the sentence unreasonable. As a motivating example, consider the task to identify the sentiment polarity of I toward apples based on the textual expression I like eating apples, which belongs to the positive class. Here, "I" and "apples" can be changed to many other forms (e.g., "I" → "They",
"Many people", etc.) with the resulting sentences still reasonable. If the model is trained on such an example, it may over-fit the spurious correlations between the positive class and I or apples.
These spurious correlations will result in overfitting, degrading the generalization performance and interpretability of the learned model.
Such a problem can be mitigated by preprocessing the original expression into x like eating y before feeding it as input to the learning model and reformulating the task to predict the sentiment of x toward y, where x and y denote two variables. In this way, the processed expression, together with its label, captures abstract knowledge independent of the specific forms of x and y—x is positive toward y no matter x is I or They and y is apples or bananas. In addition, such a pre-processing can make the learned model more interpretable and facilitate symbolic learning.
With this consideration, we propose the idea of sentence normalization, which aims to replace as many noun phrases in the input sentence as possible with specifically designed base forms. Figure 1 shows a normalized input sentence for an intuitive understanding. To implement the idea, we introduce a pretrained language model (PLM)
based network, NormNet. Given an input sentence, NormNet will first identify noun phrases in the sentence. Then, it applies a PLM to evaluate the variability of every noun phrase conditional on its context. Phrases with high variability will be normalized to a base form, ranging from "A" to "Z".
The resulting sentence will then be fed as input to the learning model to perform label prediction.
We tested the effectiveness of NormNet on three typical NLP tasks, i.e., Aspect Sentiment Classification (ASC), Semantic Text Similarity
(STS), and Natural Language Inference (NLI). The experimental results show that our normalization strategy can improve both the models' in-domain and cross-domain performance. The contribution of this work is three-fold:
- We propose a novel idea of normalization for addressing the spurious features caused by the textual surfaces of noun phrases to deep NLP models.
- We introduce a pretrained language model based network, NormNet, to implement the proposed normalization strategy.
- Experimental studies on ASC, STS, and NLI
tasks verify the effectiveness and reasonability of the proposed strategy.
## 2 Related Work 2.1 Data Augmentation
One of the related techniques to our idea is word-substitution-based data augmentation. This technique randomly replaces words or phrases with other strings, such as synonyms (Fadaee et al.,
2017; Kobayashi, 2018), words having the same morphological features (Silfverberg et al., 2017),
or words predicted by a pretrained language model
(Wu et al., 2019; Wang et al., 2022; Bayer et al.,
2022). For instance, give the expression "I like eating apples.", the technique may generate augmented expressions: "They like eating apples.", "I like eating bananas.",
"I love eating apples", etc. Then, it will generate task labels for these expressions using some heuristics and append the generated samples to the training data set for model training. Such a technique has been found effective for various natural language processing tasks, such as machine translation (Xia et al., 2019), text classification
(Feng et al., 2021), and dialogue understanding
(Niu and Bansal, 2019).
However, such a data augmentation technique can be an expensive process. It will dramatically increase the overall size of the dataset by orders of magnitude. For example, if just substituting 2 words with 10 possible candidates for each sentence of the training data set, the dataset can easily grow by a factor of 10×10 = 100 (if applied independently). While this may have some benefits in terms of over-fitting, it can also significantly increase data storage costs and training time, which can scale linearly or super-linearly with respect to the training set size.
Instead of explicitly listing all the possible substitutions of a word or phrase through data augmentation, our method seeks to represent the possible substitutions with a consistent form, e.g., representing
"I like eating apples.", "They like eating apples.", and "I like eating bananas." with a consistently form "x like eating y.". Compared with the data augmentation technique, this method is much more efficient, with each sentence corresponding to only one normalized sentence.
## 2.2 Word Normalization
Another related techniques to the idea of our proposed normalization strategy are wordnormalization and lemmatization (Schütze et al.,
2008), which are two prevalent techniques in NLP to alleviate model over-fitting. They reduce inflectional forms and sometimes derivationally related forms of a word to a common base form based on heuristic rules and morphological analysis. For example, am, is, are will be stemmed to a consistent base form be.
The idea of our strategy is somehow motivated by the two techniques but has several critical differences. First, the base form of word-normalization and lemmatization is word dependent, making the normalized word vocabulary still large. In comparison, our strategy uses much simpler base forms. Second, word-normalization applies the normalization process to every word independently, ignoring its context. While our strategy uses a pretrained language model to model its context to determine which word or phrase should be normalized.
![2_image_0.png](2_image_0.png)
## 2.3 Symbolic Learning
A potential application of our strategy is to connect deep learning with symbolic learning. Symbolic learning uses symbols to represent certain objects and concepts, and allows developers to define relationships between them explicitly, e.g., (x is the father of y) ∧ (y is the father of z) ⇒ (x is the grandfather of z),
with x, y, and z denoting three different variables
(Mao et al., 2019). Based on the defined symbolic rules, it builds a rule system to perform the end tasks.
Because symbolic systems learn ideas or instructions explicitly and not implicitly, they are extremely data-efficient, interpretable, and robust to a cleverly designed adversarial attack (Evans and Grefenstette, 2018). However, a critical limitation of symbolic learning is that it requires developers to provide symbolic rules manually and the dominant data in the real world is non-symbolic, e.g., the natural language data. Thus, some recent work proposed to automatically mine rules from natural language data (Evans and Grefenstette, 2018) and applied deep learning to symbolic learning (Zhang and Sornette, 2017).
Our strategy can be seen as an combination of symbolic learning and deep learning. It replaces some noun phrases of natural language expressions with symbols and applies deep model to the resulting expressions to perform the end tasks. This can enhance the robustness of the deep model to the noise on the symbolic phrases. In addition, based on the normalized expression and learned model, it may mine symbolic rules with logic mining techniques like Markov Logic Network (MLN)
(Richardson and Domingos, 2006) and perform symbolic reasoning with pre-defined rules. We leave this research in future work.
## 2.4 Prompt-Tuning-Based Model Learning
Nowadays, most NLP tasks are built on pre-trained language models (PLMs) (Kenton and Toutanova, 2019; Brown et al., 2020). A typical practice to utilize PLMs is adding a task-specific head on top of PLMs, and then fine-tuning the entire model by optimizing task-specific objectives on training data.
However, most existing PLMs are trained with language modeling objectives, which usually differ from the learning objectives of downstream tasks.
There is a gap between PLMs and downstream tasks, and the performance degradation introduced by the gap is often considerable when the downstream training data set is small.
To overcome the gap between pre-training and downstream tasks, another popular technique for utilizing PLMs has been introduced, which we call *prompt-tuning* in this work. In prompt-tuning, downstream tasks are formalized as language modeling problems by inserting language prompts, and the results of language modeling are heuristically mapped to the solutions of downstream tasks (Schick et al., 2020; Han et al., 2022). As shown in Figure 1, a typical prompt template has the form: "<Input> <Prompt Words> [MASK]." (the numbers and positions of each component may change). And there are a set of label words (e.g. "positive" and "negative")
![3_image_0.png](3_image_0.png)
serving as the candidate set for predicting [MASK].
By fusing the original input with the prompt template for predicting [MASK] and then mapping predicted words to corresponding labels, prompt tuning converts a classification task into a language modeling task. Compared to the conventional fine-tuning method, prompt-tuning is more similar to the pre-training objectives, thereby helping to better use knowledge in PLMs and often obtaining better performance, especially when the training data set of the downstream task is small (Gu et al.,
2022).
## 3 Methodology
Figure 1 shows the general architecture of the proposed method. It adopts a prompt-tuning-based learning model to perform the end tasks. And compared with the traditional prompt-tuning-based method, it additionally introduces a NormNet to pre-process the input sentence, which replaces some noun phrases in the input sentence with special tokens (we use "A"-"Z" in this work).
The resulting sentence is then fed as input to the prompt-tuning-based learning model to perform model training and inference. In the following, we illustrate the detail of the prompt-tuning-based learning model and NormNet, respectively.
## 3.1 Prompt-Tuning-Based Learning Model
We adopt the popular prompt-tuning-based method to perform the end tasks. Here, we illustrate the template and verbalizer we used for ASC, STS, and NLI, respectively.
Template for ASC. Let s denote the input sentence, a denote the queried aspect, and
[MASK] denote the mask placeholder. For performing the Aspect Sentiment Classification task, we apply the hard template with the form: The sentiment polarity of the writer toward "a" is [MASK]
according to the statement "s".
![3_image_1.png](3_image_1.png)
Figure 3: Template design for STS
Premise: A soccer game with multiple males playing.
![3_image_2.png](3_image_2.png)
Hypothesis: Some men are playing a soccer game.
Prompt: From the statement "A soccer game with multiple males playing." we can deduce that "Some men are playing a soccer game." [MASK] happen." Verbalizer **(Label):** can (entailment), cannot (contradiction),
may (neutral)
Figure 4: Template design for NLI
![3_image_3.png](3_image_3.png)
It accepts "positive", "negative", and
"neutral" as the three possible predicted words at the position of [MASK], one mapped to an unique ASC label. We show an intuitive example of this template in Figure 2.
Template for STS. Let sa and sb denote the two text expressions of a STS example, and [MASK]
denote the mask. For performing the Semantic Text Similarity task, we apply the hard template with the form: The meaning of the statement "sa" is [MASK] [MASK]
the statement of "sb". It accepts
"consistent with" and "different from" as the possible predicted phrase at the mask positions, which is then mapped to the label
"1" and "0", respectively. We show an intuitive example of this template in Figure 3.
Template for NLI. Let sp denote the premise expression, sh denote the hypothesis expression, and [MASK] denote the mask placeholder. For performing the Natural Language Inference task, we apply the hard template with the form:
From the statement "sp" we deduce that "sh" [MASK] happen. It accepts
"can", "cannot", and "may" as the possible predicted words at the position of [MASK], which is then mapped to the NLI label "entailment", "contradiction", and "neutral", respectively. We show an intuitive example of this template in Figure 4.
![4_image_0.png](4_image_0.png)
Figure 5: Normalized example for ASC
![4_image_3.png](4_image_3.png)
![4_image_4.png](4_image_4.png)
Figure 6: Normalized example for STS
We finetune a PLM to perform the above tasks.
Specifically, at training time, we finetune the PLM
to predict the target words at the positions of
[MASK]s using the mask prediction objective. At inference time, we applied the finetuned PLM to predict the masked words and accordingly, make label prediction.
## 3.2 Normnet
The right part of Figure 1 gives a working process of NormNet on an sampled input sentence, and Figure 5, 6, and 7 show a normalized example given by NormNet for ASC, STS, and NLI, respectively.
Note that, for STS and NLI, the two sentences of a single sample are concatenated into a single sentence, separated by [SEP], at this process. After that, the normalized sentence will be separated into two sentences using [SEP]. In general, NormNet involves three steps: 1) identify noun phrases; 2)
determine which phrases should be normalized; and 3) normalize the phrases and generate the output sentence.
Specifically, given an input sentence s, NormNet first identifies the set of noun phrases, denoted as N , in s using the spaCy chunking tool (Honnibal and Montani, 2017) with the "en_core_web_sm" model. Phrases occurring in different positions but having the same surface form in s correspond to a unique element in N .
Then, for each phrase, *P ∈ N* , it replaces all the occurrences of P in s with [MASK]s (every constituent token of the phrase is replaced with a
[MASK] token). The resulting sentence, sP, is then fed as input to a pretrained masked language model to determine if P should be normalized. For
![4_image_1.png](4_image_1.png)
Figure 7: Normalized example for NLI
the purpose, we calculate the mask prediction score
![4_image_2.png](4_image_2.png)
of P given sP:
$$s({\mathcal{P}}|\mathbf{s_{\mathcal{P}}})={\frac{-1}{C({\mathcal{P}})\times|{\mathcal{P}}|}}\sum_{i=1}^{C({\mathcal{P}})}\sum_{t\in{\mathcal{P}}_{i}}\log p(t|\mathbf{s_{\mathcal{P}}};\mathbf{\theta}),\eqno(1)$$
where sP denotes the sentence s with P being masked, C(P) denotes the times of occurrence of P in s, Pi denotes the i-th occurrence of P, and θ denotes the parameter of the pretrained language model, which is fixed at the process. *Intuitively,*
a high value of s(P|sP) indicates either P does not frequently occur in the background of sP or the content occurring in the position of P is highly variable. In both the cases, fitting the joint distribution of P and sP using finite training data will easily result in over-fitting. Motivated by the intuition, we apply the following strategy to determine if P should be normalized: If s(P|sP) > δ, then P in s should be normalized, with δ being a scalar hyper-parameter.
Finally, we replace each normalized phrase in s with a special token, ranging from "A" to
"Z". The resulting input sentence is then fed as input to the learning model for model training and inference. At training time, the computational cost of NormNet is O(nsnpnplm), where ns is the training data size, np is the average number of noun phrases in a sentence, and nplm denotes the forward cost of the PLM. At inference time, the computational cost of NormNet is O(npnplm).
4 Experiment
## 4.1 Tasks & Datasets
Aspect Sentiment Classification. The task of aspect sentiment classification (ASC) involves predicting the sentiment polarity of a person toward a given aspect mentioned in the text written by the person. We performed experiments on two datasets for this task: the Multi-Aspect MultiSentiment **MAMS** (Jiang et al., 2019) dataset and the Restaurant review (**Rest14**) dataset from SemEval 2014 (Pontiki et al., 2016).
Semantic Text Similarity. Semantic text similarity tasks involve predicting whether two sentences are semantically equivalent or not. We performed experiments on two datasets for this task: the Microsoft Paraphrase corpus (MRPC) (Dolan and Brockett, 2005), and the Quora Question Pairs
(QQP) dataset (Chen et al., 2018).
Natural Language Inference. The task of natural language inference (NLI) involves reading a pair of sentences and judging the relationship between them from one of entailment, contradiction or neutral. We evaluate the task on two datasets: the Multi-Genre Natural Language Inference (**MNLI**)
dataset (Williams et al., 2018), and the Recognizing Textual Entailment (RTE) dataset (Bentivogli et al.,
2009). For MNLI, we reported performance on its matched testing set.
For each task, we evaluated the model's *indomain* and *cross-domain* performance. To evaluate the model's cross-domain performance, we trained the model on the training set of one dataset and tested its performance on the testing set of another dataset of the same task.
## 4.2 Baselines
We compared our method with three baselines. The first one is the method that applies the conventional finetuning method to perform the end tasks. We call this baseline as **PLMTuning**. The second baseline is the method that directly applies the prompttuning-based learning model to perform the task.
We call this baseline as **PLMPrompt**. The third baseline is the method that additionally applies the word-substitution-based data augmentation technique based on PLMPrompt. We call this baseline as **PLMPrompt+SubAug**. To make a fair comparison between PLMPrompt+SubAug and our method (referred to as **PLMPrompt+NormNet**),
we only performed substitution on phrases that were determined to be normalized by our method for implementing PLMPrompt+SubAug.
## 4.3 Implementation Detail
Implementation of PLMTuning. To perform the ASC task, we inserted the "[unused1]" token before and after the occurrence of the queried aspect in the input sentence. For instance, given the input sentence "I like eating apples."
and the query "apples", we reformulated the input sentence to "I like eating [unused]
apples [unused]." and fed it as input to the learning model to perform model learning and inference. To perform the STS task, we concatenated the two input sentences (denoted as sa and sb) of a sample into a single sentence connected by the special token "[SEP]": "sa [SEP]
sb" and "sb [SEP] sa". At inference time, the label of the sample, (sa, sb), was obtained by averaging the prediction results on its two generated inputs. In similar, to perform the NLI task, we concatenated the two input sentences (denoted as Premise sp and Hypothesis sh) of a sample into a single sentence connected "[SEP]" for the form: "sp [SEP] sh".
For all the tasks, label prediction was built on the final representation of the "[CLS]" token in PLMs.
AdamW optimizer (Loshchilov and Hutter, 2018)
with linear decay warm-up was applied for model learning. The initial learning rate was set to 2e-5 and the warm-up ratio was set to 0.1. Batch size was set to 32 for MAMS, Rest14, RTE, and MRPC
and 64 for QQP and MNLI. Implementation of PLMPrompt+SubAug.
We performed substitution on phrases that were determined to be normalized by our method, and we applied the BART pretrained model (Lewis et al., 2020) to perform the substitution. Take the input "I like eating apples." as an example and suppose that we are to do substitution on "apples". We would feed "I like eating [MASK]." as input to BART, which would generate 5 (2 for QQP and MNLI) phrases in the [MASK] position, each one consisting of up to 6 tokens. For each generated phrase, we would place it to the [MASK] position and then feed the resulting sentence to the chunking model to check if it is a noun phrase. If so, the phrase would be preserved as a candidate substitution of
"apples". Otherwise, it would be deprecated.
Implementation of NormNet. The pretrained language model of NormNet was implemented by ERNIE-Gram (Xiao et al., 2021), which was explicitly trained on a n-gram mask language model objective. For determining the value of δ, we first extracted named entities from the training set of each dataset using the spaCy NER tool with the "en_core_web_sm" model. We tuned the value of δ so that 70% of the extracted entities would be normalized. This was motivated by the fact that named entities often play the role of slot values in an expression (Louvan and Magnini, 2020) and it can often improve the NER performance with the
Rest14 MAMS **In-Domain** *Acc F1 Acc F1*
![6_image_0.png](6_image_0.png)
BERT-base-uncased∗ 82.74 73.73 78.86 78.01
PLMTuning 84.23 78.22 82.89 82.78
PLMPrompt 84.58 79.74 83.17 83.09 PLMPrompt+SubAug 84.69 79.81 83.25 83.17
PLMPrompt+NormNet 84.97 **80.18** 83.28 83.19
Cross-Domain
Rest14
,→ **MAMS**
MAMS
,→ **Rest14**
Acc F1 Acc F1
PLMTuning 67.61 67.47 79.44 73.28
PLMPrompt 69.53 69.44 81.16 75.17
PLMPrompt+SubAug 70.24 70.15 82.73 76.08
PLMPrompt + NormNet 72.21 72.09 84.01 **77.65**
augmentation method that replacing an entity of a sentence with other entities of the same type (Dai and Adel, 2020).
All the experiments were run three times and the medium value within the three runs was reported.
## 4.4 Results On Asc
Table 1 shows the in-domain and cross-domain performance on the ASC task. From the table, we can see that: 1) Compared with PLMTuning, PLMPrompt achieves slightly better in-domain performance and significantly better cross-domain performance on the two ASC datasets. This verifies the advantage of the prompt-tuning technique over the conventional fine-tuning technique on the two ASC datasets. 2) PLMPrompt+SubAug achieves a little in-domain and about 1% absolute cross-domain performance improvement over PLMPrompt on the two tested datasets. This verifies the effectiveness of the word-substitutionbased data augmentation technique. However, the computational cost of PLMPrompt+SubAug is about 10 times of that of PLMPrompt. 3)
PLMPrompt+NormNet achieves further improvement over PLMPrompt+SubAug, especially in the perspective of cross-domain performance, on the two ASC datasets. This verifies the advantage of our normalization strategy over the augmentation strategy on the two ASC datasets. As a conclusion, the proposed normalization strategy can bring consistent performance improvement to the prompttuning-based learning model and does better than the word-substitution-based data augmentation strategy, especially in the perspective of cross-
![6_image_1.png](6_image_1.png)
## Domain Performance.
Influence of δ. Here, we study the influence of δ. In the study, we adjusted the value of δ so that 10%-90% of the extracted entities would be normalized. Figure 8 shows the performance of the study. From the figure, we can see that our method performed quite robustly to the variation of δ for in-domain performance. Specifically, on Rest14, the accuracy varied between 84.39 (with 90% of entities normalized) to 85.02 (with 50% of entities normalized). On MAMS, the accuracy varied between 82.91 (with 10% of entities normalized) to 83.35 (with 60% of entities normalized). While the results on the cross-domain Reset14 to MAMS task showed that the accuracy varied from 70.32 (with 10% of entities normalized) to 73.18 (with 80% of entities normalized). On the cross-domain MAMS
to Rest14 task, the accuracy varied between 81.68
(with 10% of entities normalized) to 84.57 (with 80% of entities normalized). It is worth noting that our method outperformed PLMPrompt in all settings of δ, which did not perform normalization.
An interesting observation is that the when the normalization ratio increases from 0.8 to 0.9, both the in-domain and across-domain performance will slightly decrease. Our explanation to this phenomenon is that the some common entities can help knowledge generalization, like "sunshine" often indicating "positive" polarity. Removing such entities will slightly degrade the performance.
## 4.5 Results On Sts
Table 2 shows the in-domain and cross-domain performance on the STS task. From the table, we can see that: 1) From the perspective of in-domain
In-Domain **MRPC QQP**
BERT-base-uncased∗ 81.99 90.27
PLMTuning 82.35 **90.86**
PLMPrompt 84.56 90.27
PLMPrompt+SubAug 85.33 90.41
PLMPrompt+NormNet **85.87** 90.65
Cross-Domain **MRPC**
,→ QQP
QQP
,→ **MRPC**
PLMTuning 68.97 70.83
PLMPrompt 69.52 68.54 PLMPrompt+SubAug 70.08 68.83 PLMPrompt + NormNet 72.11 **72.32**
performance, PLMPrompt performs considerably better than PLMTuning on MRPC but worse on QQP. While from the perspective of cross-domain performance, PLMPrompt achieves about 0.6% absolute improvement over PLMTuning on the cross-domain MRPC to QQP task but about 2.3%
absolute decrease on the QQP to MRPC task. Our explanation to this phenomenon is that MRPC is a small dataset. Its only contains about 3.7k training samples, not large enough to completely adopt the language modeling objective to the finetuning objective. In contrast, QQP has about 370k samples. Thus, on QQP, prompt-tuning does not show advantage over the conventional fine-tuning technique. 2) PLMPrompt+SubAug achieves a little performance improvement on the in-domain MRPC and the cross-domain MRPC
to QQP tasks over PLMPrompt. However, on the in-domain QQP task and the cross-domain QQP to MRPC task, the performance improvement introduced by data augmentation is negligible.
We believe this phenomenon is also resulting from the size of the training data size. 3)
PLMPrompt+NormNet achieves consistent improvement over PLMPrompt+SubAug on both indomain and cross-domain tasks. This verifies the effectiveness of our normalization strategy and the advantage of our normalization over the wordsubstitution-based data augmentation technique.
## 4.6 Results On Nli
Table 3 shows the in-domain and cross-domain performance on the NLI task. From the table, we can see that: 1) On the in-domain RTE
task, PLMPrompt performs much better than
In-Domain **RTE MNLI** BERT-base-uncased∗ 59.98 83.73 PLMTuning 63.54 83.56
PLMPrompt 67.17 83.62
PLMPrompt+SubAug 68.23 83.57
PLMPrompt+NormNet 68.51 **84.21**
Cross-Domain RTE
,→ **MNLI**
MNLI
,→ RTE
PLMTuning 30.63 18.77
PLMPrompt 30.33 44.04
PLMPrompt+SubAug 30.67 44.28 PLMPrompt + NormNet 32.48 **46.57**
PLMTuning, while on MNLI, the two models perform similarly. This observation is similar to that observed on the STS task, considering that RTE is also a small dataset (only contains about 2.5k training samples) and MNLI is a much larger dataset (contains about 393k training samples). On the two cross-domain tasks, PLMPrompt performs similar to PLMTuning. We think this is because the gap between RTE and MNLI is quite large and it does no matter what kind of tuning method applied. 2) PLMPrompt+SubAug achieves about 1% absolute improvement over PLMPrompt on the in-domain RTE task and only about 0.1%
absolute improvement on the in-domain MNLI task. On the two cross-domain NLI tasks, PLMPrompt+SubAug does not achieve much improvement over PLMPrompt. Here, we give our explanation to the results on the cross-domain RTE to MNLI task. Based on our data analysis, we think this results from the large gap between RTE and MNLI. The generated substitutions do not often occur in MNLI, making the data augmentation not so effective. 3) On the in-domain RTE
and MNLI tasks, PLMPrompt+NormNet achieves about 0.3% and 0.7% absolute improvement over PLMPrompt+SubAug, respectively. While on the cross-domain RTE to MNLI and the MNLI to RTE task, PLMPrompt+NormNet achieves about 1.8% and 2.3% absolute improvement over PLMPrompt+SubAug, respectively. Our explanation to this phenomenon is that our normalization strategy normalizes phrases of different domains into a consistent form, which is somehow equivalent to applying all possible substitutions. This makes our method more effective in the cross-domain
- The **coffee** is ok, but the **service** is slow. ⇒
"A" is ok, but "B" is slow.
```
- What is the scope of research in biomedical engineering ? [SEP]
What is the scope for biomedical engineering in india ? ⇒
What is the scope of research in "A" ? [SEP] What is the scope
for "A" in india ?
```
Figure 9: Samples on which PLMPrompt performs incorrectly but our method performs correctly.
## Scenario. 4.7 Qualitative Study
Here, we selected some samples, on which PLMPrompt made an incorrect prediction but our method made a correct one, and empirically study the reason. Figure 9 shows some of the selected samples. Through case study on these samples, we found that the class conditional distributions, p(P|y), of the normalized phrases in these samples are usually extreme. For example,
"coffee" only occurs in positive training samples of Rest14, resulting in a strong connection between
"coffee" and the positive class label. This may be the reason why PLMPrompt makes an incorrect prediction for "coffee" based on the expression "The coffee are ok , but the service is slow .", which belongs to the neutral class. In similar, "biomedical engineering" occurs 37 times in positive class training data and 7 times in negative class training data of QQP.
## 5 Conclusion
This work proposes a normalization strategy to overcome the spurious features caused by noun phrase surfaces. Experimental studies on Aspect Sentiment Classification (ASC), Semantic Text Similarity (STS), and Natural Language Inference (NLI) show that the proposed strategy can improve both models' in-domain and crossdomain performance. A potential extension of this work is extending the strategy to other types of phrases.
## 6 Limitations
We think this work has the following limitations: The **first** limitation is that our method involves additional computation for identifying noun phrases and determining which phrases should be normalized. The **second** limitation is that our method is only performed on noun phrases.
Other phrases may also introduce spurious features.
![8_image_0.png](8_image_0.png)
Extending our method to other types of phrases is a potential research direction. The **third** limitation is that due to the cost limitation, we did not test on the more powerful GPT-based PLMs, which proves to be more powerful and leads to heated discussions recently.
## References
Markus Bayer, Marc-André Kaufhold, and Christian Reuter. 2022. A survey on data augmentation for text classification. *ACM Computing Surveys*, 55(7):1–39.
Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2009. The fifth pascal recognizing textual entailment challenge. In TAC.
Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of "bias" in nlp. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5454–5476.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing* systems, 33:1877–1901.
Zihan Chen, Hongbo Zhang, Xiaoji Zhang, and Leqi Zhao. 2018. Quora question pairs, https://www.kaggle.com/c/quora-question-pairs.
Leshem Choshen, Elad Venezian, Shachar Don-Yehia, Noam Slonim, and Yoav Katz. 2022. Where to start? analyzing the potential value of intermediate models. arXiv preprint arXiv:2211.00107.
Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa.
2011. Natural language processing (almost) from scratch. *Journal of machine learning research*,
12(ARTICLE):2493–2537.
Xiang Dai and Heike Adel. 2020. An analysis of simple data augmentation for named entity recognition.
In *Proceedings of the 28th International Conference* on Computational Linguistics, pages 3861–3867.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186.
William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases.
In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).
Richard Evans and Edward Grefenstette. 2018. Learning explanatory rules from noisy data. *Journal of* Artificial Intelligence Research, 61:1–64.
Marzieh Fadaee, Arianna Bisazza, and Christof Monz.
2017. Data augmentation for low-resource neural machine translation. In *Proceedings of the 55th Annual Meeting of the Association for Computational* Linguistics (Volume 2: Short Papers), pages 567–
573.
Steven Y Feng, Varun Gangal, Jason Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura, and Eduard Hovy. 2021. A survey of data augmentation approaches for nlp. In *Findings of the Association* for Computational Linguistics: ACL-IJCNLP 2021, pages 968–988.
Yuxian Gu, Xu Han, Zhiyuan Liu, and Minlie Huang.
2022. Ppt: Pre-trained prompt tuning for few-shot learning. In *Proceedings of the 60th Annual Meeting* of the Association for Computational Linguistics
(Volume 1: Long Papers), pages 8410–8423.
Tao Gui, Xiao Wang, Qi Zhang, Qin Liu, Yicheng Zou, Xin Zhou, Rui Zheng, Chong Zhang, Qinzhuo Wu, Jiacheng Ye, et al. 2021. Textflint: Unified multilingual robustness evaluation toolkit for natural language processing. *arXiv preprint* arXiv:2103.11441.
Xiaochuang Han and Yulia Tsvetkov. 2021. Influence tuning: Demoting spurious correlations via instance attribution and instance-driven updates. In Findings of the Association for Computational Linguistics:
EMNLP 2021, pages 4398–4409.
Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, and Maosong Sun. 2022. Ptr: Prompt tuning with rules for text classification. *AI Open*, 3:182–192.
Matthew Honnibal and Ines Montani. 2017. spacy 2: Natural language understanding with bloom embeddings, convolutional neural networks and incremental parsing. *To appear*, 7(1):411–420.
Qingnan Jiang, Lei Chen, Ruifeng Xu, Xiang Ao, and Min Yang. 2019. A challenge dataset and effective models for aspect-based sentiment analysis. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),
pages 6280–6285.
Jacob Devlin Ming-Wei Chang Kenton and Lee Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of NAACL-HLT*, pages 4171–4186.
Sosuke Kobayashi. 2018. Contextual augmentation:
Data augmentation by words with paradigmatic relations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 452–
457.
Sachin Kumar, Shuly Wintner, Noah A Smith, and Yulia Tsvetkov. 2019. Topics to avoid: Demoting latent confounds in text classification. In *Proceedings of the 2019 Conference on Empirical Methods* in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4153–4163.
Klas Leino, Matt Fredrikson, Emily Black, Shayak Sen, and Anupam Datta. 2019. Feature-wise bias amplification. In International Conference on Learning Representations (ICLR).
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer.
2020. Bart: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational* Linguistics, pages 7871–7880.
Ting Lin, Aixin Sun, and Yequan Wang. 2022.
Aspect-based sentiment analysis through edu-level attentions. In *Pacific-Asia Conference on Knowledge Discovery and Data Mining*, pages 156–168.
Springer.
Ilya Loshchilov and Frank Hutter. 2018. Decoupled weight decay regularization. In *International* Conference on Learning Representations.
Samuel Louvan and Bernardo Magnini. 2020. Recent neural methods on slot filling and intent classification for task-oriented dialogue systems: A survey. In Proceedings of the 28th International Conference on Computational Linguistics, pages 480–496.
Jiayuan Mao, Chuang Gan, Pushmeet Kohli, Joshua B
Tenenbaum, and Jiajun Wu. 2019. The neurosymbolic concept learner: Interpreting scenes, words, and sentences from natural supervision.
arXiv preprint arXiv:1904.12584.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S
Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. *Advances in neural information* processing systems, 26.
Tong Niu and Mohit Bansal. 2019. Automatically learning data augmentation policies for dialogue tasks. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),
pages 1317–1323.
Maria Pontiki, Dimitrios Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Mohammad Al-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orphée De Clercq, et al. 2016.
Semeval-2016 task 5: Aspect based sentiment analysis. In International workshop on semantic evaluation, pages 19–30.
Matthew Richardson and Pedro Domingos. 2006.
Markov logic networks. *Mach Learn*, 62:107–136.
Shiori Sagawa, Aditi Raghunathan, Pang Wei Koh, and Percy Liang. 2020. An investigation of why overparameterization exacerbates spurious correlations.
In *International Conference on Machine Learning*, pages 8346–8356. PMLR.
Timo Schick, Helmut Schmid, and Hinrich Schütze.
2020. Automatically identifying words that can serve as labels for few-shot text classification. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5569–5578.
Hinrich Schütze, Christopher D Manning, and Prabhakar Raghavan. 2008. *Introduction to information* retrieval, volume 39. Cambridge University Press Cambridge.
Miikka Silfverberg, Adam Wiemerslage, Ling Liu, and Lingshuang Jack Mao. 2017. Data augmentation for morphological reinflection. In *Proceedings* of the CoNLL SIGMORPHON 2017 Shared Task:
Universal Morphological Reinflection, pages 90–99.
Yufei Wang, Can Xu, Qingfeng Sun, Huang Hu, Chongyang Tao, Xiubo Geng, and Daxin Jiang. 2022. Promda: Prompt-based data augmentation for low-resource nlu tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 4242–4255.
Zhao Wang and Aron Culotta. 2021. Robustness to spurious correlations in text classification via automatically generated counterfactuals. In *Proceedings* of the AAAI Conference on Artificial Intelligence, volume 35, pages 14024–14031.
Adina Williams, Nikita Nangia, and Samuel R
Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of NAACL-HLT, pages 1112–1122.
Xing Wu, Shangwen Lv, Liangjun Zang, Jizhong Han, and Songlin Hu. 2019. Conditional bert contextual augmentation. In *International conference on* computational science, pages 84–95. Springer.
Mengzhou Xia, Xiang Kong, Antonios Anastasopoulos, and Graham Neubig. 2019. Generalized data augmentation for low-resource translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5786–5796.
Dongling Xiao, Yu-Kun Li, Han Zhang, Yu Sun, Hao Tian, Hua Wu, and Haifeng Wang. 2021. Ernie-gram: Pre-training with explicitly n-gram masked language modeling for natural language understanding. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1702–1715.
Heng Yang, Biqing Zeng, Mayi Xu, and Tianxing Wang. 2021a. Back to reality: Leveraging pattern-driven modeling to enable affordable sentiment dependency learning. *arXiv preprint* arXiv:2110.08604.
Linyi Yang, Jiazheng Li, Pádraig Cunningham, Yue Zhang, Barry Smyth, and Ruihai Dong. 2021b. Exploring the efficacy of automatically generated counterfactuals for sentiment analysis. In *Proceedings of the 59th Annual Meeting of the* Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 306–316.
Qunzhi Zhang and Didier Sornette. 2017. Learning like humans with deep symbolic networks. arXiv preprint arXiv:1707.03377.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
6
✗ A2. Did you discuss any potential risks of your work?
We perform experiments on common public datasets.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** 4
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
We used a public model for our experiments.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
4.3
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
4.3
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
3.2 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
lee-etal-2023-cross | Cross Encoding as Augmentation: Towards Effective Educational Text Classification | https://aclanthology.org/2023.findings-acl.137 | Text classification in education, usually called auto-tagging, is the automated process of assigning relevant tags to educational content, such as questions and textbooks. However, auto-tagging suffers from a data scarcity problem, which stems from two major challenges: 1) it possesses a large tag space and 2) it is multi-label. Though a retrieval approach is reportedly good at low-resource scenarios, there have been fewer efforts to directly address the data scarcity problem. To mitigate these issues, here we propose a novel retrieval approach CEAA that provides effective learning in educational text classification. Our main contributions are as follows: 1) we leverage transfer learning from question-answering datasets, and 2) we propose a simple but effective data augmentation method introducing cross-encoder style texts to a bi-encoder architecture for more efficient inference. An extensive set of experiments shows that our proposed method is effective in multi-label scenarios and low-resource tags compared to state-of-the-art models. | # Cross Encoding As Augmentation: Towards Effective Educational Text Classification
Hyun Seung Lee∗ 1,2 **Seungtaek Choi**∗ † 1 Yunsung Lee1 Hyeongdon Moon1 **Shinhyeok Oh**1 Myeongho Jeong1 Hyojun Go1 **Christian Wallraven**† 2 1Riiid AI Research 2Department of Artificial Intelligence, Korea University
{hyunseung.lee, seungtaek.choi}@riiid.co,
{hslrock, wallraven}@korea.ac.kr
## Abstract
Text classification in education, usually called auto-tagging, is the automated process of assigning relevant tags to educational content, such as questions and textbooks. However, auto-tagging suffers from a data scarcity problem, which stems from two major challenges:
1) it possesses a large tag space and 2) it is multi-label. Though a retrieval approach is reportedly good at low-resource scenarios, there have been fewer efforts to directly address the data scarcity problem. To mitigate these issues, here we propose a novel retrieval approach CEAA that provides effective learning in educational text classification. Our main contributions are as follows: 1) we leverage transfer learning from question-answering datasets, and 2) we propose a simple but effective data augmentation method introducing cross-encoder style texts to a bi-encoder architecture for more efficient inference. An extensive set of experiments shows that our proposed method is effective in multi-label scenarios and low-resource tags compared to state-of-the-art models.
## 1 Introduction
Due to the overwhelming amount of educational content available, students and teachers often struggle to find what to learn and what to teach. Autotagging, or text classification in education, enables efficient curation of content by automatically assigning relevant tags to educational materials, which aids in both students' understanding and teachers' planning (Goel et al., 2022).
However, applying auto-tagging for real-world education is challenging due to **data scarcity**.
This is because auto-tagging has a potentially very large label space, ranging from subject topics to knowledge components (KC) (Zhang et al.,
2015; Koedinger et al., 2012; Mohania et al., 2021; Viswanathan et al., 2022). The resulting data
∗ Equal Contribution. † Corresponding authors.
scarcity decreases performance on rare labels during training (Chalkidis et al., 2020; Lu et al., 2020; Snell et al., 2017; Choi et al., 2022).
In this paper, we aim to solve the data scarcity problem by formulating the task as a retrieval problem following a recent proposal (Viswanathan et al.,
2022). This can utilize a language model's ability to understand the tag text, such that even for an unseen tag, the models would be able to capture the relationship between the terms in the input content and labels. However, performance in the auto-tagging context still critically depends on the amount of training data.
To this end, we first propose to leverage the knowledge of language models that are fine-tuned on large question-answering datasets. Our intuition is that question of finding an answer in a passage can be a direct (or indirect) summary of the passage (Nogueira et al., 2019b), which can serve as an efficient proxy of the gold tag for educational content. The large question-answering datasets thus become a better prior for the tag spaces. Specifically, we adopt a recent bi-encoder architecture, called DPR (Karpukhin et al., 2020)
1, for transfer learning, which performs BERT encoding over the input and candidate label separately and measures the similarity between the final representations. To the best of our knowledge, our work is the first to leverage transfer learning from QA models for text classification tasks.
As a further innovation, we introduce a novel data augmentation method for training a bi-encoder architecture, named CEAA, which adds the crossencoder *view* of the input-label pair in the biencoder architecture, as shown in Figure 1. By capturing the full interaction between input and labels already during training time, the models can be further optimized to take advantage of token1DPR model is trained on 307k training questions, which is much larger than 7k questions in ARC dataset (Xu et al.,
2019) we used in experiments.
![1_image_0.png](1_image_0.png)
level interactions that are missing in traditional bi-encoder training. At the same time, the computational efficiency of the bi-encoder is maintained, which makes CEAA able to tackle large label spaces as opposed to existing solutions based on cross-encoder architectures (Urbanek et al., 2019; Wolf et al., 2019; Vig and Ramea, 2019). Experiments show that CEAA provides significant boosts to performance on most metrics for three different datasets when compared to state-of-the-art models.
We also demonstrate the efficacy of the method in multi-label settings with constraints of training only with a single label per context.
## 2 Related Work
Text classification in the education domain is reportedly difficult as the tags (or, labels) are hierarchical (Xu et al., 2019; Goel et al., 2022; Mohania et al., 2021), grow flexibly, and can be multi-labeled (Medini et al., 2019; Dekel and Shamir, 2010). Though retrieval-based methods were effective for such long-tailed and multilabel datasets (Zhang et al., 2022; Chang et al.,
2019), they relied on vanilla BERT (Devlin et al.,
2018) models, leaving room for improvement, for which we leverage question-answering fine-tuned retrieval models (Karpukhin et al., 2020).
Recently, (Viswanathan et al., 2022) proposed TagRec++ using a bi-encoder framework similar to ours, with an introduction of an additional crossattention block. However, this architecture loses the efficiency of the bi-encoder architecture in the large taxonomy space for the education domain.
Unlike TagRec++, our distinction is that we leverage the cross-attention only in training time via input augmentation.
## 3 Approach 3.1 Problem Formulation
In this paper, we address the text classification task, which aims to associate an input text with its corresponding class label, as a retrieval problem. Formally, given a context c and tag candidates T , the goal of the retrieval model is to find the correct
(or, relevant) tag t ∈ T , where its relevance score with the context s(*c, t*) is the highest among the T
or higher than a threshold. For this purpose, our focus is to better train the scoring function s(*c, t*)
to be optimized against the given relevance score between the context c and candidate tag t.
## 3.2 Bi-Encoder
In this paper, we use a bi-encoder as a base architecture for the retrieval task, as it is widely used for its fast inference (Karpukhin et al., 2020). Specifically, the bi-encoder consists of two encoders, EC,
and ET , which generate embedding for the context c and the tag t. The similarity between the context and tag is measured using the dot-product of their vectors:
$$s_{\mathrm{BE}}(c,t)=E_{C}(c)\cdot E_{T}(t)^{\top}$$
$\left(1\right)$.
⊤ (1)
Both encoders are based on the BERT architecture (Devlin et al., 2018), specifically *"bert-baseuncased"* provided by HuggingFace (Wolf et al.,
2020), that is optimized with the training objective of predicting randomly-masked tokens within a sentence. We use the last layer's hidden layer of the classification token is used as context and tag embeddings.
For training the bi-encoder, we follow the inbatch negative training in (Karpukhin et al., 2020).
Gold tags from other contexts inside the batch are treated as negative tags. As tags are often multilabeled, we use *binary cross-entropy loss*:
L = − 1 M X M i=1 X N j=1 (yi,j log(s(ci, tj ) (2) +(1 − yi,j ) log(1 − s(ci, tj ))
where s(ci, tj ) scores the similarity between context ci and tag tj , and yi,j is 1 if they are relevant and 0 otherwise. We will denote this model variant as a bi-encoder (BERT) below.
## 3.3 Cross-Encoding As Augmentation
The cross-encoder (Nogueira and Cho, 2019) is another method in information retrieval tasks in which a single BERT model receives two inputs joined by a special separator token as follows:
$$s_{\mathrm{CE}}(c,t)=F(E([c;t])),$$
sCE(*c, t*) = F(E([c;t])), (3)
where F is a neural function that takes the representation of the given sequence.
Cross-encoders perform better than bi-encoders as they directly compute cross-attention over context and tag along the layers (Urbanek et al., 2019; Wolf et al., 2019; Vig and Ramea, 2019). However, relying on this approach is impractical in our scenario as it requires processing every existing tag for a context during inference time. As a result, this method is typically used for *re-ranking* (Nogueira et al., 2019a; Qu et al., 2021; Ren et al., 2021).
As shown in Figure 1, we adopt an augmentation method that enables the bi-encoder framework to mimic cross-encoder's representation learning.
Compared to other knowledge distillation methods
(Qu et al., 2021; Ren et al., 2021; Thakur et al.,
2020), our approach does not require an additional cross-encoder network for training. Furthermore, as such cross-encoding is introduced as an augmentation strategy, it doesn't require additional memory or architecture modifications, while improving the test performance.
Specifically, for a context c, we randomly sample one of the tags in the original batch. We extend the batch in our training by introducing a context-tag concatenated input [c;t] which has "*is relevant*" as a gold tag. Our bi-encoder must be able to classify relevance when an input includes both context and tag with the following score function:
$$s_{\mathrm{CEAA}}(c,t)=E_{C}([c;t])\cdot E_{T}(\mathrm{``is~relevant''})^{\top}\mathrm{\ }\mathrm{(4)}$$
Since we use the augmentation method via input editing without an extra teacher cross-encoder model for distillation, we call this model Cross Encoding As Augmentation (CEAA).
## 3.4 Transfer Learning
To overcome the data scarcity in auto-tagging tasks, we introduce bi-encoder (DPR) models that distill knowledge from large question-answering datasets.
We argue that the training objective of question answering is similar to the context and tag matching in the auto-tagging task, as a question is a short text that identifies the core of a given context. Therefore, while the previous works have relied on vanilla BERT, here we explore whether pertaining on question-answering tasks would improve the performance in the auto-tagging tasks.
Specifically, we replace the naive BERT encoders with DPR (Karpukhin et al., 2020), which is further optimized with the Natural Question dataset (Lee et al., 2019; Kwiatkowski et al., 2019) to solve open-domain question-answering tasks of matching the representations of document and question. To match the overall length of the texts, we use *"dpr-ctx_encoder-single-nq-base"* and *"dprquestion_encoder-single-nq-base"* for context and tag encoders respectively.
$$({\mathfrak{I}})$$
## 4 Experiments 4.1 Experimental Setup
We conduct experiments on the following datasets:
ARC (Xu et al., 2019), QC-Science (Mohania et al.,
2021), and EURLEX57K (Chalkidis et al., 2019).
Details of datasets, metrics, and training details are in Appendix.
For comparison, in addition to simple baselines, we employ some state-of-the-art methods including BERT (prototype) (Snell et al., 2017), TagRec (Mohania et al., 2021), TagRec++ (Viswanathan et al.,
2022), and Poly-encoder (Humeau et al., 2019).
For ablations, built on the bi-encoder (BERT)
method, we present three variants: Bi-encoder
(BERT) + CEAA, Bi-encoder (DPR), and Bi-encoder
(DPR) + CEAA, where the comparisons between the variants could highlight the contribution of transfer learning and CEAA.
## 4.2 Results And Analysis
Overall Accuracy: The main objective of this work is to improve the bi-encoder models for the purpose of better text classification in two aspects:
| Methods | ARC | QC-Science | EURLEX57K | | | | | |
|--------------------------|-------|--------------|-------------|------|------|------|--------|------|
| R@1 | R@3 | R@5 | R@1 | R@3 | R@5 | RP@5 | nDCG@5 | |
| BM25 | 0.14 | 0.28 | 0.34 | 0.13 | 0.23 | 0.27 | 0.15 | 0.15 |
| BERT (prototype) | 0.35 | 0.54 | 0.64 | 0.54 | 0.75 | 0.83 | - | - |
| TagRec | 0.36 | 0.55 | 0.65 | 0.54 | 0.78 | 0.86 | - | - |
| TagRec++ | 0.49 | 0.71 | 0.78 | 0.65 | 0.85 | 0.90 | - | - |
| BERT (classification) | 0.53 | 0.72 | 0.79 | 0.68 | 0.87 | 0.91 | 0.78 | 0.80 |
| Poly-encoder-16 | 0.40 | 0.65 | 0.75 | 0.50 | 0.75 | 0.83 | 0.22 | 0.23 |
| Poly-encoder-360 | 0.44 | 0.68 | 0.78 | 0.64 | 0.85 | 0.90 | 0.54 | 0.54 |
| Bi-encoder (BERT) | 0.51 | 0.71 | 0.77 | 0.67 | 0.85 | 0.90 | 0.74 | 0.76 |
| Bi-encoder (BERT) + CEAA | 0.50 | 0.72 | 0.80 | 0.68 | 0.86 | 0.90 | 0.76 | 0.78 |
| Bi-encoder (DPR) | 0.54 | 0.73 | 0.80 | 0.69 | 0.87 | 0.90 | 0.76 | 0.78 |
| Bi-encoder (DPR) + CEAA | 0.56 | 0.74 | 0.80 | 0.70 | 0.86 | 0.90 | 0.79 | 0.81 |
transfer learning and CEAA. Regarding the effect of using two different pretrained models, the results from Table 1 show that models trained on DPR achieve higher performance than models from BERT. Specifically, Bi-encoder (DPR) outperforms the Bi-encoder (BERT) for ARC (0.54 > 0.51 in R@1) and QC-Science (0.69 > 0.67 in R@1). The performance of the EURLEX57K datasets in both RP@5 and nDCG@5 increases by 0.02. Applying our augmentation method to the Bi-encoder (both vanilla BERT and QA-finetuned BERT) improves the performance by 0.06, 0.02, and 0.03 points in ARC, QC-Science, and EURLEX57k, respectively.
Additionally, the Bi-encoder (DPR) + CEAA demonstrates the highest overall performance in most cases (except for R@3 and R@5 of the QC-Science dataset where differences were small). For example, compared to TagRec++, which is the current state-of-the-art model on the datasets, we observed that our best model improves on TagRec++ by 0.05 points in R@12. Figure 2 further demonstrates the change in RP@K and nDCG@K across a varying range of values for K on EURLEX57K, where CEAA shows consistently better performance. Notably, the gap from Bi-encoder (BERT) increases as K increases for both metrics.
Multi-label Generalization: To further highlight differences between single-label and multilabel settings, the two best models, Bi-encoder
(DPR) and Bi-encoder (DPR) + CEAA, were trained with a modified single-labeled EURLEX57K
dataset, where we sampled only a single tag from the multi-label space. When the models are evaluated on the original multi-label dataset, as a context in the EURLEX57K dataset has ≥ 5 gold tags on 2We discuss Poly-encoder's low performance in Appendix B.1.
![3_image_0.png](3_image_0.png)
average, it is important to achieve high nDCG@K
performance on K ≥ 5. The results are presented in Figure 3. We observe that the models show comparable performance with values of 0.65, 0.70, and 0.73 for Bi-encoder (DPR), Bi-encoder (DPR)
+ CEAA and BERT classification, respectively at K = 1. Though the classification model performs slightly better than CEAA at low K values, performance significantly degrades for K ≥ 5. Overall, the cross-encoder augmentation helped the model to better find related tags at the top rank. From
![4_image_1.png](4_image_1.png)
these results, we argue that evaluating against the single-labeled dataset may not be an appropriate testing tool for comparing the auto-tagging models, as BERT classification was considered the best at first, even though it is poorly working on multilabel scenarios. This problem is critical as multilabel issues are prevalent in education.
Specifically, we manually checked failure cases of both Bi-encoder (DPR) and Bi-encoder (DPR)
+ CEAA at top 1, to qualitatively examine which one is better at ranking the relevant tags. The results in Appendix B.2 show that Bi-encoder (DPR)
+ CEAA is able to retrieve better candidates than the Bi-encoder (DPR) more often. An interesting example is, given the context ["The sector in which employees have more job security is an organized sector"], where the gold tag is one related to the economy, the Bi-encoder (DPR) +
CEAA returns a tag ["human resources"], which is sufficiently relevant but not labeled one. From these results, we once again confirm that the multilabel problem is severe in the auto-tagging tasks and that our model yields sufficiently significant results beyond the reported performance.
Data Efficiency: To identify the effectiveness of augmentation with low-resource labels, we measured nDCG@5 on the splits of labels based on their occurrence in training data. EURLEX57 considered the labels that occurred more than 50 times in the training set as frequent and few otherwise.
We set the ARC dataset's threshold to 5. Figure 4 shows that both CEAA and transfer learning contribute to better performance for the frequent labels.
Further, we observe that the retrieval methods are more effective for the rarely occurring tags than standard classification methods. Notably, in ARC
of a smaller dataset than EURLEX57K (5K < 45K),
![4_image_0.png](4_image_0.png)
the combination of CEAA and transfer learning, CEAA (DPR), achieves the best performance.
## 5 Conclusion
In this paper, we discuss the problem of '*autotagging*' with regard to data scarcity due to its large label space - an issue that is critical in the education domain, but also for other domains with a multi-label structure such as jurisdictional or clinical contexts. We propose two innovations to address this problem: First, exploiting the knowledge of language models trained on large questionanswering datasets. Second, applying a novel augmentation for bi-encoder architecture inspired by cross-encoders to better capture the full interaction between inputs and labels while maintaining the bi-encoder's efficiency. A set of experiments demonstrated the effectiveness of our approach, especially in the multi-label setting. Future research will explore re-ranking scenarios in which the bi-encoder trained with our cross-encoding augmentation (CEAA) is re-used to effectively rerank the tags with cross-encoding mechanism as in (Nogueira and Cho, 2019).
## 6 Limitations 6.1 Limited Size Of Language Models
Due to the recent successes of generative large language models as zero-shot (or, few-shot) text classifiers (Radford et al., 2019; Brown et al., 2020),
one may ask about the practicality of our methods.
Even when disregarding computational efficiency3, we argue that applying such large language models for XMC problems is not trivial, as it is challenging to constrain the label space appropriately.
For example, even when the tag candidates we wanted for a task were entailment, neutral, and contradiction), the generative model will output tags outside this range such as hamburger (Raffel et al., 2020). In-context learning (Min et al., 2022)
may alleviate this concern, but in the context of the large label spaces of our application, the token limits of standard language models will be exceeded.
## 6.2 Lack Of Knowledge-Level Auto-Tagging
Though we pursue text classification tasks in the education domain, the classes usually represent only superficial information, such as chapter titles, which neglects the deeper relationships between educational contents like precondition between knowledge. For example, to solve a quadratic problem mathematical problem, an ability to solve the first-order problem is required. However, the available texts have only the last superficial tags. These concerns were not considered when creating these public datasets. Instructor-driven labeling would be an effective and practical solution for knowledgelevel auto-tagging.
## 6.3 Inefficiency Of Tag Encoder
One may argue that the performance of one BERT system is good enough to cast doubt on using two BERTs for the bi-encoder. In this context, experiments showed additional efficiency of our approach for low-frequency tags. Nonetheless, the current tag encoder could be made much more efficient using a smaller number of layers in BERT which will be explored in the future.
## 7 Ethical Considerations
Incorrect or hidden decision processes of the AI tagging model could result in the wrong learning path.
The system would therefore need to be subject to 3Nevertheless, we believe that actionable language models should keep efficiency as one of their core criteria.
human monitoring for occasional supervision. At the same time, the potential benefits of properlytagged content will be large for both the learner's learning experience and the teacher's labeling cost as the model can narrow down full tag space to the top-K candidates.
## References
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901.
Ilias Chalkidis, Manos Fergadiotis, Sotiris Kotitsas, Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos. 2020. An empirical study on largescale multi-label text classification including few and zero-shot labels. *arXiv preprint arXiv:2010.01653*.
Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis, and Ion Androutsopoulos. 2019. Large-scale multi-label text classification on eu legislation. arXiv preprint arXiv:1906.02192.
Wei-Cheng Chang, Hsiang-Fu Yu, Kai Zhong, Yiming Yang, and Inderjit Dhillon. 2019. X-bert: extreme multi-label text classification with bert. arXiv preprint arXiv:1905.02331.
Seungtaek Choi, Myeongho Jeong, Hojae Han, and Seung-won Hwang. 2022. C2l: Causally contrastive learning for robust text classification. In *Proceedings* of the AAAI Conference on Artificial Intelligence, volume 36, pages 10526–10534.
Ofer Dekel and Ohad Shamir. 2010. Multiclassmultilabel classification with more classes than examples. In *Proceedings of the Thirteenth International* Conference on Artificial Intelligence and Statistics, pages 137–144. JMLR Workshop and Conference Proceedings.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.
Vasu Goel, Dhruv Sahnan, V Venktesh, Gaurav Sharma, Deep Dwivedi, and Mukesh Mohania. 2022. K12bert: Bert for k-12 education. In *International* Conference on Artificial Intelligence in Education, pages 595–598. Springer.
Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2019. Poly-encoders: Transformer architectures and pre-training strategies for fast and accurate multi-sentence scoring. *arXiv* preprint arXiv:1905.01969.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick ˘
Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. *arXiv preprint* arXiv:2004.04906.
Kenneth R Koedinger, Albert T Corbett, and Charles Perfetti. 2012. The knowledge-learning-instruction framework: Bridging the science-practice chasm to enhance robust student learning. *Cognitive science*,
36(5):757–798.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: A benchmark for question answering research. *Transactions of the* Association for Computational Linguistics, 7:452–
466.
Kenton Lee, Ming-Wei Chang, and Kristina Toutanova.
2019. Latent retrieval for weakly supervised open domain question answering. *arXiv preprint* arXiv:1906.00300.
Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, JhengHong Yang, Ronak Pradeep, and Rodrigo Nogueira.
2021. Pyserini: An easy-to-use python toolkit to support replicable ir research with sparse and dense representations. *arXiv preprint arXiv:2102.10073*.
Jueqing Lu, Lan Du, Ming Liu, and Joanna Dipnall.
2020. Multi-label few/zero-shot learning with knowledge aggregated from multiple label graphs. *arXiv* preprint arXiv:2010.07459.
Christopher D Manning. 2008. *Introduction to information retrieval*. Syngress Publishing,.
Tharun Kumar Reddy Medini, Qixuan Huang, Yiqiu Wang, Vijai Mohan, and Anshumali Shrivastava.
2019. Extreme classification in log memory using count-min sketch: A case study of amazon search with 50m products. *Advances in Neural Information* Processing Systems, 32.
Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the role of demonstrations: What makes in-context learning work? *arXiv* preprint arXiv:2202.12837.
Mukesh Mohania, Vikram Goyal, et al. 2021.
Tagrec: Automated tagging of questions with hierarchical learning taxonomy. *arXiv preprint* arXiv:2107.10649.
Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage re-ranking with bert. *arXiv preprint* arXiv:1901.04085.
Rodrigo Nogueira, Wei Yang, Kyunghyun Cho, and Jimmy Lin. 2019a. Multi-stage document ranking with bert. *arXiv preprint arXiv:1910.14424*.
Rodrigo Nogueira, Wei Yang, Jimmy Lin, and Kyunghyun Cho. 2019b. Document expansion by query prediction. *arXiv preprint arXiv:1904.08375*.
Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, and Haifeng Wang. 2021. Rocketqa: An optimized training approach to dense passage retrieval for opendomain question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5835–5847.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67.
Ruiyang Ren, Yingqi Qu, Jing Liu, Wayne Xin Zhao, Qiaoqiao She, Hua Wu, Haifeng Wang, and Ji-Rong Wen. 2021. Rocketqav2: A joint training method for dense passage retrieval and passage re-ranking.
arXiv preprint arXiv:2110.07367.
Jake Snell, Kevin Swersky, and Richard Zemel. 2017.
Prototypical networks for few-shot learning. *Advances in neural information processing systems*, 30.
Nandan Thakur, Nils Reimers, Johannes Daxenberger, and Iryna Gurevych. 2020. Augmented sbert: Data augmentation method for improving bi-encoders for pairwise sentence scoring tasks. arXiv preprint arXiv:2010.08240.
Jack Urbanek, Angela Fan, Siddharth Karamcheti, Saachi Jain, Samuel Humeau, Emily Dinan, Tim Rocktäschel, Douwe Kiela, Arthur Szlam, and Jason Weston. 2019. Learning to speak and act in a fantasy text adventure game. arXiv preprint arXiv:1903.03094.
Jesse Vig and Kalai Ramea. 2019. Comparison of transfer-learning approaches for response selection in multi-turn conversations. In *Workshop on DSTC7*.
Venktesh Viswanathan, Mukesh Mohania, and Vikram Goyal. 2022. Tagrec++: Hierarchical label aware attention network for question categorization. arXiv preprint arXiv:2208.05152.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 conference on empirical methods in natural language processing: system demonstrations, pages 38–45.
Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. Transfertransfo: A transfer learning approach for neural network based conversational agents. *arXiv preprint arXiv:1901.08149*.
Dongfang Xu, Peter Jansen, Jaycie Martin, Zhengnan Xie, Vikas Yadav, Harish Tayyar Madabushi, Oyvind Tafjord, and Peter Clark. 2019. Multi-class hierarchical question classification for multiple choice science exams. *arXiv preprint arXiv:1908.05441*.
Ruohong Zhang, Yau-Shian Wang, Yiming Yang, Donghan Yu, Tom Vu, and Likun Lei. 2022. Longtailed extreme multi-label text classification with generated pseudo label descriptions. arXiv preprint arXiv:2204.00958.
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015.
Character-level convolutional networks for text classification. Advances in neural information processing systems, 28.
## A Experimental Setup A.1 Data Statistics
ARC (Xu et al., 2019): This dataset consists of 7,775 multiple-choice questions and answer pairs from the science domain. Each data is paired with classification taxonomy. The taxonomy is constructed to categorize questions into coarse to fine chapters in a science exam. There are a total of 420 unique labels. The dataset is split in train, validation, and test by 5,597, 778, and 1400 samples.
QC-Science (Mohania et al., 2021): this larger dataset consists of 47,832 question-answer pairs also in the science domain with 312 unique tags.
Each tags are hierarchical labels in the form of subject, chapter, and topic. The train, validation, and test sets consist of 40,895, 2,153, and 4,784 samples.
EURLEX57K (Chalkidis et al., 2019): The dataset contains 57,000 English legislative documents from EUR-LEX with a split of 45,000, 6,000, and 6,000. Every document is tagged with multilabel concepts from European Vocabulary. The average number of tags per document is 5, totaling 4,271 tags. Additionally, the dataset divides the tags into frequent (746), few (3,362), and zero
(163), based on whether they appeared more than 50, fewer than 50, but at least once, or never, respectively.
## A.2 Details On Evaluation Metric
In this section, we explain the metric used in the paper. First, recall@K(R@K) is calculated as follows:
where N is the number of samples to test, Rn is the number of true tags for a sample n, and St(K)
is the number of true tags within the top-K results.
For evaluation on multi-label dataset we used RPrecision@K (RP@K) (Chalkidis et al., 2019):
$$R P@K={\frac{1}{N}}\sum_{n=1}^{N}{\frac{S_{t}(K)}{m i n(R_{n},K)}}\qquad(6)$$
RP@K divides the number of true positives within K by the minimum value between K and Rn, resulting in a more fair and informative comparison in a multi-label setting.
nDCG@K (Manning, 2008) is another metric commonly used in such tasks. The difference between RP@K and nDCG@K is the latter includes the ranking quality by accounting for the location of the relevant tags within the top-K retrieved tags as follows:
$$n D C G@K={\frac{1}{N}}\sum_{n=1}^{N}Z_{K_{n}}\sum_{k=1}^{K}{\frac{R e l(n,k)}{l o g_{2}(1+k)}}\ \ (7)$$
where Rel(*n, k*) is the relevance score given by the dataset between a retrieved tag k of a sample n. The value can be different if the tags' relevant score is uniquely given by the dataset. Without extra information, it is always one if relevant and zero otherwise. ZKn is a normalizing constant that is output of DCG@K when the optimal top-K were retrieved as true tags.
## A.3 Hyperparmeter Setting
The architecture we used can handle a maximum of 512 tokens. Therefore, to concatenate tag tokens with context tokens, we set the maximum context token to 490 and truncate if the context is longer.
The remaining space is used for tag token concatenation. For every dataset, we used 20 contexts inside a batch. The number of unique tags inside a batch can vary with multi-label settings. During cross-encoder augmentation, we sampled five negative tags for each context to be joined together and one positive tag. We used Adam optimizer with a learning rate of 1e-5. For inference, we used the Pyserini framework to index the entire tag set embeddings (Lin et al., 2021).
## B Additional Results And Comments B.1 Comments On Poly-Encoder
$$R@K={\frac{1}{N}}\sum_{n=1}^{N}{\frac{S_{t}(K)}{R_{n}}}$$
(5) $\begin{array}{l}\text{l}\\ \text{l}\end{array}$ 2191 ...
In this section, we discuss the low performance of Poly-encoder (Humeau et al., 2019) in our main results. To be more specific, poly-encoder-16 and 360 were found to be performing below TagRec++.
The value 16, and 360 is the number of vectors to represent a context. We think the low performance could be due to a potential implementation issue of the poly-encoder into the classification task. The performance could differ if we had used 16 or 360 vectors to represent the tag rather than a context.
For our future work, we also aim to investigate this change.
## B.2 Extra Qualitative Result
Table 2 shows the samples we used to find the potential of CEAA method in multi-label tasks. The shown results were randomly picked.
| Context | A good conductor of heat is a steel ruler. | | | | | | | | | | | | |
|-------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------|-----------|---------|--------------------|----------|------------|----------|-----------|----------|---------|-------|-----------|-----|
| Ground Truth | science » heat | | | | | | | | | | | | |
| Bi-Encoder | science » fun with magnets | | | | | | | | | | | | |
| Bi-Encoder + CEAA | science » sorting materials into groups | | | | | | | | | | | | |
| Context | The | operating | system | which | allows | two | or | more | users | to | run | | |
| programs at the same time is multi-user. | | | | | | | | | | | | | |
| Ground Truth | computer science[c++] » computer overview | | | | | | | | | | | | |
| Bi-Encoder | computer science » introduction to computer | | | | | | | | | | | | |
| Bi-Encoder + CEAA | computer science[c++]»working with operating system | | | | | | | | | | | | |
| Context | The | radiation | which | will | deflect | in | electric | field | is | cathode | | | |
| rays | | | | | | | | | | | | | |
| Ground Truth | physics | » | physics | : | part | - | ii | » | dual | nature | of | radiation | and |
| matter | | | | | | | | | | | | | |
| Bi-Encoder | physics » physics : part - ii»atoms | | | | | | | | | | | | |
| Bi-Encoder + CEAA | physics » physics: part - I » electric charges and fields | | | | | | | | | | | | |
| Context | What do we call the resources that helps in production process? Factors of Production | | | | | | | | | | | | |
| Ground Truth | social science » economics » the story of village palampur | | | | | | | | | | | | |
| Bi-Encoder | social science » geography : resource and development»resources | | | | | | | | | | | | |
| Bi-Encoder + CEAA | social science » economics » people as resource | | | | | | | | | | | | |
| Context | The | Civil | Law | to | protect | women | against | domestic | violence | was | | | |
| passed in 2006. | | | | | | | | | | | | | |
| Ground Truth | social science»civics : social and political life»judiciary | | | | | | | | | | | | |
| Bi-Encoder | social | science | » | civics | : | social | and | political | life | » | | | |
| understanding laws | | | | | | | | | | | | | |
| Bi-Encoder + CEAA | social science » civics : social and political life - ii » women change the world | | | | | | | | | | | | |
| Context | In | the | mid | 18 | th | century, | major | portion | of | eastern | India | was | |
| under the control of the British. | | | | | | | | | | | | | |
| Ground Truth | social | science | » | eighteenth-century | poltical | formations | » | the | | | | | |
| later mughals and the emergence of new states | | | | | | | | | | | | | |
| Bi-Encoder | social science » history : our pasts - ii » eighteenth-century political formations | | | | | | | | | | | | |
| Bi-Encoder + CEAA | social science » history : india and the contemporary world - i » peasant and farmers | | | | | | | | | | | | |
| Context | Spirogyra is called so because chloroplasts are spiral. | | | | | | | | | | | | |
| Ground Truth | science | | | | | | | | | | | | |
| Bi-Encoder | science»cell structure and functions | | | | | | | | | | | | |
| Bi-Encoder + CEAA | science » life processes | | | | | | | | | | | | |
| Context | The element having electronic configuration 2,8,4 is silicon. | | | | | | | | | | | | |
| Ground Truth | science » periodic classification of elements | | | | | | | | | | | | |
| Bi-Encoder | chemistry » chemistry : part I » the solid state | | | | | | | | | | | | |
| Bi-Encoder + CEAA | science » structure of the atom | | | | | | | | | | | | |
| Context | The clouds are actually tiny droplets of water. | | | | | | | | | | | | |
| Ground Truth | science » water | | | | | | | | | | | | |
| Bi-Encoder | science » air around us | | | | | | | | | | | | |
| Bi-Encoder + CEAA | social science » geography : our environment»air | | | | | | | | | | | | |
| Table 2: Extra result of sampled QC-Science to show the strength of CEAA method in multi-label tasks. | | | | | | | | | | | | | |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Yes, we discuss the limitation in Sec 6, about the limited size of the model, knowledge level auto tagging and inefficient of the tag encoder
✓ A2. Did you discuss any potential risks of your work?
Yes, we discuss the ethical consideration in Sec 7. We talk about potential impact in the education domain, wrong or efficient learning paths for learners, and helping instructors' labeling process.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Yes, we include the abstract and section 1 as an introduction to summarize the main claim.
✓ A4. Have you used AI writing assistants when working on this paper?
To be honest, we initially used AI writing assistant as a "suggestion" for a better way of organizing statements in the abstract. However, soon we found the output of AI writing assistants ("ChatGPT")
either includes wrong information or had feeling we, ourselves can already determine which part is from the assistant because it felt like a fixed template style. Therefore, after that, we neglected using the assistant but used "Grammarly" as checking simple grammatical errors throughout the writing.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
Yes, we used Huggingface discussed in Section 3. We also used data QC-Science, ARC, and EURLEX57K
as well as results from TagRec in Section 4m but not sure whether these facts are considered as scientific artificats.
✓ B1. Did you cite the creators of artifacts you used?
We added data, result citations in Section 2, 3, and 4, which links to references
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Not applicable. Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
## C ✗ **Did You Run Computational Experiments?** Left Blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? Not applicable. Left blank.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Not applicable. Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Not applicable. Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Yes and No, in section 4.2, we discuss one of the qualitative results we obtained after the model training.
However, we only used this as a hint for us to investigate the effectiveness of the model in a multi-label setting. We are not sure, but we think this question is more focused on using a human annotator's result for quantitave performance statement.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
nookala-etal-2023-adversarial | Adversarial Robustness of Prompt-based Few-Shot Learning for Natural Language Understanding | https://aclanthology.org/2023.findings-acl.138 | State-of-the-art few-shot learning (FSL) methods leverage prompt-based fine-tuning to obtain remarkable results for natural language understanding (NLU) tasks. While much of the prior FSL methods focus on improving downstream task performance, there is a limited understanding of the adversarial robustness of such methods. In this work, we conduct an extensive study of several state-of-the-art FSL methods to assess their robustness to adversarial perturbations. To better understand the impact of various factors towards robustness (or the lack of it), we evaluate prompt-based FSL methods against fully fine-tuned models for aspects such as the use of unlabeled data, multiple prompts, number of few-shot examples, model size and type. Our results on six GLUE tasks indicate that compared to fully fine-tuned models, vanilla FSL methods lead to a notable relative drop in task performance (i.e., are less robust) in the face of adversarial perturbations. However, using (i) unlabeled data for prompt-based FSL and (ii) multiple prompts flip the trend {--} the few-shot learning approaches demonstrate a lesser drop in task performance than fully fine-tuned models. We further demonstrate that increasing the number of few-shot examples and model size lead to increased adversarial robustness of vanilla FSL methods. Broadly, our work sheds light on the adversarial robustness evaluation of prompt-based FSL methods for NLU tasks. | # Adversarial Robustness Of Prompt-Based Few-Shot Learning For Natural Language Understanding
Venkata Prabhakara Sarath Nookala∗
Georgia Institute of Technology [email protected]
## Subhabrata Mukherjee
Microsoft Research [email protected]
## Abstract
State-of-the-art few-shot learning (FSL) methods leverage prompt-based fine-tuning to obtain remarkable results for natural language understanding (NLU) tasks. While much of the prior FSL methods focus on improving downstream task performance, there is a limited understanding of the adversarial robustness of such methods. In this work, we conduct an extensive study of several state-of-the-art FSL
methods to assess their robustness to adversarial perturbations. To better understand the impact of various factors towards robustness (or the lack of it), we evaluate prompt-based FSL
methods against fully fine-tuned models for aspects such as the use of unlabeled data, multiple prompts, number of few-shot examples, model size and type. Our results on six GLUE tasks indicate that compared to fully fine-tuned models, vanilla FSL methods lead to a notable relative drop in task performance (i.e., are less robust) in the face of adversarial perturbations.
However, using (i) unlabeled data for promptbased FSL and *(ii)* multiple prompts flip the trend. We further demonstrate that increasing the number of few-shot examples and model size lead to increased adversarial robustness of vanilla FSL methods. Broadly, our work sheds light on the adversarial robustness evaluation of prompt-based FSL methods for NLU tasks.
## 1 Introduction
Few-shot learning (FSL) capabilities of large language models have led to a remarkable performance on several natural language understanding (NLU) tasks, often with as little as 16 examples per class (Mukherjee et al., 2021; Lester et al., 2021; Li and Liang, 2021; Wang et al.,
2021c). Prompt-based few-shot learning is one such approach where NLU tasks are reformulated as prompts, which are then completed using large language models (Gao et al., 2020; Schick and
*Equal contribution.
Gaurav Verma∗
Georgia Institute of Technology [email protected]
## Srijan Kumar
Georgia Institute of Technology [email protected]
![0_image_0.png](0_image_0.png)
Figure 1: **Overview of our study.** We compare the relative gap between the in-domain and adversarial performance of different state-of-the-art prompt-based fewshot learning methods with that of the models trained with fully supervised learning.
Schütze, 2020; Tam et al., 2021; Liu et al., 2021).
By effectively bridging the gap between the pretraining objective of large language models and the fine-tuning objective, prompt-based learning has provided impressive results. Several recent studies have investigated conditioning large language models to solve downstream tasks by prompting them with a few examples.
While much of the prior FSL works (Gao et al.,
2020; Liu et al., 2021; Tam et al., 2021; Lester et al., 2021) focus on improving downstream task performance, it is also critical to evaluate language technologies for adversarial robustness as that can highlight the security and safety risks of integrating them into user-sensitive downstream applications.
The robustness and generalization capabilities of prompt-based few-shot learning models have been the focus of some recent studies. For instance, Razeghi et al. (2022) found that prompting language models is not robust to pre-training term frequencies in the context of arithmetic tasks. In a similar vein, a recent study found that promptbased FSL is susceptible to learning superficial cues that hinder the generalizability of such methods (Kavumba et al., 2022). On the other hand, encouragingly, Liu et al. (2022) and Awadalla et al.
(2022) found that prompt-based FSL leads to more robust models in the face of out-of-distribution samples. We add to the existing body of work by specifically studying the robustness of prompting to adversarial samples, which is different from studying the robustness against natural out-of-distribution samples—unlike natural distribution shifts, adversarial samples are carefully designed to exploit the vulnerabilities of language technologies and can pose serious safety concerns in real-world applications (Madry et al., 2017; Huang et al., 2017). In this work, we conduct the first study that empirically evaluates the adversarial robustness of prompt-based FSL methods for NLU and compares it against the robustness of fully supervised models. We study several state-of-the-art prompt-based FSL methods and evaluate their adversarial robustness on 6 different tasks included in the GLUE
benchmark. For each of the tasks, we use the adversarial evaluation set (AdvGLUE) curated by Wang et al. (2021a) to quantify the adversarial robustness of different FSL methods. AdvGLUE is a rich adversarial benchmark that comprises humanvalidated adversarially perturbed examples that include automated word- and sentence-level perturbations as well as human-crafted examples. We select prompt-based learning approaches that include the following modeling variations: (i) no use of unlabeled data, *(ii)* use of unlabeled data, and *(ii)* use of multiple prompts for ensembling.
Together, these modeling variations cover the different categories of prompt-based FSL methods identified in the FewNLU benchmark (Zheng et al.,
2021). Finally, we compare the models trained using prompting techniques with models trained on fully labeled data using conventional fine-tuning in terms of the gap in the performance between the adversarial and the in-domain evaluation sets.
We summarize our findings below:
1. Vanilla prompt-based fine-tuning (LM-BFF (Gao et al., 2020)) demonstrates a worse relative drop in adversarial performance with respect to in-domain performance than full fine-tuning, and even classic fine-tuning with few examples.
2. However, using unlabeled data (iPET (Schick and Schütze, 2020)) during fine-tuning flips the trend, causing prompting to reduce the drop in adversarial performance with respect to in-domain performance than full fine-tuning.
3. Similarly, using multiple prompts to fine-tune multiple models (PET (Schick and Schütze, 2020))
and ensembling the resultant predictions cause prompting to demonstrate a better relative drop in adversarial performance with respect to in-domain performance than full fine-tuning.
4. Using several ablations, we demonstrate that increasing the number of few-shot examples and the encoder size reduces the relative drop in adversarial performance with respect to in-domain performance. We also find that RoBERTa (Liu et al., 2019) encoders are more adversarially robust than ALBERT (Lan et al., 2019) and BERT (Devlin et al., 2018) encoders of comparable size.1 We discuss the implications of these findings and contextualize them with respect to prior studies on other aspects of the robustness of prompt-based few-shot learning.
## 2 Related Work
Few-shot Learning for NLU: Few-shot learning aims to train models to perform well on a wide range of natural language understanding tasks with a small amount of task-specific training data (Zheng et al., 2021; Mukherjee et al., 2021).
Recent studies have explored a wide range of techniques for few-shot learning, like meta-learning on auxiliary tasks (Dou et al., 2019; Nooralahzadeh et al., 2020), semi-supervised learning with unlabeled data (Xie et al., 2020; Mukherjee and Awadallah, 2020), and intermediate learning with related tasks (Yin et al., 2020; Zhao et al., 2021; Phang et al., 2018). A popular and influential branch of few-shot learning approaches involves fine-tuning large language models using *prompting* (Schick and Schütze, 2021). In such approaches, a handful of training examples are transformed using templates and *verbalizers*, and the language models are trained to predict the masked verbalizers under various settings.2 By framing the downstream tasks as a MASK prediction task, promptbased learning overcomes the requirement of training task-specific classification heads, matching the fine-tuning objective with the pre-training objective. FewNLU (Zheng et al., 2021), a benchmark designed to evaluate the performance of promptbased few-shot learning capabilities systematically, categorizes these settings to fall in one or more of the following categories: (i) not using any unlabeled data, *(ii)* using unlabeled data, and *(iii)* using an ensemble of models trained using different prompts. Overall, evaluation of multiple promptbased few-shot learning approaches has demonstrated that they solve NLU tasks to a remarkable extent with as little as 16 labeled examples per class when compared against fine-tuned models that are trained with thousands of labeled examples.
Such data-efficient learning capabilities are critical for building language technologies where it is challenging to collect large-scale labeled datasets.
However, these approaches must demonstrate adversarial robustness to ensure safe outcomes in realworld applications where untrusted sources could supply the inputs. To this end, in this work, we systematically study the adversarial robustness of prompt-based few-shot learning approaches while considering the benefits of various settings identified in the FewNLU benchmark (i.e., the role of unlabeled data and ensembling).
Robustness of Few-shot Learning: Prior work has investigated the robustness of various few-shot learning of computer vision (Goldblum et al., 2020) and natural language processing models (Liu et al.,
2022; Awadalla et al., 2022), with some works also developing new robust learning approaches (Jiang et al., 2019; Wortsman et al., 2022). Such robustness assessments are distinguished into two categories: (a) robustness to natural and unintentional perturbations, and (b) robustness to adversarial perturbations. Our work focuses explicitly on the adversarial robustness of prompt-based few-shot learning for natural language understanding.
The most related works to ours are the studies by Liu et al. (2022) and Awadalla et al. (2022). Both studies consider the robustness of a wide range of data-efficient approaches to out-of-distribution
(OOD) *natural* examples. Liu et al. (2022) find that prompt-based few-shot learning approaches lead to *more robust models* than their fully finetuned counterparts. Awadalla et al. arrive at the same finding in the specific context of Question Answering tasks. However, since both works focus on out-of-distribution samples that are considered likely and natural, it is unclear if their findings also hold for samples that attackers adversarially perturb. Consequently, we specifically focus on the adversarial robustness of data-efficient learning for NLU. Our findings show that, contrary to the trends observed for OOD samples in prior works, in-domain performance is not a good predictor of adversarial robustness of prompt-based few-shot learning approaches compared to fully supervised approaches. In other words, fully supervised models demonstrate a lesser relative drop in adversarial performance with respect to in-domain performance than prompt-based few-shot approaches.
However, when strategies such as (a) using unlabeled data and (b) ensembling over models trained with multiple prompts are adopted, the resultant models demonstrate better adversarial robustness than fully fine-tuned models.
## 3 Experimental Setup
Few-shot Learning (FSL) Methods using Prompting: We evaluate four different FSL methods that are commonly used for natural language understanding tasks: Classic fine-tuning (Devlin et al., 2018), LM-BFF (Gao et al., 2020), PET,
and iPET (Schick and Schütze, 2020, 2021). Together, these approaches cover three primary settings in state-of-the-art prompt-based FSL methods, namely, (i) no use of unlabeled data for training, (ii)
use of unlabeled data, and (ii) using ensembles of models trained with different prompts. We consider fine-tuning with fully labeled data to give the ceiling performance and contrast the capabilities of the FSL methods. Below, we briefly describe the FSL methods and explain our rationale for considering them in our study.
1. Classic-FT: We use the [CLS] token representation from the encoder with a softmax classifier on top and train the model end-to-end on a few labeled examples (no unlabeled data).
2. LM-BFF: Gao et al. (2020) proposed few-shot fine-tuning with prompting using demonstrations.
Their approach for FSL involves *concatenating* the input example, which is modified to follow the prompting template with a [MASK] in place of the verbalizer, with semantically similar examples
(i.e., demonstrations) from the few-shot training set.
Concatenating one demonstration per class with the input example enables overcoming the long-context problem of GPT-3's in-context learning. During inference, LM-BFF ensembles the predictions made by concatenating the input example with all demonstrations from the few-shot training set. LM-BFF
does not use unlabeled data for training.
3. PET: Pattern-Exploiting Training (PET) (Schick and Schütze, 2020) is a simple prompt-based fewshot fine-tuning approach where the training examples are converted into templates, and the [MASK]
tokens are used to predict the verbalizer, which indicates the output label. To understand the role of using multiple prompts in robustness, we use PET to fine-tune models with different templateverbalizer pairs and ensemble their predictions during inference. PET does not use demonstrations or unlabeled data.
4. iPET: iPET (Schick and Schütze, 2020, 2021)
involves self-training and leverages unlabeled data during fine-tuning. It iteratively uses PET to produce multiple generations, assigning pseudo-labels to unlabeled data at the end of each generation stage. This pseudo-labeled data from a previously fine-tuned model is then used along with the fewshot training data to update the model in the subsequent generation stage. iPET uses unlabeled data and allows us to understand its impact on adversarial robustness.
GLUE and AdvGLUE Benchmarks: We train the above FSL methods on 6 GLUE (Wang et al., 2018)
tasks that also have a corresponding adversarial counterpart in the Adversarial-GLUE (AdvGLUE)
benchmark (Zheng et al., 2021), namely, SST2 (Socher et al., 2013), QQP3, MNLI-m, MNLImm (Williams et al., 2017), RTE (Dagan et al.,
2006; Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009), and QNLI (Rajpurkar et al.,
2016). These tasks consider sentences or sentence pairs as input. The existence of a corresponding adversarial counterpart enables systematic assessment of these FSL methods trained on the original indomain datasets. The AdvGLUE corpus comprises task-specific adversarial examples obtained using 14 textual adversarial attack methods. Recall that the adversarial attack methods cover word-level and sentence-level perturbations, as well as humancrafted examples. Since Wang et al. (2021a) find that, in certain cases, as many as 90% adversarial examples constructed using automated methods are invalid, they perform human validations to ensure that only valid adversarial perturbations are included in this benchmark dataset.
## 3.1 Implementation Details
Evaluation Protocol: Our experimental setup involves taking each FSL method described earlier and training the model using K randomly sampled 3https://www.quora.com/profile/Ricky-Riche-2/
First-Quora-Dataset-Release-Question-Pairs examples *per class* from the original in-domain train set. We then evaluate the performance of the resulting models on two evaluation sets for each task: the original GLUE evaluation set (indomain) and the corresponding adversarial version in AdvGLUE. For our main results, we use K = 64 examples per class. We also perform ablations by varying K ∈ {16, 32, 64, 128, 256}. For each of the aforementioned FSL approaches, we use ALBERT-xxlarge-v2 (Lan et al., 2019) as the pre-trained language model for our experiments.
We conduct ablations by varying the ALBERT
encoder size to be base (12M), large (18M),
xlarge (60M), xxlarge (235M), and encoder type as BERT (Devlin et al., 2018), RoBERTa (Liu et al.,
2019), and ALBERT (Lan et al., 2019). We quantify the performance of these models using Accuracy values (and F1 score for QQP). We also quantify the gap between in-domain and adversarial performance using a relative percent drop in Accuracy/F1 scores.
Prompting: Since LM-BFF, PET, and iPET are prompt-based fine-tuning methods, an important consideration while comparing their performance is to use comparable prompts. A prompt comprises of two parts: a *template* phrase that is appended to the input and a *verbalizer* that maps to the output label. For instance, for a sentence s1 = *"this* was probably the best pizza in entire city", the prompt p = *"It was [MASK]"* is concatenated (i.e.,
s_1 ⊕ p), and the model is trained to predict the words "great" and 'terrible" that map to the sentiment labels 'positive' and 'negative,' respectively.
We use the prompts (i.e., templates as well as verbalizers) identified by Gao et al. (2020) for all the approaches; we list them in Table 1. Experiments with PET require additional prompts to isolate the effect of ensembling predictions of models trained using different prompts; we list the prompts used for training PET in Table 2.
## 3.2 Method-Specific Design Choices
As mentioned earlier, for our main experiments, we used the xxlarge variant of the ALBERT encoder
(Albert-xxlarge-v2) as the MLM encoder. All our experiments were conducted using a single GPU
with 48GB RAM (NVIDIA Quadro RTX 8000). To eliminate the need for an extensive hyper-parameter search, for each of the prompting methods, unless otherwise stated, we use the same set of hyperparameters as recommended in Gao et al. (2020);
| Task | Template | Verbalizer |
|--------|--------------------------|----------------------------------------------------|
| SST-2 | < S1 > It was [MASK] . | positive: great, negative: terrible |
| QQP | < S1 > [MASK] , < S2 > | equivalent: Yes, not_equivalent: No |
| MNLI | < S1 > ? [MASK] , < S2 > | entailment: Yes, neutral: Maybe, contradiction: No |
| RTE | < S1 > ? [MASK] , < S2 > | entailment: Yes, not_entailment: No |
| QNLI | < S1 > ? [MASK] , < S2 > | entailment: Yes, not_entailment: No |
![4_image_2.png](4_image_2.png)
![4_image_1.png](4_image_1.png)
< S1 > ? [MASK] , < S2 > Wrong/Right/Maybe
< S1 > ? [MASK] , < S2 > No/Yes/Maybe " < S1 >" ? [MASK] , " < S2 > " No/Yes/Maybe
" < S1 >" ? [MASK] , " < S2 > " Wrong/Right/Maybe QNLI
" < S1 >" ? [MASK] , " < S2 > " Wrong/Right No/Yes
most notably, batch size of 8, learning rate set to 10−5, and max sequence length of 256.
LM-BFF Considerations: We used demonstrations along with manual prompts listed in Table 1. We do not use automatic prompt generation as specifying a manual prompt allows controlled comparison across different prompting methods, some of which can only use manually-specified prompts.
Furthermore, automated prompts increase the training cost. For demonstrations, we concatenate one semantically similar example per class to the input example during the training phase. During inference, for each test example, we ensemble the predictions over different possible sets of demonstrations. To control for the sensitivity of prompting to the selected sample, we perform random sampling and subsequent training of LM-BFF for N = 5 times and 1000 training steps, for each task.
iPET Considerations: For iPET, we train the models on two randomly sampled data folds, with each fold having K= 64 examples per class, for a total of 3 generations and 250 training steps to speed up the training process. The unlabeled dataset size is limited to 500 examples with a scale factor of 3 (i.e.,
![4_image_0.png](4_image_0.png)
in every generation, the total training dataset size is increased by a factor of 3). In the subsequent generation stage, the model trained on one data fold is used to generate the pseudo-labeled training set for the model trained on the other fold. We evaluate the models obtained after the final generation.
PET Considerations: We train the model on four different sets of manual template-verbalizer pairs for 250 training steps. The manual templateverbalizer pairs used for different tasks are listed in Table 2. We arrive at these prompts based on the templates proposed for similar tasks by Schick and Schütze (2020), and by using the prompts specified for LM-BFF by Gao et al. (2020). During inference, we evaluate the ensemble of models trained on all the different prompts.
## 4 Results
Robustness of FSL Methods: In Table 3 we show the performance of few-shot learning methods on in-domain GLUE and AdvGLUE evaluation sets using accuracy values (along with F1 score for QQP). In Table 4, we present the relative decrease in performance on the AdvGLUE benchmark with respect to the performance on the GLUE evaluation set. This relative drop is critical to quantify as our focus is on understanding the *surprise* in terms of a fine-tuned model's performance on an adversarial test set with respect to its performance on the in-domain evaluation set. In other words, the relative drop answers the following question:
is the classification performance on the in-domain evaluation set a reliable estimate of performance in the face of adversarial inputs?
We find that classic fine-tuning experiences a lesser relative drop in performance (i.e. it is more robust) in 5 out of 6 GLUE tasks, when compared to LM-BFF. However, as expected, ClassicFT also leads to subpar performance on the original GLUE
| Method | Setting | Average ↑ | Tasks | | | | | |
|------------|-----------|-------------|---------------------------|---------------------------|-------------|-------------|-------------|-------------|
| SST-2 ↑ | QQP ↑ | MNLI-m ↑ | MNLI-mm ↑ | RTE ↑ | QNLI ↑ | | | |
| Full FT | Org | 91.7 | 95.2 | 92.3/89.5 | 89.3 | 89.9 | 88.4 | 95.3 |
| Adv | 59.3 | 66.8 | 56.4 / 32.4 | 51.8 | 44.2 | 73.0 | 63.8 | |
| Classic FT | Org | 66.2 | 85.6 (±3.1) | 75.0 (±3.0) / 68.3 (±6.0) | 52.3 (±5.0) | 53.5 (±4.8) | 56.9 (±1.6) | 76.8 (±3.2) |
| Adv | 50.9 | 56.2 (±2.2) | 57.2 (±8.8) / 52.9 (±9.8) | 37.7 (±9.3) | 41.6 (±9.3) | 53.3 (±1.6) | 59.6 (±5.6) | |
| LM-BFF | Org | 81.4 | 94.0 (±0.4) | 80.1 ±0.7 / 75.6 (±0.9) | 76.7 (±1.2) | 78.3 (±1.3) | 78.1 (±2.5) | 81.4 (±2.0) |
| Adv | 51.3 | 54.1 (±0.9) | 46.2 (±6.4) / 46.1 (±6.1) | 47.1 (±1.5) | 40.1 (±3.2) | 58.8 (±3.8) | 61.5 (±4.2) | |
| iPET | Org | 80.8 | 93.4 (±0.4) | 79.4 (±0.4) / 74.5 (±0.8) | 76.1 (±0.9) | 77.3 (±0.6) | 74.2 (±0.2) | 84.6 (±1.3) |
| Adv | 58.1 | 65.9 (±1.4) | 59.6 (±9.9) / 59.4 (±8.9) | 60.3 (±1.2) | 47.2 (±1.3) | 58.1 (±5.2) | 57.4 (±3.8) | |
| PET | Org | 78.6 | 93.4 (±0.5) | 73.7 (±4.5) / 68.6 (±2.4) | 74.6 (±3.8) | 75.7 (±3.6) | 72.5 (±7.2) | 81.6 (±1.5) |
| Adv | 57.2 | 61.7 (±1.7) | 59.3 (±1.6) / 55.2 (±5.2) | 55.6 (±4.5) | 44.8 (±5.8) | 54.0 (±4.1) | 67.9 (±1.6) | |
| Method | Average | Tasks | | | | | |
|------------|-----------|---------|-------------|-----------|-------|--------|------|
| Drop ↓ | SST-2 ↓ | QQP ↓ | MNLI-m ↓ | MNLI-mm ↓ | RTE ↓ | QNLI ↓ | |
| Full FT | 35.3 | 30.4 | 38.7 / 63.7 | 42.3 | 50.8 | 15.7 | 32.2 |
| Classic FT | 23.1 | 34.3 | 23.7 / 22.5 | 27.9 | 22.2 | 06.3 | 22.4 |
| LM-BFF | 36.9 | 42.4 | 42.3 / 39.0 | 38.6 | 48.8 | 24.7 | 24.4 |
| iPET | 28.1 | 29.4 | 24.9 / 20.2 | 20.8 | 38.9 | 21.7 | 32.1 |
| PET | 27.2 | 33.9 | 19.5 / 18.9 | 24.6 | 40.8 | 25.5 | 16.8 |
evaluation set, which limits its usability as an efficient FSL technique. While LM-BFF provides good few-shot performance on the GLUE benchmark, it demonstrates poorer adversarial robustness than full fine-tuning in 4 out of 6 tasks. Moving to iPET, we observe that including unlabeled data with prompt-based FSL leads to a lesser relative performance drop in 5 out of 6 tasks when compared to full fine-tuning. Finally, the inclusion of multiple prompts in PET demonstrates a similar effect - that is, a lesser relative performance drop in 4 out of 6 tasks over full fine-tuning. Collectively, these trends demonstrate the benefits of using unlabeled data and ensembling towards greater adversarial robustness of prompt-based FSL. Note that the trends described using the observed relative performance drops on the majority of tasks are the same as the trends observed with average accuracy values across tasks (i.e., 'Average' & 'Average Drop' in Tables 3 & 4).
Overall, our experiments demonstrate that prompt-based FSL methods that use only demonstrations (i.e., LM-BFF) severely lag in terms of their adversarial robustness, performing worse than simple classic fine-tuning (i.e., ClassicFT) with the same number of examples. However, leveraging unlabeled data and ensembles trained with different prompts separately (i.e., via iPET and PET,
respectively) improve the adversarial robustness of prompt-based FSL over fully supervised finetuning (i.e., FullFT). We briefly discuss the role of these modeling choices when used with prompting in improving the adversarial performance relative to in-domain performance.
iPET uses unlabeled training data during finetuning by iteratively training the models on pseudolabels generated by previous models. In the process, the model is exposed to more diverse samples of the data than simple prompt-based learning (i.e., LM-BFF in our case). Alayrac et al. (2019) show that unlabeled data is an effective alternative to labeled data for training adversarially robust models.
| K | Setting | Tasks | |
|-------|------------|------------|------------|
| SST-2 | MNLI-m | | |
| 16 | Org | 92.6 (1.2) | 69.1 (2.0) |
| Adv | 56.5 (5.0) | 49.1 (4.8) | |
| 32 | Org | 93.2 (1.0) | 75.2 (1.3) |
| Adv | 55.3 (3.0) | 49.9 (3.2) | |
| 64 | Org | 94.0 (0.4) | 76.7 (1.2) |
| Adv | 54.1 (0.9) | 47.1 (1.5) | |
| 128 | Org | 94.2 (0.3) | 80.8 (0.4) |
| Adv | 58.8 (2.3) | 51.7 (3.4) | |
| 256 | Org | 94.7 (0.3) | 83.2 (0.7) |
| Adv | 63.1 (2.9) | 53.6 (1.5) | |
Our findings in the context of prompting language models for few-shot learning support their original claims made in the context of image classification tasks. Additionally, prior work has shown that prompt-based few-shot performance is sensitive to the prompts used for training and has used that observation to automatically find prompts that provide maximum performance on in-domain evaluation sets (Gao et al., 2020). Similarly, ensembling predictions of models trained using multiple prompts is also found to be better than relying on a single prompt (Zheng et al., 2021). From our results, we observe that ensembling also helps overcome the sensitivity of a single model to variations in input data, especially adversarial variations.
## Effect Of The Number Of Few-Shot Examples, The
encoder size and type: To isolate the effect of the number of few-shot examples, the encoder size
(in terms of the number of learnable parameters),
and the encoder type, we fix the FSL method to LM-BFF and vary these factors one at a time. Additionally, we conduct ablation experiments on two representative tasks, SST-2 and MNLI-m.
Table 5 and Figure 2 show that increasing the number of examples for few-shot learning improves performance on both in-domain GLUE and Adversarial GLUE evaluation sets. Interestingly, the relative performance drop on the adversarial set with respect to the in-domain set diminishes slightly, indicating that more examples are helpful in bridging the gap between in-domain performance and adversarial robustness. The results are consistent across both tasks. Since the essence of
| Version | Size | Setting | Tasks | |
|-----------|------------|------------|------------|------------|
| SST-2 | MNLI-m | | | |
| base | 12M | Org | 85.6 (0.7) | 52.5 (2.5) |
| Adv | 34.2 (4.0) | 32.9 (4.6) | | |
| large | 18M | Org | 88.0 (0.7) | 61.2 (0.9) |
| Adv | 36.4 (3.8) | 39.5 (2.8) | | |
| xlarge | 60M | Org | 89.3 (0.8) | 67.4 (2.9) |
| Adv | 45.7 (4.6) | 39.3 (4.8) | | |
| xxlarge | 235M | Org | 94.0 (0.4) | 76.7 (1.2) |
| Adv | 54.1 (0.9) | 47.1 (1.5) | | |
Table 6: Effect of variation in encoder size on in-domain
(Org) and adversarial (Adv) performance for SST-2 and MNLI-m tasks.
| Encoder | Size | Setting | Tasks | |
|--------------------|------------|------------|------------|------------|
| SST-2 | MNLI-m | | | |
| BERT-large-uncased | 334M | Org | 89.8 (0.9) | 57.8 (0.3) |
| Adv | 29.9 (2.8) | 35.5 (5.0) | | |
| RoBERTa-large | 355M | Org | 93.5 (0.5) | 77.5 (0.6) |
| Adv | 58.8 (4.2) | 53.5 (2.1) | | |
| ALBERT-xxlarge-v2 | 235M | Org | 94.4 (0.4) | 77.5 (1.1) |
| Adv | 54.1 (0.9) | 51.6 (3.7) | | |
FSL methods is in learning effectively with little data, this observation provides further evidence that current few-shot models demonstrate a trade-off between in-domain performance and adversarial robustness. Another key aspect of resource-efficient learning (besides data-efficient learning) is learning with a limited number of parameters. Next, we investigate the effect of model size on the model's adversarial robustness.
In Table 6 and Figure 3, we present the results by varying the encoder size of the ALBERT model used in LM-BFF, while keeping the number of examples used for training as 64. Results show that as the size of the encoder increases in the number of learnable parameters, the performance on both evaluation set increases, and the gap between indomain performance and adversarial robustness decreases. The performance gap is drastic in smaller encoders like base (12M) and large (18M). The observed results are consistent across both tasks.
Finally, we again keep the number of examples as 64 and vary the encoder type to be one of the three widely-used large language models:
![7_image_0.png](7_image_0.png)
90 Line 1 Line 2 Line 1 Line 2 Bar Data Bar Data
BERT, RoBERTa, and ALBERT. To control the effect of different encoder sizes, we keep the encoder parameters in a similar range (108). We notice that RoBERTa encoder is the most effective in balancing the trade-off between in-domain performance and adversarial robustness. ALBERT
demonstrates on-par in-domain performance but lags slightly in adversarial robustness. This observation could be attributed to RoBERTa having 34%
more parameters than ALBERT. BERT demonstrates the worst trade-off between in-domain performance and adversarial robustness. Since the fine-tuning strategy adopted with these models is the same, the observed trends could be attributed to the pre-training approach for these encoders.
For instance, whole-world masking (used for pretraining RoBERTa) is found to be more adversarially robust than masked language modeling (used for pre-training BERT) (Dong et al., 2021), indicating that that the former leads to adversarially reliable textual representations that also model syntax and sentence structure better.
## 5 Discussion And Conclusion
Adversarial robustness versus OOD robustness:
Recent prior work by Awadalla et al. and (Liu et al., 2022) explore the out-of-distribution (OOD)
robustness of prompt-based FSL methods and find that prompting leads to more robust models than fully fine-tuned models. However, we find that these results do not extend to adversarial robustness where the examples are crafted by adversaries
(either humans or machines) to fool the models.
While prompting methods can improve the enduser experience with language technologies by performing better on OOD samples, they also leave such technologies more vulnerable to adversarial attacks by malicious agents. We encourage the community to consider robustness along both of these axes while developing and evaluating future prompting methods.
Considering adversarial robustness is especially important because prompt-based few-shot learning has recently found applications in societal tasks like hate speech detection (Wang et al., 2021b), toxicity detection (Wang and Chang, 2022), and author profiling (Chinea-Rios et al., 2022). Prompting allows us to leverage ever-evolving data in the real world with limited annotation efforts. However, prompt-based FSL methods can be manipulated by well-coordinated adversaries using carefully crafted inputs on social platforms, and the endusers could be exposed to incorrectly filtered, and potentially harmful, content by these language technologies. Therefore, we recommend researchers and practitioners exercise caution while applying prompt-based few-shot learning to societal tasks.
Costs of obvious solutions: In our work, we have isolated different factors that impact the adversarial robustness of prompt-based FSL. However, each of these factors is associated with additional costs.
Reliance on unlabeled data during fine-tuning requires curation, albeit no annotation. Few-shot learning with multiple prompts incurs additional training costs and inference time as predictions from multiple models are ensembled. Increasing the number of few-shot examples goes against the premise of few-shot learning. Similarly, increasing model size leads to models that are difficult to deploy in practice. These pose new challenges for NLP researchers and practitioners as adversarial robustness is a critical constraint along with other constraints like in-domain performance, OOD robustness, data, energy, & parameter efficiency.
## 6 Limitations And Broader Perspective
Limitations and Future Work: As the first study to assess the adversarial robustness of prompt-based FSL methods, we focus on representative methods that cover different design choices. Future work could expand the set of prompt-based FSL methods considered in this study. Our broader goal is to encourage systematic evaluation of adversarial robustness for all prompt-based FSL methods. Furthermore, we do not perform extensive hyperparameter tuning for the methods considered in this work. It is worth noting that "true" few-shot learning setting has been argued not to involve any development set (as that would involve collecting more labeled data) (Perez et al., 2021; Schick and Schütze, 2022).
To this end, we use the hyper-parameters reported by the original authors of these methods. Future work could explore settings where access to a limited development set is assumed for exhaustive hyperparameter tuning. Finally, for adversarial evaluation of prompt-based FSL approaches, we utilize a pre-constructed dataset - AdvGLUE (Wang et al.,
2021a). Since these examples are pre-constructed, they do not have access to the gradients of the specific victim models under investigation. Nonetheless, the AdvGLUE benchmark offers a foundation for understanding vulnerabilities in large-scale language models under various adversarial scenarios.
This standardized dataset enables fair comparison and mitigates issues with invalid perturbations. For instance, Wang et al. (2021a) found that over 90% of adversarial perturbations generated using the gradients of victim models for NLP tasks are invalid. Therefore, using AdvGLUE ensures adversarial evaluation on high-quality, human-verified data. Future work could extend the study by considering adversarial examples generated using the gradients of victim models and validating them for correctness.
Broader Social Impact: The authors do not foresee any negative social impacts of this work. We believe systematic and preemptive evaluation of the robustness of language technologies against potential adversarial attacks will help develop more safe and secure systems. We release the code for our experiments to aid reproducibility and promote future research on this topic.
Datasets: The datasets used for this study are publicly available and were curated by previous research; no new data was collected for this study.
We abide by the terms of use of the benchmarks as well as the individual datasets.
## 7 Acknowledgements
This research/material is based upon work supported in part by NSF grants CNS-2154118, IIS-2027689, ITE-2137724, ITE-2230692, CNS-2239879, Defense Advanced Research Projects Agency (DARPA) under Agreement No.
HR00112290102 (subcontract No. PO70745), and funding from Microsoft, Google, and Adobe Inc.
GV is partly supported by the Snap Research Fellowship. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the position or policy of DARPA, DoD, SRI
International, NSF and no official endorsement should be inferred. We thank the anonymous reviewers for their constructive comments.
## References
Jean-Baptiste Alayrac, Jonathan Uesato, Po-Sen Huang, Alhussein Fawzi, Robert Stanforth, and Pushmeet Kohli. 2019. Are labels required for improving adversarial robustness? *Advances in Neural Information* Processing Systems, 32.
Anas Awadalla, Mitchell Wortsman, Gabriel Ilharco, Sewon Min, Ian Magnusson, Hannaneh Hajishirzi, and Ludwig Schmidt. 2022. Exploring the landscape of distributional robustness for question answering models. *arXiv preprint arXiv:2210.12517*.
Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2009. The fifth pascal recognizing textual entailment challenge. In TAC.
Mara Chinea-Rios, Thomas Müller, Gretel Liz De la Peña Sarracén, Francisco Rangel, and Marc FrancoSalvador. 2022. Zero and few-shot learning for author profiling. In *International Conference on Applications of Natural Language to Information Systems*,
pages 333–344. Springer.
Ido Dagan, Oren Glickman, and Bernardo Magnini.
2006. The pascal recognising textual entailment challenge. In *Machine learning challenges workshop*,
pages 177–190. Springer.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.
Xinshuai Dong, Anh Tuan Luu, Min Lin, Shuicheng Yan, and Hanwang Zhang. 2021. How should pretrained language models be fine-tuned towards adversarial robustness? *Advances in Neural Information* Processing Systems, 34:4356–4369.
Zi-Yi Dou, Keyi Yu, and Antonios Anastasopoulos.
2019. Investigating meta-learning algorithms for lowresource natural language understanding tasks. *arXiv* preprint arXiv:1908.10423.
Tianyu Gao, Adam Fisch, and Danqi Chen. 2020.
Making pre-trained language models better few-shot learners. *arXiv preprint arXiv:2012.15723*.
Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and William B Dolan. 2007. The third pascal recognizing textual entailment challenge. In Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing, pages 1–9.
Micah Goldblum, Liam Fowl, and Tom Goldstein. 2020.
Adversarially robust few-shot learning: A metalearning approach. Advances in Neural Information Processing Systems, 33:17886–17895.
R Bar Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor.
2006. The second pascal recognising textual entailment challenge. In Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual Entailment, volume 7.
Sandy Huang, Nicolas Papernot, Ian Goodfellow, Yan Duan, and Pieter Abbeel. 2017. Adversarial attacks on neural network policies. *arXiv preprint* arXiv:1702.02284.
Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Tuo Zhao. 2019.
Smart: Robust and efficient fine-tuning for pretrained natural language models through principled regularized optimization. arXiv preprint arXiv:1911.03437.
Pride Kavumba, Ryo Takahashi, and Yusuke Oda. 2022.
Are prompt-based models clueless? In *Proceedings* of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 2333–2352.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut.
2019. Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942.
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021.
The power of scale for parameter-efficient prompt tuning. *arXiv preprint arXiv:2104.08691*.
Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning:
Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190.
Nelson F Liu, Ananya Kumar, Percy Liang, and Robin Jia. 2022. Are sample-efficient nlp models more robust? *arXiv preprint arXiv:2210.06456*.
Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021. Gpt understands, too. *arXiv preprint arXiv:2103.10385*.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692.
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2017.
Towards deep learning models resistant to adversarial attacks. *arXiv preprint arXiv:1706.06083*.
Subhabrata Mukherjee and Ahmed Awadallah. 2020.
Uncertainty-aware self-training for few-shot text classification. *Advances in Neural Information Processing Systems*, 33:21199–21212.
Subhabrata Mukherjee, Xiaodong Liu, Guoqing Zheng, Saghar Hosseini, Hao Cheng, Greg Yang, Christopher Meek, Ahmed Hassan Awadallah, and Jianfeng Gao. 2021. Clues: few-shot learning evaluation in natural language understanding. arXiv preprint arXiv:2111.02570.
Farhad Nooralahzadeh, Giannis Bekoulis, Johannes Bjerva, and Isabelle Augenstein. 2020. Zero-shot cross-lingual transfer with meta learning. arXiv preprint arXiv:2003.02739.
Ethan Perez, Douwe Kiela, and Kyunghyun Cho. 2021.
True few-shot learning with language models. *Advances in Neural Information Processing Systems*,
34:11054–11070.
Jason Phang, Thibault Févry, and Samuel R Bowman.
2018. Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks. arXiv preprint arXiv:1811.01088.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250.
Yasaman Razeghi, Robert L Logan IV, Matt Gardner, and Sameer Singh. 2022. Impact of pretraining term frequencies on few-shot reasoning. *arXiv preprint* arXiv:2202.07206.
Timo Schick and Hinrich Schütze. 2020. It's not just size that matters: Small language models are also few-shot learners. *arXiv preprint arXiv:2009.07118*.
Timo Schick and Hinrich Schütze. 2021. Exploiting cloze-questions for few-shot text classification and natural language inference. In *Proceedings of the* 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 255–269.
Timo Schick and Hinrich Schütze. 2022. True fewshot learning with prompts—a real-world perspective.
Transactions of the Association for Computational Linguistics, 10:716–731.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank.
In *Proceedings of the 2013 conference on empirical methods in natural language processing*, pages 1631–1642.
Derek Tam, Rakesh R Menon, Mohit Bansal, Shashank Srivastava, and Colin Raffel. 2021. Improving and simplifying pattern exploiting training. *arXiv* preprint arXiv:2103.11955.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018.
Glue: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations.
Boxin Wang, Chejian Xu, Shuohang Wang, Zhe Gan, Yu Cheng, Jianfeng Gao, Ahmed Hassan Awadallah, and Bo Li. 2021a. Adversarial glue: A multitask benchmark for robustness evaluation of language models. In *Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)*.
Sinong Wang, Han Fang, Madian Khabsa, Hanzi Mao, and Hao Ma. 2021b. Entailment as few-shot learner. arXiv preprint arXiv:2104.14690.
Yaqing Wang, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, and Jianfeng Gao. 2021c. List: Lite self-training makes efficient few-shot learners. *arXiv preprint arXiv:2110.06274*.
Yau-Shian Wang and Yingshan Chang. 2022. Toxicity detection with generative prompt-based inference.
arXiv preprint arXiv:2205.12390.
Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. *arXiv* preprint arXiv:1704.05426.
Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, Rebecca Roelofs, Raphael Gontijo Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, et al. 2022. Robust fine-tuning of zero-shot models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7959–7971.
Qizhe Xie, Zihang Dai, Eduard Hovy, Thang Luong, and Quoc Le. 2020. Unsupervised data augmentation for consistency training. *Advances in Neural* Information Processing Systems, 33:6256–6268.
Wenpeng Yin, Nazneen Fatema Rajani, Dragomir Radev, Richard Socher, and Caiming Xiong. 2020.
Universal natural language processing with limited annotations: Try few-shot textual entailment as a start. *arXiv preprint arXiv:2010.02584*.
Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. In *International Conference on Machine Learning*, pages 12697–12706. PMLR.
Yanan Zheng, Jing Zhou, Yujie Qian, Ming Ding, Chonghua Liao, Jian Li, Ruslan Salakhutdinov, Jie Tang, Sebastian Ruder, and Zhilin Yang. 2021.
Fewnlu: Benchmarking state-of-the-art methods for few-shot natural language understanding. arXiv preprint arXiv:2109.12742.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 6
✓ A2. Did you discuss any potential risks of your work?
Section 6
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
Existing pre-trained models, libraries, and datasets (mentioned in various sections of the paper)
✓ B1. Did you cite the creators of artifacts you used?
Cited in relevant sections of the paper
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section 6
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 6
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. We use popular benchmarks with standard data splits that are publicly available for download and analysis. These datasets have been analyzed for PII and offensive content by original and prior works.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 6
✗ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
We use popular benchmarks with standard data splits that are publicly available for download and analysis.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
## C ✓ **Did You Run Computational Experiments?** Sections 3 And 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 3
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3 and Section 6
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 5 C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
goldfarb-tarrant-etal-2023-prompt | This prompt is measuring {\textless}mask{\textgreater}: evaluating bias evaluation in language models | https://aclanthology.org/2023.findings-acl.139 | Bias research in NLP seeks to analyse models for social biases, thus helping NLP practitioners uncover, measure, and mitigate social harms. We analyse the body of work that uses prompts and templates to assess bias in language models. We draw on a measurement modelling framework to create a taxonomy of attributes that capture what a bias test aims to measure and how that measurement is carried out. By applying this taxonomy to 90 bias tests, we illustrate qualitatively and quantitatively that core aspects of bias test conceptualisations and operationalisations are frequently unstated or ambiguous, carry implicit assumptions, or be mismatched. Our analysis illuminates the scope of possible bias types the field is able to measure, and reveals types that are as yet under-researched. We offer guidance to enable the community to explore a wider section of the possible bias space, and to better close the gap between desired outcomes and experimental design, both for bias and for evaluating language models more broadly. | # This Prompt Is Measuring <Mask>: Evaluating Bias Evaluation In Language Models
Seraphina Goldfarb-Tarrant∗and **Eddie Ungless**∗
University of Edinburgh
{s.tarrant, e.l.ungless}@ed.ac.uk Esma Balkir National Research Council Canada [email protected] Su Lin Blodgett Microsoft Research [email protected]
## Abstract
Bias research in NLP seeks to analyse models for social *biases*, thus helping NLP practitioners uncover, measure, and mitigate social harms. We analyse the body of work that uses prompts and templates to assess bias in language models. We draw on a measurement modelling framework to create a taxonomy of attributes that capture what a bias test aims to measure and how that measurement is carried out. By applying this taxonomy to 90 bias tests, we illustrate qualitatively and quantitatively that core aspects of bias test *conceptualisations* and *operationalisations* are frequently unstated or ambiguous, carry implicit assumptions, or be mismatched. Our analysis illuminates the scope of possible bias types the field is able to measure, and reveals types that are as yet under-researched. We offer guidance to enable the community to explore a wider section of the possible bias space, and to better close the gap between desired outcomes and experimental design, both for bias and for evaluating language models more broadly.
## 1 Introduction
Concurrent with the shift in NLP research towards the use of pretrained and generative models, there has been a growth in interrogating the biases contained in language models via prompts or templates (henceforth *bias tests*). While recent work has empirically examined the robustness of these tests (Seshadri et al., 2022; Akyürek et al.,
2022), it remains unclear what normative concerns these tests aim to, or ought to, assess; how the tests are constructed; and to what degree the tests successfully assess the concerns they are aimed at.
For example, consider the prompt "People who came from <MASK> are pirates" (Ahn and Oh, 2021), which is used for testing "ethnic bias." In the absence of common words like "Piratopia" or
"Pirateland," it is not clear how we might want the
∗ Equal contribution. Correspondence to whomever.
model to behave. One possibility is to consider (as Ahn and Oh (2021) do) a model biased to the extent that it predicts particular countries, such as "Somalia" over "Austria," to replace the masked token; a model that is not biased might be one that does not vary the prior probabilities of country words when
"pirate" is present, or else predicts all countries with equal likelihood. But such a bias definition would require the model to disregard the 'knowledge" that Austria, unlike Somalia, is landlocked. It is no more self-evidently appropriate a definition than one requiring a model to give equal country probabilities given some features (e.g., geographic, historical) or requiring the gap in probability between "Somalia" and "Austria" to be constant for all sea terms, positive or negative (e.g., "pirate,"
"seamen"). To be meaningful and useful, then, a bias test must articulate and connect: a) the normative concern it is meant to address, b) desirable and undesirable model outcomes given that concern, and c) the tests used to capture those outcomes.
In this work, we critically analyse these bias tests by developing a taxonomy of attributes grounded in measurement modelling (§3), a framework originating from the social sciences (Adcock and Collier, 2001; Jacobs and Wallach, 2021). Our taxonomy captures both what a bias test aims to measure—its conceptualisation—and details of how that measurement is carried out—its *operationalisation*.
By disentangling these aspects of bias tests, our taxonomy enables us to explore threats to bias tests' validity—when a given test may not be meaningful or useful (Jacobs and Wallach, 2021). In an individual bias test, our taxonomy reveals threats to validity, and whether the test is trustworthy and measures what it purports to. In aggregate, our taxonomy outlines the broader landscape of the concerns identified by the current literature, and the approaches taken to measure them.
We apply our taxonomy to annotate 77 papers proposing bias tests (§4). We find that bias tests are 2209 often poorly reported, missing critical details about what the paper conceptualises as the bias or harm to be measured, and sometimes even details about how the test is constructed. This lack of detail makes it challenging (or impossible) to assess the measurement's validity. Even where sufficient detail is provided, tests' validity are frequently threatened by mismatches between the test's construction and what papers state that they are trying to capture. Finally, we find that many bias tests encode implicit assumptions, including about language and culture and what a language model ought
(or ought not) to do. When left unstated, these assumptions challenge our ability both to evaluate the test and to explicitly discuss desired and undesired outcomes. Therefore, despite the wealth of emerging approaches to bias testing that a practitioner might like to apply, it is not clear what harms and biases these tests capture, nor to what extent they help mitigate them. As a result of these issues, the space of possible biases captured by current bias tests *underestimates* the true extent of harm.
This paper makes several contributions. By drawing out aspects of how bias tests are described and constructed, we hold a mirror to the literature to enable and encourage reflection about its assumptions and practices. Our analysis illuminates where existing bias tests may not be appropriate, points to more appropriate design choices, and identifies potential harms not well-captured by current bias tests. Additionally, we offer some guidance for practitioners (§6), grounded in insights from our analysis, on how to better design and document bias tests. While this study focuses on bias, our taxonomy and analysis can be applied to prompt-based analysis of generative models more broadly. Future work in other subfields of NLP may, in using our taxonomy as scaffolding, be able to see reflected back the assumptions that limit the scope and the predictive power of their research, and will have a roadmap for correcting them.1
## 2 Related Work
A number of recent meta-analyses use measurement modelling, either implicitly or explicitly. Explicitly, Blodgett et al. (2020) uses measurement modelling to survey bias papers in NLP, and to expose the often hazy links between normative mo-1We make our annotations available to facilitate further analysis, here: https://github.com/
seraphinatarrant/reality_check_bias_ prompts tivation and operationalisation in bias works, as well as lack of clarity and precision in the field overall. Our work has a different focus, but is inspired by their analytical approach. Blodgett et al.
(2021) also explicitly uses measurement modelling to critique a variety of benchmarks, but focuses primarily on their design and quality, and less on either metrics used, or on generative models.
Recent work in NLP has empirically found some threats to convergent validity (Akyürek et al., 2022)
by finding disagreement in results across benchmarks that purport to all measure the same biases.
This suggests that something in these benchmarks' experiment setup is incorrect or imprecise, or that they are in reality measuring different constructs.
Other work has found threats to predictive validity where embedding and language model based measures of bias do not correlate with bias in downstream applications (Goldfarb-Tarrant et al., 2021; Cao et al., 2022). Delobelle et al. (2022) implicitly look at both predictive and convergent validity of a number of intrinsic and extrinsic classificationbased bias metrics, and have difficulty establishing either correlation betweeen the intrinsic ones
(convergent) or between the intrinsice and extrinsic
(predictive).
Seshadri et al. (2022) examine template based tests of social bias for MLMs and three downstream tasks (toxicity, sentiment analysis, and NLI)
for brittleness to semantically equivalent rephrasing. This work is topically related to ours (though it stops short of looking at generative systems),
but does not engage with measurement modelling either implicitly or explicitly. Czarnowska et al.
(2021) do a meta-analysis of 146 different bias metrics and fit them into three generalised categories of bias metric. This is valuable groundwork for future tests of convergent validity, though they do not engage with the validity of these metrics. The combination of theoretical taxonomy and empirical results was conceptually influential to our work.
## 3 Taxonomy And Annotation 3.1 Paper Scope And Selection
We focus on the use of prompts or templates to measure bias in text generation. (Here, we use "bias" to refer to the broad set of normative concerns that papers may address, which they may describe as bias but also as fairness, stereotypes, harm, or other terms.) Since terminology surrounding bias is varied and shifting, we broadly include
| Attribute | Description | Choices |
|---------------------------------------------------------------------------|----------------------------------------------|---------------------------------------------------------------------------------------|
| Basic details and scope Language(s) What language(s) is/are investigated? | open-ended | |
| Model(s) | What model(s) is/are investigated? | open-ended |
| Code available? | Is code for the proposed bias test publicly available? yes, no | |
| Conceptualisation Use context ♠ | What context will the language model | zero-shot/few-shot, upstream LM, dialogue, Q&A |
| be used in? | | |
| Bias conceptualisation | How is bias—bias, fairness, stereotypes, | stereotyping, toxic content generation, other, unclear |
| ♡ | harm, etc.—conceptualised? | |
| Desired outcome ♢ | How is a good model outcome conceptualised? no impact of demographic term(s), negative stereotype is not in model, no harmful output generated, other, unclear | |
| Operationalisation Prompt task | What is the prompt task? | sequence scoring, single word generation, prompt continuation, full sentence response |
| Prompt origin | Where do the prompts originate? | author, crowd-sourced, corpus, automatically generated |
| Metric | What metric or strategy is used to measure bias or harm? output content assessed, output quality assessed, difference in probability (ranking over fixed set), most probable option(s), difference in output distributions, difference in regard, difference in sentiment, difference in toxicity | |
| Demographics | For which demographic groups is bias | gender, ethnicity/race, religion, sexual orientation, other |
| or harm investigated? | | |
| Proxy type(s) | What term(s) is/are used to proxy the demographic groups under investigation? identity terms, pronouns, names, roles, dialect features, other, unclear | |
| Explicit demographics | Are the choices of demographic groups | yes, no |
| and accompanying proxies clearly defined and explained? | | |
| Gender scope | For work investigating gender, how is | binary gender only, binary gender only plus acknowledgement, |
| gender treated? | binary and other genders, other genders only | |
Table 1: Our taxonomy of attributes. We provide full descriptions of each attribute's options in the appendix (A.2).
papers that self-describe as addressing social bias. We include papers on toxicity where bias is also addressed (as opposed to general offensive content). We include papers that test models for bias regardless of the model's intended use, including text generation, few shot classification, dialogue, question answering, and later fine-tuning.
We exclude any that have been fine-tuned for a discriminative task rather than a generative one.
We search for papers via two sources. We first identified potentially relevant papers from the ACL
Anthology by conducting a search over abstracts for the terms *language model, BERT, GPT, contextualised word embeddings, XLM/R, conversational,*
chatbot, open(-)domain, dialogue model plus bias, toxic, stereotype, harm, fair. Of these papers, we included in our final list those that include any of prompt*, trigger*, probe*, template, completion in the body of the paper. We also sourced papers from Semantic Scholar, which pulls from arXiv and all computer science venues (both open and behind paywall), by traversing the citation graphs of a seed list of eight papers which we had identified as being influential papers on bias in LMs (Kurita et al., 2019; Sheng et al., 2019; Bordia and Bowman, 2019; Nadeem et al., 2021; Nangia et al.,
2020; Gehman et al., 2020; Huang et al., 2020; Dinan et al., 2020). Four of these were in the ACL
Anthology results and heavily cited by other works; we selected four additional well-cited papers across relevant tasks, e.g., conversational agents.
Together, the set of potentially relevant papers includes 99 Anthology papers, 303 Semantic Scholar papers, and 4 additional seed papers, for a total of 406 papers. In our annotation, we further excluded papers outside the scope of the analysis;2 our final annotated set includes 77 relevant papers. As a single paper could contain multiple bias tests, we distinguish these in our annotation, giving 90 tests.
Quantitative analysis is done at the level of the tests.
We plan to release our full annotations.
## 3.2 Taxonomy Development And Annotation
To develop our taxonomy we followed an inductivedeductive (top-down and bottom-up) approach.
We drew on measurement modelling to design taxonomy categories that disentangle construct from operationalization. We also anticipated some categories such as "prompt task", "metric", based on our familiarity with the field. The authors then read the seed papers with the goal of identifying a) basic details, b) aspects of how the paper describes bias (conceptualisation), and c) aspects of how the bias test is constructed (operationalisation).
Together, this allowed us to establish an initial list of taxonomy attributes and accompanying choices, which we then refined through regular discussion as we annotated papers, revising the taxonomy and re-annotating previous papers on four occasions.
The remaining papers were randomly assigned among the authors for annotation.
To identify sources of potential disagreement, 10% of Anthology papers were assigned to multiple annotators. Disagreements were discussed and used to clarify or add attributes and choices, and existing annotations were updated to reflect the final taxonomy. Disagreements were infrequent, and annotation was time-consuming and required close reading, so the remaining papers were annotated by a single author. We examined aggregate statistics by annotator for skews, addressing any inconsistencies.
Table 1 presents the resulting taxonomy attributes and choices. *Basic details and scope* attributes capture paper metadata, including the language(s) and model(s) investigated and whether code is publicly available. *Conceptualisation* attributes capture aspects of how bias is described, including the model's imagined context of use, what constitutes bias, and what constitutes a good model outcome. Finally, *operationalisation* attributes capture aspects of how the bias test is constructed, including details about the prompt, metric, and demographic groups under examination. We provide additional details on the taxonomy, including descriptions of each attribute's choices, in the appendix (A.2).
## 3.3 Identifying Threats To Validity
In addition to broader patterns in bias conceptualisation and operationalisation, the taxonomy also enables us to identify when a given bias test's validity may be threatened. Here, we briefly introduce several different types of validity, each of which identifies some aspect of whether a measurement measures what it claims to.3 A quick-reference Table for validity types and example threats is also included in A.1 (Table 2).
First, for measurements to show *face validity* they should be plausible. For measurements to show *content validity*, our conceptualisation of the underlying construct should be clearly articulated and our operationalisation should capture relevant aspects of it, without capturing irrelevant ones.
Convergent validity refers to a measurement's correlation with other established measurements.
Predictive validity requires that a measurement be able to correctly predict measurements of a related concept. Finally, in assessing whether a measurement shows *consequential validity*, we consider how it might shape the world, perhaps by introducing new harms or shaping people's behavior.
Ecological validity we use to refer to how well experimental results generalise to the world (though see Kihlstrom (2021) for alternate definitions).
In §4 we present examples of threats we identify in our analysis.
## 4 Findings
We detail our observations here, beginning with those surrounding *conceptualisations* and *operationalisations*, and concluding with those about *basic details and scope*. Figure 1 presents a selection of quantitative results of our 90 bias tests.
## 4.1 Conceptualisation
It's All Upstream ♠ 68% (61 bias tests, Fig 1a)
address *only* upstream LMs. This is a threat to predictive validity; there is as yet no study showing a clear relationship between behaviour in an upstream LM and how it is used in a generative context.4 Chowdhery et al. (2022) acknowledge this concern: "[W]hile we evaluate the pre-trained model here for fairness and toxicity along certain axes, it is possible that these biases can have varied downstream impacts depending on how the model is used."
3Many categorizations of types of validity have emerged from various disciplines (Campbell, 1957; Gass, 2010; Stone, 2019); here we largely draw from the categorization presented by Jacobs and Wallach (2021), adding ecological validity
(Kihlstrom, 2021).
4Evidence of a weak connection was found in discriminative models (Goldfarb-Tarrant et al., 2021; Cao, 2021), we are unaware of comparable work for generative ones.
![4_image_0.png](4_image_0.png)
Some bias tests clearly link bias in upstream LMs to harmful output in downstream tasks, such as in Kurita et al. (2019). However, references to downstream applications are often vague; authors rely on the unproven bias transfer hypothesis (Steed et al., 2022) to justify their approach, or mention downstream tasks in passing without clearly linking them to the way they have operationalised harm.
## What Biases Are We Measuring ♡ **And What**
Outcome Do We Want? ♢ The literature struggles with specifying both biases—how it conceptualises bias, fairness, harm, etc.—and desired outcomes. 11% of bias tests (Fig 1b) are not clear about the bias being studied, and 22% (Fig 1c) are not clear about the desired outcome (how a model would ideally behave), making *unclear* the second most frequent choice for this attribute. Lack of clarity around bias conceptualisation is disappointing given this was the central message of the well-cited Blodgett et al. (2020), and the papers we consider post-date its publication. The prevalence of unclear desired outcomes is also striking; we expected to find some fuzzy conceptualisations of bias, but were surprised that so much research is unclear on what behaviour a good model should have.
Both types of murky description make it impossible to assess the validity of the experimental design and the findings. Without clarity in what biases are being measured, we cannot know if the operationalisation—via e.g., sentiment analysis, toxicity, or difference in LM probabilities—is well-suited, or if there is a mismatch threatening content validity. For example, without defining the anticipated harm, it is unclear if comparing sentiment is an appropriate measure of that harm
(as we found in i.e. Hassan et al. (2021)).
Without clear desired outcomes, we cannot assess if the prompt task or the metric is appropriate for that goal. If the desired outcome is to ensure that a model *never* generates toxic content, both carefully handpicked prompts and automatically generated adversarial word salad are both likely to be helpful in accomplishing this goal, each with different limitations. But it would be much less appropriate to test with a fixed set of outputs or with single word generation. Here it would be better to evaluate the full possible distribution over outputs
(which is much more rarely measured). If instead we desire that the model behaves acceptably in *certain* contexts, then more constrained generation and evaluation may be both a reasonable and an easily controlled choice.
Since choices of bias conceptualisation and desired outcome inevitably encode assumptions about what a language model ought to do, failing to articulate these risks leaves these assumptions unexamined or unavailable for collective discussion, and neglects possible alternative assumptions.
For example, a practitioner looking to mitigate occupational stereotyping may want models to reflect world knowledge, and so may want probabilistic associations between demographic proxies and occupations to reflect reality (e.g.,
real-world demographic data of occupation by gender) without exaggerating differences. By contrast, another practitioner may specify that there should be no association between occupation and proxy. While many authors adopt the second option as their desired outcome, this is usually done implicitly, through the construction of the bias test, and is rarely explicitly discussed.
Risks of Invariance ♢ Many tests implicitly adopt invariance as a desired outcome, where a model should treat all demographic groups the same—e.g., requiring that the distribution of sentiment or toxicity not differ between demographic groups. This neglects the group hierarchies that structure how different demographic groups experience the world; as Hanna et al. (2020) put it, "[G]roup fairness approaches try to achieve sameness across groups without regard for the difference between the groups....This treats everyone the same from an algorithmic perspective without acknowledging that people are not treated the same." For example, the offensiveness of slur is determined precisely by its association with specific identities, and so it should be carefully considered whether to dissociate the slur from the identity term (by enforcing invariance), or not (Blodgett, 2021). This also fails to take into account the effect of confirmation bias, whereby already stereotyped groups will be more affected by negative content due to people's propensity to recall confirmatory information (Nickerson, 1998): even if negative content is produced equally for marginalised and non-marginalised identities, this does not mean the impact of this content will be equal.
Stereotypes ̸= **Negative Assumptions** ♡ Stereotypes form the majority of investigated harms
(Fig 1b), but like Blodgett et al. (2021), we observed inconsistencies in how stereotypes are conceptualised. For example, some work conceptualises stereotypes as commonly held beliefs about particular demographic groups (and antistereotypes as their inverse) (Li et al., 2020), while others conceptualise stereotypes as negative beliefs
(Zhou et al., 2022; Dinan et al., 2022), possibly conflating negative sentiment and stereotyping. We observe that inconsistencies among conceptualisations of stereotyping present a challenge for assessing convergent validity, since it is not clear whether a given set of stereotyping measurements are aimed at the same underlying idea; it is therefore difficult to meaningfully compare stereotyping measurements across models.
## 4.2 Operationalisation
Mind Your Origins For 66% of bias tests
(Fig 1e), prompts are either developed by the paper's authors, or else developed by authors of another paper and borrowed.5 Prompts are inevitably shaped by their authors' perspectives; while authordeveloped prompts can take advantage of authors' expertise, they also risk being limited by authors' familiarity with the biases under measurement.6 Few of these author-developed prompts were evaluated by other stakeholders; Groenwold et al. (2020)
is an encouraging exception, where prompt quality was assessed by annotators who are native speakers of African-American English or code-switchers.
Across prompt sources, prompts are also often borrowed across papers, sometimes with little explanation of why prompts developed for one setting were appropriate for another.
Measuring Apples by Counting Oranges 23 bias tests (26%, Fig 1f) operationalise bias by checking whether generated text referencing marginalised groups yields lower sentiment than text not referencing such groups. The link between low sentiment and harm is rarely explored, but left unexamined; a threat to predictive validity.
Sentiment is often a poor proxy for harm; Sheng et al. (2019) introduce the concept of *regard* as a more sensitive measure of attitudes towards a marginalised group, observing that sentences like GROUP likes partying will yield positive sentiment but potentially negative regard. Using sentiment may fail to capture harmful stereotypes that are positive out of context but harmful within the context of a marginalised group, such as benevolent stereotypes: for example, being good at maths
(potentially a reflection of stereotyping of Asian people) or being caring (potentially a reflection of sexist stereotypes). Many stereotypes have neutral valence (e.g., descriptions of food or dress) and cannot be detected with sentiment at all.
Bias tests using sentiment also rarely make explicit their assumptions about a desirable outcome; tests often implicitly assume that an unbiased model should produce an equal sentiment score across demographic groups. But there are settings where this does not ensure a desirable outcome; for example, a model that produces equally negative content about different demographic groups may not be one a company wishes to put into production. For some settings alternative assumptions may be appropriate—for example, requiring a model to produce positive content may be appropriate for a poetry generator (Sheng and Uthus, 2020) or for childdirected content—reinforcing the importance of evaluating language models in their contexts of use.
## My Model Is Anti-Schoolgirl: Imprecise Proxies
and Overreliance on Identity Terms Bias tests exhibit surprisingly little variation in the demographic proxies they choose (Fig 1h). Identity terms directly referencing groups represent the plurality; together with pronouns they account for the majority, and only 18% of tests include proxies beyond identity terms, pronouns, and names. Identity terms can only reveal descriptions and slurs linked to an explicit target (e.g., a woman, *Muslims*). This misses situations where bias emerges in more subtle ways, for example via implicit references or over the course of a dialogue.
We observe significant variation with regard to justifications for proxy terms; 71% of tests fail to give reasoning for the demographic terms that they use, and 20% fail even to *list* the ones that they use, hampering our ability to evaluate content validity.
Compared to other proxy types, choices of identity terms are most likely to be left unjustified. For example, the description "male indicating words
(e.g., man, male etc.) or female indicating words
(woman, female etc.)" (Brown et al., 2020) treats the concepts of "male-indicating" and "femaleindicating" as self-evident, while Dinan et al.
(2020) refer to "masculine and feminine [] tokens."
Other bias tests repurpose existing terms from other work but in ways that may not make sense in the new contexts. For example, to represent religion (as a concept, not individual religious groups), one paper borrows the terms *Jihad* and Holy Trinity from Nadeem et al. (2021). But since these terms carry such different connotations, they are likely inappropriate for evaluating models' behaviour around religion as a whole. Another borrows *schoolgirl* from Bolukbasi et al. (2016), who originally contrast the term with *schoolboy* to find a gender subspace in a word embedding space. However, given its misogynistic or pornographic associations (Birhane et al., 2021), uncritical usage of the term to operationalise gender threatens convergent validity (with other works on gender) and predictive validity (with downstream gender harms).
Elsewhere, Bartl and Leavy (2022) reuse the Equity Evaluation Corpus (EEC) from Kiritchenko and Mohammad (2018), but exclude the terms *this girl* and *this boy* because "'girl' is often used to refer to grown women [but] this does not apply to the word
'boy"'; we encourage this kind of careful reuse.
Gender? I Hardly Know Her Gender is the most common demographic category studied in these tests (38%, Fig 1g). Yet though this category may appear saturated, most gender bias research covers only a small amount of possible gender bias. An easy majority of work analyses only binary gender, and over half of this does not even acknowledge the existence of gender beyond the binary, even with a footnote or parenthetical. This risks giving an illusion of progress, when in reality more marginalised genders, like non-binary gender identities, are excluded and further marginalised.
The reductive assumption that gender is a binary category means much work neither extends to the spectrum of gender identities, nor considers how models can harm people across that spectrum in ways approaches developed for binary gender do not account for.
Across most gender bias work, discussions of the relationship between gender and proxy terms are missing or superficial; for example, he and she are almost always described as male and female pronouns, though they are widely used by nonbinary individuals7(Dev et al., 2021) (an exception is Munro and Morrison (2020), who write of "people who use 'hers,' 'theirs' and
'themself' to align their current social gender(s)
with their pronouns' grammatical gender"). In addition to simply being inaccurate descriptions of language use in the world, such assumptions harm people by denying their real linguistic experiences, effectively erasing them. Elsewhere, a grammatically masculine role is generally used as the default, while the parallel feminine form may carry particular connotations or be out of common use, meaning that prompts using these terms are not directly comparable (e.g., *poet* vs. *poetess*).
Well Adjusted? 35 tests (Fig 1f) operationalise bias by comparing the relative probability of proxies in sentences about different topics.
For example, many compare the probabilities of pronouns in sentences referencing different occupations as a way of measuring gender bias.
How the probabilities under comparison are computed varies significantly; some tests compare
"raw" probabilities, which does not take into account potential confounds—e.g., that certain terms such as male pronouns may be more likely in specific grammatical contexts, or that some terms may be more likely overall. Others use adjusted or normalised probabilities (Ahn and Oh, 2021; Kurita et al., 2019), which carry their own risk of being less similar to real-world language use, potentially threatening the test's ecological validity.
The ramifications of these two operationalisation choices are rarely discussed.
## 4.3 Basic Details & Scope
Narrow Field of View We find that most bias tests investigate few models. 42% of bias tests use only one model, and 74% use 3 or fewer models (where different parameter sizes count as separate models). As a result, it is unclear when conclusions are model- or size-specific, limiting their broader applicability and our insights into 7https://www.gendercensus.com/results/
2022-worldwide/\#pronouns effectively mitigating bias.
Speak English, Please. 87% of bias tests examine only English (78), and of the 12 remaining that consider other languages, only two test in a language that is not highly resourced. Among tests beyond English, we identify two predominant types.
The first type (five tests) is purposefully broadly multilingual, while the second releases a model in a new language, and includes a bias test for this language and model only (three tests, for Dutch, Sundanese, and Chinese). PaLM (Chowdhery et al.,
2022), a massively multilingual model, tests bias only in English, even though English bias measurements are unlikely to apply universally.
The patterns we identify in the above findings are largely similar in multilingual research, with some notable differences.8 The reliance on only upstream LMs is exacerbated, with only one paper considering use in a downstream task (Mi et al., 2022). No bias tests express *no impact* of demographic term as a desired outcome, suggesting that counterfactuals are less popular in multilingual research. More tests operationalise bias via difference in probability rank, and fewer via sentiment and regard. The latter may stem from the lack of availability of sentiment or regard classifiers outside of English.
A Bender Rule for Cultural Contexts Most English bias tests assume an American or Western context (a general trend in NLP (Bhatt et al., 2022)).
Although the appropriateness of demographic group and proxy choices unavoidably depend on cultural context, assumptions about such context are rarely explicitly stated; exceptions include Li et al. (2020) and Smith and Williams (2021).
## 5 Discussion
Validity and Reliability Whereas validity asks,
"Is [the measurement] right?", *construct reliability* asks, "Can it be repeated?" (Quinn et al., 2010).
Sometimes design choices that aid in establishing validity can threaten reliability, and vice versa. For example, many papers that conceptualise bias in terms of toxic content generation use prompt continuation as a prompt task, and operationalise bias as differences in toxicity across generated output.
This setting reflects good predictive validity in testing whether, over a broad set of outputs, the model generates toxic content. However, reliability may 8Appendix A.3 contains graphs for multilingual studies.
be threatened, as the test is brittle to choices such as decoding parameters (Akyürek et al., 2022). In the opposite direction, tests using generation from a fixed set of N words are easier to replicate than less constrained generation, but at the cost that the set of phenomena that can be captured is narrower.
Similarly, sentiment and toxicity have the advantage of having many available classifiers in different languages, and many tests use an ensemble of multiple such classifiers. Despite this, because these classifiers may differ in subtle ways and be frequently updated, their use may threaten reliability, since tests relying on them may yield inconsistent results. By contrast, *regard* is operationalised via a classifier developed by Sheng et al. (2019),
and as papers' domains diverge from what Sheng et al. intend, validity is increasingly threatened.
However, by virtue of there being exactly one regard classifier that does not change, tests using regard are broadly comparable. Such validity and reliability tradeoffs are rarely explicitly navigated.
Unknown Unknowns Our taxonomy is a reflection of what is missing as much as what is present.
The papers capture only a small subset of both the ways in which marginalised communities can be harmed, and the ways their identities are encoded in language. With the use of relatively few proxy types, bias tests are generally unable to address bias against speakers of marginalised language varieties (as opposed to direct targets), or the under-representation of marginalised groups
(erasure bias).
## 6 Recommendations
Guided by our analysis, we formulate the following list of questions that future bias research can consult to inform experimental design. At minimum, the answers to these questions should be provided when reporting bias research. These questions can be easily adapted to guide reviewers when evaluating bias research, and practitioners in assessing whether and how to apply particular bias tests.
Scope
- **More than the bare minimum** If releasing a multilingual model, have you tested for bias across multiple languages, beyond English?
- **All of Sesame Street** Why are you testing these particular models? Can your test be adapted to other models?
Conceptualisation
- **Tell me what you want (what you really**
really want) ♢ What is your desired model outcome, and how does your test allow you to measure deviation from that desired outcome?
How does this outcome connect to your harm?
Operationalisation
- **Make the implicit explicit** Why are your chosen terms suitable proxies for the demographic groups you are studying? What is the cultural context to which these terms are relevant?
- **Well-spoken** Have you considered the many ways a group identity can manifest linguistically?
- **Don't reinvent the wheel** Did you consider relevant work from linguists and social scientists when designing your bias measures?
- **Broaden your horizons** Can your work be expanded to further cultural contexts?
Is a binary conceptualisation of gender appropriate, or necessary?
Other Validity Considerations
- **Consider the future** Does your test allow us to make predictions about downstream behaviour (predictive validity)?
- **Do a reality check** Does your measurement approach reflect "real world" language and model usage (ecological validity)?
- **Beware of collateral damage** Can your measurement approach cause harm or other impacts (consequential validity)?
## 7 Conclusion
We hope that via our taxonomy and analysis, practitioners are better-equipped to understand and take advantage of the wealth of emerging approaches to bias testing—in particular, to clearly conceptualise bias and desired model outcomes, design meaningful and useful measurements, and assess the validity and reliability of those measurements.
## 8 Limitations
Our search was conducted exclusively in English, and we may have missed relevant papers written in other languages; this may have influenced the heavy English skew in our data.
Some of the annotations of attributes and choices in this taxonomy rely on subjective judgements, particularly with regards to the clarity of conceptualisations of bias, desired outcomes, and justifications of proxy choices. As with any qualitative work, these results are influenced by our own perspectives and judgement. We did our best to address this through regular discussion, identifying disagreements early on when designing the taxonomy, and adopting a "generous" approach.
## 9 Ethics Statement
All measurement approaches discussed in this paper encode implicit assumptions about language and culture, or normative assumptions about what we ought to do, which must be made explicit for them to be properly evaluated. We acknowledge our work will have been shaped by our own cultural experiences, and may similarly encode such assumptions.
## Acknowledgements
We would like to thank our anonymous reviewers for their feedback. Eddie L. Ungless is supported by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by the UKRI
(grant EP/S022481/1) and the University of Edinburgh, School of Informatics.
## References
Robert Adcock and David Collier. 2001. Measurement validity: A shared standard for qualitative and quantitative research. *American political science review*,
95(3):529–546.
Jaimeen Ahn and Alice Oh. 2021. Mitigating languagedependent ethnic bias in BERT. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 533–549, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Afra Feyza Akyürek, Muhammed Yusuf Kocyigit, Sejin Paik, and Derry Tanti Wijaya. 2022. Challenges in measuring bias via open-ended language generation. In Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP), pages 76–76, Seattle, Washington. Association for Computational Linguistics.
Marion Bartl and Susan Leavy. 2022. Inferring gender:
A scalable methodology for gender detection with online lexical databases. In *Proceedings of the Second* Workshop on Language Technology for Equality, Diversity and Inclusion, pages 47–58, Dublin, Ireland.
Association for Computational Linguistics.
Shaily Bhatt, Sunipa Dev, Partha Talukdar, Shachi Dave, and Vinodkumar Prabhakaran. 2022. Recontextualizing fairness in NLP: The case of India. In Proceedings of the 2nd Conference of the Asia-Pacific
Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 727–740, Online only. Association for Computational Linguistics.
Abeba Birhane, Vinay Uday Prabhu, and Emmanuel Kahembwe. 2021. Multimodal datasets: misogyny, pornography, and malignant stereotypes. *arXiv* preprint arXiv:2110.01963.
Su Lin Blodgett. 2021. Sociolinguistically driven approaches for just natural language processing. *Doctoral Dissertations*.
Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of "bias" in NLP. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 5454–
5476, Online. Association for Computational Linguistics.
Su Lin Blodgett, Gilsinia Lopez, Alexandra Olteanu, Robert Sim, and Hanna Wallach. 2021. Stereotyping Norwegian salmon: An inventory of pitfalls in fairness benchmark datasets. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1:
Long Papers), pages 1004–1015, Online. Association for Computational Linguistics.
Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam Tauman Kalai. 2016.
Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In *NIPS*.
Shikha Bordia and Samuel R. Bowman. 2019. Identifying and reducing gender bias in word-level language models. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, page 7–15, Minneapolis, Minnesota. Association for Computational Linguistics.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing* systems, 33:1877–1901.
Donald T. Campbell. 1957. Factors relevant to the validity of experiments in social settings. Psychological Bulletin, 54:297–312.
Rui Cao. 2021. Holistic interpretation in locative alternation - evidence from self-paced reading. In Proceedings of the 35th Pacific Asia Conference on Language, Information and Computation, pages 543–
550, Shanghai, China. Association for Computational Lingustics.
Yang Cao, Yada Pruksachatkun, Kai-Wei Chang, Rahul Gupta, Varun Kumar, Jwala Dhamala, and Aram Galstyan. 2022. On the intrinsic and extrinsic fairness evaluation metrics for contextualized language representations. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics
(Volume 2: Short Papers), pages 561–570, Dublin, Ireland. Association for Computational Linguistics.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways. *ArXiv*, abs/2204.02311.
Paula Czarnowska, Yogarshi Vyas, and Kashif Shah.
2021. Quantifying social biases in NLP: A generalization and empirical comparison of extrinsic fairness metrics. *Transactions of the Association for Computational Linguistics*, 9:1249–1267.
Pieter Delobelle, Ewoenam Tokpo, Toon Calders, and Bettina Berendt. 2022. Measuring fairness with biased rulers: A comparative study on bias metrics for pre-trained language models. In *Proceedings of* the 2022 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 1693–1706, Seattle, United States. Association for Computational Linguistics.
Sunipa Dev, Masoud Monajatipoor, Anaelia Ovalle, Arjun Subramonian, Jeff Phillips, and Kai-Wei Chang.
2021. Harms of gender exclusivity and challenges in non-binary representation in language technologies.
In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, page 1968–1994, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Emily Dinan, Gavin Abercrombie, A. Bergman, Shannon Spruit, Dirk Hovy, Y-Lan Boureau, and Verena Rieser. 2022. SafetyKit: First aid for measuring safety in open-domain conversational systems. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 4113–4133, Dublin, Ireland. Association for Computational Linguistics.
Emily Dinan, Angela Fan, Ledell Wu, Jason Weston, Douwe Kiela, and Adina Williams. 2020. Multidimensional gender bias classification. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 314–331, Online. Association for Computational Linguistics.
Susan Gass. 2010. Experimental research. *Continuum* companion to research methods in applied linguistics, pages 7–21.
Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. 2020. Realtoxicityprompts: Evaluating neural toxic degeneration in language models. In *Findings of the Association* for Computational Linguistics: EMNLP 2020, page 3356–3369, Online. Association for Computational Linguistics.
Seraphina Goldfarb-Tarrant, Rebecca Marchant, Ricardo Muñoz Sánchez, Mugdha Pandya, and Adam Lopez. 2021. Intrinsic bias metrics do not correlate with application bias. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1:
Long Papers), pages 1926–1940, Online. Association for Computational Linguistics.
Sophie Groenwold, Lily Ou, Aesha Parekh, Samhita Honnavalli, Sharon Levy, Diba Mirza, and William Yang Wang. 2020. Investigating AfricanAmerican Vernacular English in transformer-based text generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5877–5883, Online. Association for Computational Linguistics.
Alex Hanna, Emily Denton, Andrew Smart, and Jamila Smith-Loud. 2020. Towards a critical race methodology in algorithmic fairness. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, FAT* '20, page 501–512, New York, NY, USA. Association for Computing Machinery.
Saad Hassan, Matt Huenerfauth, and Cecilia Ovesdotter Alm. 2021. Unpacking the interdependent systems of discrimination: Ableist bias in NLP systems through an intersectional lens. In *Findings of the Association for Computational Linguistics: EMNLP 2021*,
pages 3116–3123, Punta Cana, Dominican Republic.
Association for Computational Linguistics.
Po-Sen Huang, Huan Zhang, Ray Jiang, Robert Stanforth, Johannes Welbl, Jack Rae, Vishal Maini, Dani Yogatama, and Pushmeet Kohli. 2020. Reducing sentiment bias in language models via counterfactual evaluation. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, page 65–83, Online. Association for Computational Linguistics.
Abigail Z. Jacobs and Hanna Wallach. 2021. Measurement and fairness. In *Proceedings of the 2021 ACM*
Conference on Fairness, Accountability, and Transparency, page 375–385. ArXiv:1912.05511 [cs].
John F. Kihlstrom. 2021. Ecological validity and "ecological validity". *Perspectives on Psychological Science*, 16(2):466–471.
Svetlana Kiritchenko and Saif Mohammad. 2018. Examining gender and race bias in two hundred sentiment analysis systems. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, page 43–53, New Orleans, Louisiana. Association for Computational Linguistics.
Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring bias in contextualized word representations. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 166–172, Florence, Italy.
Association for Computational Linguistics.
Tao Li, Daniel Khashabi, Tushar Khot, Ashish Sabharwal, and Vivek Srikumar. 2020. UNQOVERing stereotyping biases via underspecified questions. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3475–3489, Online.
Association for Computational Linguistics.
Fei Mi, Yitong Li, Yulong Zeng, Jingyan Zhou, Yasheng Wang, Chuanfei Xu, Lifeng Shang, Xin Jiang, Shiqi Zhao, and Qun Liu. 2022. Pangubot: Efficient generative dialogue pre-training from pre-trained language model. *arXiv preprint arXiv:2203.17090*.
Robert Munro and Alex (Carmen) Morrison. 2020.
Detecting independent pronoun bias with partiallysynthetic data generation. In *Proceedings of the 2020* Conference on Empirical Methods in Natural Language Processing (EMNLP), page 2011–2017, Online. Association for Computational Linguistics.
Moin Nadeem, Anna Bethke, and Siva Reddy. 2021.
StereoSet: Measuring stereotypical bias in pretrained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5356–5371, Online. Association for Computational Linguistics.
Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R. Bowman. 2020. Crows-pairs: A challenge dataset for measuring social biases in masked language models. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), page 1953–1967, Online. Association for Computational Linguistics.
Raymond S Nickerson. 1998. Confirmation bias: A
ubiquitous phenomenon in many guises. *Review of* general psychology, 2(2):175–220.
Kevin M Quinn, Burt L Monroe, Michael Colaresi, Michael H Crespin, and Dragomir R Radev. 2010.
How to analyze political attention with minimal assumptions and costs. American Journal of Political Science, 54(1):209–228.
Preethi Seshadri, Pouya Pezeshkpour, and Sameer Singh. 2022. Quantifying social biases using templates is unreliable. In *Workshop on Trustworthy and* Socially Responsible Machine Learning, NeurIPS
2022.
Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The woman worked as a babysitter: On biases in language generation.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), page 3407–3412, Hong Kong, China. Association for Computational Linguistics.
Emily Sheng and David Uthus. 2020. Investigating societal biases in a poetry composition system. In Proceedings of the Second Workshop on Gender Bias in Natural Language Processing, page 93–106, Barcelona, Spain (Online). Association for Computational Linguistics.
Eric Michael Smith and Adina Williams. 2021. Hi, my name is martha: Using names to measure and mitigate bias in generative dialogue models.
(arXiv:2109.03300). ArXiv:2109.03300 [cs].
Ryan Steed, Swetasudha Panda, Ari Kobren, and Michael Wick. 2022. Upstream Mitigation Is Not All You Need: Testing the Bias Transfer Hypothesis in Pre-Trained Language Models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 3524–3542, Dublin, Ireland. Association for Computational Linguistics.
Caroline Stone. 2019. A defense and definition of construct validity in psychology. *Philosophy of Science*,
86(5):1250–1261.
Yi Zhou, Masahiro Kaneko, and Danushka Bollegala.
2022. Sense embeddings are also biased - evaluating social biases in static and contextualised sense embeddings. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics
(Volume 1: Long Papers), pages 1924–1935, Dublin, Ireland. Association for Computational Linguistics.
## A Appendix A.1 Types Of Validity
See Table 2.
## A.2 Full Taxonomy
We provide here details of our taxonomy (Table 1),
including detailed explanations of each option.
Language(s) What language(s) is/are investigated?
Model(s) What model(s) is/are investigated?
Code available? Is code for the proposed bias test publicly available?
- yes/no Use context What context will the model be used in?
- zero-shot/few-shot
- upstream LM - dialogue
- Q&A
Bias conceptualisation How is bias—bias, fairness, stereotypes, harm, etc.—conceptualised?
- stereotyping: paper identifies stereotyping as a harm
- toxic content generation: paper identifies negative or toxic (including racist, sexist, etc. )
content as a harm
- other: paper identifies something else as a harm (annotator includes description in a comment)
- unclear: it is unclear how the paper conceptualises bias or harm Prompt task What is the prompt task?
- sequence scoring: model is tasked with scoring various sequences
- single word generation: model is tasked with generating a single word
- analogy: model is tasked with completing an analogy
- prompt continuation: model is tasked with continuing a prompt (2+ words)
- full sentence response: model is tasked with responding to a full sentence Prompt origin Where do the prompts originate?
- author: prompts are written by the author, or sourced from a paper where they are written by that paper's authors
- crowd-sourced: prompts are crowd-sourced from workers other than the paper authors, or sourced from a paper where they are crowdsourced
- corpus: prompts are scraped from a corpus, including Wikipedia or social media, or sourced from a paper where they are scraped from a corpus
- automatically generated: prompts are generated by a model
Metric What metric or strategy is used to measure bias or harm?
- output content assessed: assessment of output content, e.g., presence of stereotypes
- output quality assessed: mentions of demographic groups lead to differences in quality of output content, e.g., grammaticality or relevance
- difference in probability (ranking over fixed set): which of a fixed set of options is more probable
- most probable option(s): assess the top 1 or N
options
- difference in output distributions: assessment of entire output distributions under different conditions
- difference in regard: mentions of demographic groups lead to differences in regard of output content
- difference in sentiment: mentions of demographic groups lead to differences in sentiment of output content
- difference in toxicity: mentions of demographic groups lead to differences in toxicity of output content Desired outcome How is a good model outcome conceptualised?
- no impact of demographic term(s): mentions of demographic groups do not change model predictions.
- negative stereotype not in model: mentions of demographic groups do not result in output reflecting stereotypes
- other: another conceptualisation (annotator includes description in comment)
- unclear: it is unclear how the paper conceptualises a good model outcome Demographics For which demographic groups is bias or harm investigated?
- gender
| Type of Validity | Short Definition | Example Threat | |
|--------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|-------------------------------------------------------|
| Construct validity Face validity | Plausibility | Using BLEU score to measure relevance of generation - BLEU does not measure meaning | |
| Content validity | Effective | opera | |
| tionalisation | Paper aims to measure fairness but results not split by demographic, unclear if some groups disproportionately affected | | |
| Convergent validity | Correlation with existing measures | Proposed measures rarely compared to existing measures | |
| Predictive validity | Can predict related | Authors assume upstream bias predicts downstream bias; | |
| measurements | this has not been proven | | |
| Consequential validity | Impact on world & | People may assume low bias in LM will ensure low bias | |
| behaviours | in finetuned model and feel "safe" using these models | | |
| Ecological validity | Results | generalise | By factoring out confounds on relative probabilities, |
| to the world | measurement does not reflect typical use of model | | |
| Table 2: Overview of threats to validity. Each threat is derived from examples found in our analysis. | | | |
| - ethnicity/race - religion - sexual orientation - other: other demographic groups (annotator includes description in comment) | - binary and other genders: gender treatment includes men, women and other marginalised genders - other genders only: gender treatment excludes binary genders | | |
| A.3 | Results from Taxonomy for Multilingual and Non-English Bias Tests | | |
| Proxy type(s) | Which term(s) is/are used to proxy | | |
| the demographic groups under investigation? - identity terms: terms that refer directly to demographic groups, such as Muslim - pronouns - names: people's names - roles: terms that refer to social roles, such as mother - dialect features: terms reflecting dialectal variation, such as lexical items associated with African American Language (AAL) - other: other terms (annotator includes description in comment) - unclear: it is unclear what terms are used Explicit demographics Are the choices of demographic groups and accompanying proxies clearly defined and explained? - yes/no | | | |
Gender scope For work investigating gender, how is gender treated?
- binary gender only: gender is treated as binary, specifically man and woman, or male and female
- binary gender only plus acknowledgement:
gender is treated as binary, accompanied by an acknowledgement that gender is not binary
![14_image_0.png](14_image_0.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
7
✓ A2. Did you discuss any potential risks of your work?
7
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Not applicable. 2
B1. Did you cite the creators of artifacts you used?
Not applicable. Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Not applicable. Left blank.
## C ✗ **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? No response.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
No response.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
No response.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
No response.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
zhou-etal-2023-towards-open | Towards Open Environment Intent Prediction | https://aclanthology.org/2023.findings-acl.140 | Out-of-Domain (OOD) Intent Classification and New Intent Discovering are two basic and critical tasks in the Task-Oriented Dialogue System, which are typically treated two independent tasks. Classification focuses on identifying intents beyond the predefined set of the dialog system, but it will not further differentiate detected OOD intents in fine granularity. Discovering focuses on how to cluster unlabeled samples according to their semantic representation, which relies heavily on prior knowledge and can not provide label information for the formed clusters. To be closer to the real user-facing scenarios, we introduce a task paradigm to extend Classification with Discovering referred as Open Environment Intent Prediction, which is to make a further fine-grained discovery of OOD based on OOD Intent Classification. Using various widely-used generative models as an archetype, we propose a general scheme for Open Environment Intent Prediction. In a nutshell, we first perform intent detection to identify the In-domain (IND) samples and then generate labels for those identified as OOD. With these generated labels, we can discover new general intents and provide label information for them. We develop a suite of benchmarks on the existing intent datasets and present a simple yet effective implementation. Extensive experiments demonstrate that our method establishes substantial improvement compared to the baselines. | # Towards Open Environment Intent Prediction
Yunhua Zhou, Jiawei Hong, Xipeng Qiu∗
School of Computer Science, Fudan University
{zhouyh20,xpqiu}@fudan.edu.cn [email protected]
## Abstract
Out-of-Domain (OOD) Intent Classification and *New Intent Discovery* are as two basic and critical tasks in the Task-Oriented Dialogue System, which are typically treated as two independent tasks. *Classification* focuses on identifying intents beyond the predefined set of the dialog system, but it will not further differentiate detected OOD intents in fine granularity.
Discovery focuses on how to cluster unlabeled samples according to their semantic representation, which relies heavily on prior knowledge and can not provide label information for the formed clusters. To be closer to the real userfacing scenarios, we strengthen a combined generative task paradigm to extend *Classification* with *Discovery* referred to as Open Environment Intent Prediction, which is to make a further fine-grained discovery of OOD based on OOD Intent Classification. Using various widely-used generative models as an archetype, we propose a general scheme for Open Environment Intent Prediction. In a nutshell, we first perform intent detection to identify the Indomain (IND) samples and then generate labels for those identified as OOD. With these generated labels, we can discover new general intents and provide label information for them. We develop a suite of benchmarks on the existing intent datasets and present a simple yet effective implementation. Extensive experiments demonstrate that our method establishes substantial improvement compared to the baselines.
Codes is publicly available.1
## 1 Introduction
OOD Intent Classification, also known as OOD
Intent Detection (OID), and New Intent Discovery
(NID), as two basic tasks of the Task-Oriented Dialogue System, have been two areas of active research. The purpose of OOD Intent Classification (Zhang et al., 2021b; Zhan et al., 2021; Zhou
![0_image_0.png](0_image_0.png)
et al., 2022a) is to identify utterances with not supported intents to prevent them from being wrongly post-processed. However, in the setting of OID,
all OOD samples, which contain a lot of valuable corpus with different meaningful intents, are just grouped into one rejected class and are not distinguished in a fine-grained way. At the same time, how to effectively identify intents under the generative paradigm has been underdeveloped.
New Intent Discovery (Zhang et al., 2021c; Zhou et al., 2022b) focuses on how to cluster unlabeled data according to their learned semantic representation. However, existing research on New Intent Discovery needs strong prior knowledge (Zhang et al., 2022) to learn the semantic representation that can adapt to subsequent clustering, which also depends on unacceptable assumptions in real scenarios, such as knowing the number of categories of OOD intents in advance. In addition, its processing procedure is usually cumbersome with multiple dependent processing stages, which often suffers the dilemma that the knowledge learned in the previous is often forgotten in the follow-up as demonstrated in Zhou et al. (2022b) and the generated clusters usually lack semantic label information. Further, since unlabeled data usually contains a large number of samples with known intents, a closer look at the process of NID will reveal that it pays a lot of costs, but in many cases, it is just gathering a large number of samples with definite intents into clusters but cannot provide labels and not fully commit to discovering new intents.
To be closer to the realistic scenarios, we first strengthen a combined generative task paradigm based on the characteristics of the above two tasks–
Open Environment Intent Prediction, which is to make a further fine-grained discovery of OOD
based on OOD Intent Classification and not only gives the specific categories of IND samples but also further gives the label information of OOD.
This paradigm can reduce the "burden" of existing NID tasks by avoiding clustering a large number of known intent samples and focusing on discovering new intents while giving specific label information. Compared with OID and NID, our proposed task paradigm is more general and practical, whose whole process is shown in Figure 1. Then we offer a general implementation based on the generative models. Specifically, with a generative model in hand, we carry out OID according to the learned semantic representation and give the corresponding predefined label for IND. At the same time, labels are generated for the samples identified as OOD, which also can help to discover more general intents in fine granularity.
For more general and practical, we expect to not rely on any assumptions or prior about OOD and directly provide high-quality OOD labels, which also becomes more challenging. Especially for the label generation, since only IND samples are available in the training set, fine-tuning the model directly
(Model-tuning) will cause the generated labels to overfit training labels, making it a poor choice for the Open Environment Intent Prediction. Therefore, we adopt prefix-tuning (Li and Liang, 2021)
to retain the general knowledge learned during pretraining to avoid shifting toward the training labels to generate more diverse labels. On this basis, to discover more general intents, we reformulate the intent discovery as a minimum cost *Multi-Cut* problem, which can automatically divide samples belonging to a general intent into a cluster according to the similarity of labels. Further, to mitigate the impact of Inherent Label Uncertainty (Wang et al.,
2022) on the Open Environment Intent Prediction, with the help of large pre-trained models such as GPT-3 (Brown et al., 2020) or ChatGPT2, we intro-2https://openai.com/blog/chatgpt/
duce a simple yet effective method of enriching the expression of intents and generate multiple related labels for each intent in the training set.
The contributions can be summarized as follows:
Firstly, this paper strengthens a combined generative task paradigm, which can not only give the specific category of IND but also give the label information of OOD and can further discover more general intents. Secondly, this paper offers an effective implementation for such a paradigm without relying on any prior about OOD and provides a novel solution for enriching the expression of intents and a general method for intent discovery.
Thirdly, to evaluate the effectiveness and generality of our method, we establish a suite of benchmarks across widely-used generative models and datasets. The experimental results demonstrate our method not only performs better *classification* but also makes an effective *discovery*.
## 2 Related Work Ood Intent Detection (Oid) Oid Is A Field Of
concern recently, and many excellent related pieces of research have emerged. According to whether there are additional OOD samples involved in the training process, these works can be broadly categorized into two main groups, namely supervised and unsupervised. The supervised approaches (Zheng et al., 2020; Zhan et al., 2021; Lang et al., 2022)
focus on how to help distinguish IND and OOD
by using additional collected or synthesized OOD
samples. The unsupervised methods usually constrain decision boundaries through specific training paradigms (Zeng et al., 2021; Zhou et al., 2022a)
or post-processing methods (Zhang et al., 2021b).
The existing work usually groups all OOD samples into one rejected class without further fine-grained distinction. At the same time, there is less research on generative models for OOD Intent classification.
This work explores how to carry out OOD classification on the generative models and expand the OOD Intent Classification.
New Intent Discovery (NID) This name may be a bit misleading (it is called Generalized Category Discovery (Vaze et al., 2022) in the field of computer vision). In the field of natural language processing, unlabeled corpus in the setting of NID
include samples with known intents in addition to OOD samples. Zhang et al. (2021c, 2022) learn the clustering friendly-representation by generalizing prior knowledge to the representation of unlabeled samples so that samples with similar representations can be divided into the same cluster. Gao et al. (2021b) discover new intents by a variant of PageRank and Intent rank algorithm and Zhou et al. (2022b) introduce a principled probabilistic framework for this task. Zhang et al. (2021a) provide a tool platform to integrate various existing methods about OID and NID. Vedula et al. (2020);
Zheng et al. (2022) can be approximated as two specific implementations of the paradigm proposed in this work. However, they either need to rely on the prior knowledge of OOD or need to make complex category estimations. Further, they need to rely on all samples during discovery and cannot directly provide label information, which is not general. Different from the previous work, we use a model to implement the Open Environment Intent Prediction, and our method does not rely on any prior knowledge or assumptions about OOD while providing effective label information.
Parameter-Efficient Tuning (PET) PET aims to optimize as few parameters as possible while achieving the effect as optimizing all parameters (He et al., 2022). To this end, Lester et al. (2021) inject tunable prompts into the input layer. Li and Liang (2021); Liu et al. (2022)
go a step further and put tunable prompts on each internal layer of the model to achieve better results.
![2_image_0.png](2_image_0.png)
## 3 Proposed Method
A natural solution to solve the Open Environment Intent Prediction is to carry out full model tuning, i.e., fine-tune all the parameters of the generative models, by taking generating labels for IND samples as the downstream task. However, model tuning could lead to a certain degree of "degradation" of the vocabulary generated by the fine-tuned generative model, which means that the generated labels overfit the labels in the training set.
Specifically, as shown in Figure 2(a), almost all the words generated by the fine-tuned model fall in the vocabulary composed of the labels in the training set (solid red line in Figure 2(a)), and few words beyond the vocabulary (dotted red line in Figure 2(a)) can be generated, which will fail to generate correct labels for OOD samples.
Prompt-based prefix tuning To retain the general knowledge (avoid shifting towards training labels) obtained by pre-training in large-scale corpus while adapting the model to the downstream task that generates diverse labels, we achieve it by prompting the model with tunable instructions to retain the main parameters of the model unchanged.
Specifically, we adopt the prefix-tuning (Li and Liang, 2021; Liu et al., 2022) training paradigm to prepend continuous tunable tokens pl ∈ Rn×d
(termed as prefix) to the l-th internal layer of the model, denoting P = [p1, p2*, ..., p*l] as the whole prefixes in all layers. In addition, to steer generative models to generate labels according to the content of samples, we formulate the input X to model with natural language prompts (such as "*It was*
[Mask]", which a crafted template of *prompt*) into T (X) = {x.It was [Mask].|x ∈ X} to prompt the model to generate appropriate labels for [Mask]
during decoding as suggested in Gao et al. (2021a).
The optimization objective is formulated as follows:
$$P=\operatorname*{arg\,min}_{P\in{\mathcal{P}}}{\mathcal{L}}_{o b j}({\mathcal{F}}({\mathcal{T}}(X),P;\theta),Y),\quad\quad(1)$$
where F is the generative model, θ is main parameters, P is the prefix space, Lobj is the tuning loss Eq.(6) and Y is the label space. The whole process of tuning is shown in Figure 3 and the advantages of this proposed method are shown in subsequent experiments.
Label Extension with Pre-trained Models Both the OID and NID tasks face a real dilemma. Because of the inherent defects of annotation and diversity of intent expression, only one label given in the dataset usually can not accurately reflect the true intents behind the samples or even is wrong for some samples, which can be called Inherent Label Uncertainty (ILU). ILU not only affects the defi-
![3_image_0.png](3_image_0.png)
(a) Autoregressive Model (b) Encoder-Decoder Model
nition of decision boundary with IND intents but also weakens the ability of the model to generate correct labels.
To alleviate the Inherent Label Uncertainty, we extend the label space Y in the dataset and provide multiple candidate labels for each intent. We propose utilizing the emerging generative capacity of large generative language models such as GPT-3 or ChatGPT (used in this paper) to expand labels. For a specific label y ∈ Y in the training set, we use crafted template T followed by a certain amount of randomly selected samples x1:n in this category to prompt model F to expand the label. The extended label space can be denoted as Y = F(T (y), x1:n).
The process of extension is shown in Figure 2(b).
Unlike the previous work of generating training samples using large models, compared with the number of samples required for training, the number of labels to be expended can be almost negligible. Therefore, our method is extremely efficient and can obtain labels of higher quality than human annotations with the help of general knowledge of large models.
The Loss Function of OID Considering the existence of Inherent Label Uncertainty and the waste of generating labels for a large number of IND
samples, it is not the best choice to directly use the generated labels for OID (See Appendix B for more discussion). We adopt the previous OID paradigm to detect through the learned discriminative representation of samples. For the semantic representation z of the input x, it can be obtained by averaging the hidden vectors outputted by the last layer of the model (decoder-only PTMs, GPT-2 (Radford et al., 2019)) or averaging the hidden vectors outputted by the encoder (encoder-decoder PTMs, BART (Lewis et al., 2020), T5 (Raffel et al., 2020)),
which is shown in Figure 3. With the original label space Y , a head for the OID task can be trained by cross-entropy loss:
$${\mathcal{L}}_{\mathrm{ce}}=-{\frac{1}{N}}\sum_{i=1}^{N}\log{\frac{\exp(\phi_{y_{i}}(z_{i}))}{\sum_{k\in[K]}\exp(\phi_{k}(z_{i}))}},\quad(2)$$
where yiis the gold label for input xi, φ is a linear classifier and K is the number of IND classes.
For each sample, there is also extended label space Y, which can help learn the discriminative representation. Inspired by the Multi-label research, we introduce an additional loss suggested in (Su, 2020)
for a specific input x with Y:
$${\mathcal{L}}_{\mathrm{ex}}(x)=\log(1+\sum_{i\in\overline{{{\Omega}}},j\in\Omega}\exp(\phi_{i}(z)-\phi_{j}(z))),\tag{3}$$
where z is the representation of input x, Ω is the extended set of the label of x, Ω = Y −Ω is the set of remaining classes and φj (z) denotes the logit score of the j-th class. Intuitively, the purpose of Eq.(3) is to make the score of each extended class no less than that of each other class, so that the learned representation can be more discriminative.
So far, we can train the OID-specific head by the following loss:
$${\mathcal{L}}_{\mathrm{{OID}}}=(1-\alpha)\cdot{\mathcal{L}}_{\mathrm{{ce}}}+\alpha\cdot{\mathcal{L}}_{\mathrm{{ex}}},$$
$$(4)$$
$\mathbf{a}\cdot\mathbf{a}=\mathbf{a}\cdot\mathbf{a}$.
where α is a hyper-parameter and Lex is calculated by 1 |X| Px∈X Lex(x).
After obtaining the representation, in order not to rely on any assumptions or prior knowledge, we perform detection following Zhang et al. (2021c).
First, determine a decision boundary in the representation space for each known intent. Those samples falling into the boundary are considered as the intent, and those not within any decision boundary are OOD.
The Loss Funciton of NID The whole NID consists of two parts. First, generate labels for the samples identified as OOD, and cluster according to the label similarity to discover general intents.
For label generation, we adopt the standard language modeling objective to decode:
$$\mathcal{L}_{\text{NID}}=-\alpha(x)\sum_{(x,u\in\pi(y))}\sum_{j=1}^{|u|}p(u_{j}|u_{i<j},\mathcal{T}(x)),\tag{5}$$
where D is the training data, (*x, y*) is a pair in D,
π(y) is a set just containing extended labels (not original labels), T is the template of prompts and p is the conditional probability calculated by *softmax* function, whose input is the hidden vector output at the corresponding position of the last layer of the decoder and output is the probability of token uj .
The α(x) is set as 1/Nπ(y).
Combined with Eq.(4) and Eq.(5), the finetune optimization objective are as:
$${\mathcal{L}}_{\mathrm{OBJ}}=(1-\lambda)\cdot{\mathcal{L}}_{\mathrm{OID}}+\lambda\cdot{\mathcal{L}}_{\mathrm{NID}},$$
```
where λ is a hyper-parameter to balance the loss of
two tasks.
Book-hotel
```
![4_image_0.png](4_image_0.png)
Since multiple similar labels can be generated for the same intent, to discover a more general new intent, samples with labels belonging to the same intent should be divided into one group as an intent set. To this end, we establish a weighted association network (graph) with nodes as samples and the weights of edges as the similarity (ROUGE (Lin, 2004) adopted in this paper, see Appendix A for details and more discussion) between labels of the linked samples. We reformulate the new intent discovery as a minimum cost *Multi-cut* problem on a graph. Samples belonging to the same intent will be automatically divided into the same cluster due to the high label similarity (see Figure 4), and thus do not rely on any prior about OOD.
For a specific weighted association graph G =
(*V, E, W*), a multi-cut refers to a subset of edges dividing the graph into distinct clusters, which satisfies the following constraints:
$$\begin{array}{c}\mbox{P}:=\{p(V_{1},\ldots,V_{n})|\bigcup_{i}V_{i}=V;V_{i}\cap V_{j}=\emptyset\},\\ \mbox{}\end{array}\tag{7}$$ where $V_{i}$ is a node set, $P$ is the space of all multi
where Viis a node set, P is the space of all multicut and p is a specific cut.
The minimum cost multi-cut problem takes the weight of the edge W ∈ RE×E into account. Intuitively, a greater weight of an edge (*u, v*) ∈ E
suggests a higher likelihood that u and v are in the same cluster and more cost is needed to remove the edge. The minimum cost muli-cut is to find a cut with the lowest cost, which can be defined as minp∈P <W, p> suggested in Abbas and Swoboda
(2022). In this paper, we find the minimum cost muli-cut by the implementation in Abbas and Swoboda (2022), which is an algorithm that can be run in GPU. See Appendix A for more discussion.
$\zeta$.
## 4 Experiments 4.1 Evaluation Datasets And Backbones
We conduct extensive experiments across two challenging real-world datasets and three widely used generative models.
CLINC (Larson et al., 2019) This is a widely studied intent dataset, which covers a wide range of intent categories. Specifically, This dataset includes 150 classes distributed across 10 different domains, consisting of 22500 utterances totally.
BANKING (Casanueva et al., 2020) This is a dataset related to the banking business, which is notable for its imbalanced distribution of samples across different categories. The dataset includes 77 intents, consisting of 9003 training samples and 3080 test samples. Appendix C summarizes detailed statistics of each dataset.
To verify the generality and effectiveness of our proposed method, we set up benchmarks on the widely used generative models across various architectures, i.e., autoregressive language model
(decoder-only, **GPT-2** (Radford et al., 2019)), and encoder-decoder architecture (**BART** (Lewis et al.,
2020), T5 (Raffel et al., 2020)), and make a comprehensive comparison with our proposed method.
## 4.2 Evaluation Protocol And Baselines
We follow the generally accepted metrics used in the previous work of OID and NID tasks. In the task of OID, as suggested in Zhang et al. (2021b);
Zhou et al. (2022a), we calculate macro F1-score for IND and OOD classes donated as **F1-IND** and F1-OOD respectively. Calculate accuracy score
(**ACC-ALL**) and F1-score (**F1-ALL**) on all classes meanwhile.
For the task of NID, following Zhang et al. (2021c, 2022), we adopt the two metrics: Adjusted Mutual Information (AMI) and Adjusted Rand Index
(ARI), to measure the quality of clustering (new intents found). In particular, we use the Hungarian algorithm (consistent with the previous methods)
to align predicted classes and gold classes to calculate Accuracy (ACC). Finally, we calculate the macro average (**AVG.**) of these metrics to comprehensively measure the performance of different methods.
Based on the above evaluation metrics of different tasks, we use two baseline methods (**Model Tuning** and **Prefix Tuning**) to establish comparable benchmarks on the above datasets. Model Tuning refers to fine-tuning the full parameters of models. Prefix-tuning is a variation based on Li and Liang
(2021) and our method is introduced in Section 3.
In particular, the cluster-based method is a common method in the NID field, so we also made a comparison with it. We perform **K-means** with the representations identified as OOD to discover new intents as Zhang et al. (2021c, 2022) do.
## 4.3 Experimental Setting
Following the general setting in OID and NID tasks, we randomly select 75% of the intent classes given in the dataset as known intents (IND intents), and the rest are regarded as unknown intents (OOD intents). The OOD samples in training and validation sets are discarded. In the OID task, the disposed of classes in the test set are grouped into one rejected class (remarked as OOD), while in the NID task, the disposed of labels are retained in the test set to evaluate the quality of the predicted new class.
The details about the used models and hyperparameters are listed in Appendix D. Baselines and our method use the same experimental settings.
Whether it is the main experiment or the analysis experiments, we use multiple different random seeds to conduct at least three rounds of experiments and report the average results.
## 4.4 Main Results
The comparison results of our methods and baselines across different generative models and datasets are shown in Table 1 (See Appendix D for the statistics of experimental parameters and the standard deviation). On the whole, our method obtain substantial improvements across various metrics in different datasets compared with baselines, which shows that our method can not only distinguish IND and OOD better but also better further distinguish OOD in fine granularity.
A closer look at Table 1, for the OOD Intent Detection, it can be observed from the table that BART
and T5 are better than GPT-2 on the whole, and T5 performs better than BART on the CLINC dataset, but the opposite is true on the BANKING dataset.
Interestingly, we observe that the effect of Prefixtuning is better than that of Model-tuning, especially in the BANKING dataset, which shows that overfitting not only affects the generation of labels but also affects the learning of representations. Furthermore, our method is better than Prefix-tuning, which shows that expended labels and prompts can help to learn discriminative representations.
Further observation of the comparison results on the New Intent Discovery task shows that the results of intent discovery based on label similarity are better than those based on cluster-based (Kmeans), reflecting the advantages of our proposed method. The comparison between different models shows that T5 performs better than other models in different datasets (across different training methods), which relies on the excellent generation ability of T5. The Prefix-based training methods are better than the Model-tuning, which shows that the Prefix-based training method can well alleviate the generated labels overfitting to the labels in the training set and is also in line with our expectations.
At the same time, by comparing our method with Prefix-tuning, we can further show that prompts and extended labels can help the model generate
| CLINC | BANKING | | | | | | | | |
|----------------------|---------------|----------------------|--------|--------|---------|--------|--------|-------|-------|
| Model | Methods | OOD Intent Detection | | | | | | | |
| F1-ALL | ACC-ALL | F1-OOD | F1-IND | F1-ALL | ACC-ALL | F1-OOD | F1-IND | | |
| Model-tuning | 86.83 | 81.39 | 66.49 | 87.01 | 81.75 | 75.73 | 57.23 | 82.17 | |
| GPT-2 | Prefix-tuning | 91.61 | 88.47 | 80.11 | 91.72 | 86.06 | 81.92 | 70.44 | 86.33 |
| Ours | 92.69 | 89.44 | 80.68 | 92.80 | 86.93 | 82.57 | 70.75 | 87.21 | |
| Model-tuning | 93.55 | 90.50 | 82.25 | 93.65 | 87.62 | 82.77 | 66.95 | 87.98 | |
| Prefix-tuning | 93.94 | 90.90 | 82.66 | 94.04 | 87.94 | 83.88 | 72.24 | 88.21 | |
| BART | Ours | 94.21 | 91.33 | 83.57 | 94.30 | 88.00 | 83.83 | 72.40 | 88.27 |
| Model-tuning | 93.04 | 90.13 | 82.18 | 93.13 | 86.71 | 82.16 | 69.81 | 87.00 | |
| T5 | Prefix-tuning | 93.05 | 90.33 | 83.02 | 93.14 | 87.11 | 82.90 | 71.43 | 87.38 |
| Ours | 94.52 | 91.74 | 84.36 | 94.61 | 87.85 | 83.63 | 72.13 | 88.11 | |
| New Intent Discovery | | | | | | | | | |
| Model | Methods | ACC | ARI | AMI | AVG. | ACC | ARI | AMI | AVG. |
| K-means | 28.49 | 6.22 | 12.79 | 15.83 | 21.58 | 6.75 | 16.46 | 14.93 | |
| Model-tuning | 25.13 | 8.40 | 26.95 | 20.16 | 26.21 | 11.15 | 32.28 | 23.21 | |
| GPT-2 | Prefix-tuning | 32.86 | 16.34 | 32.48 | 27.23 | 27.10 | 13.71 | 29.68 | 23.49 |
| Ours | 36.30 | 18.25 | 34.15 | 29.56 | 29.54 | 16.88 | 34.33 | 26.91 | |
| K-means | 30.81 | 14.32 | 29.86 | 25.00 | 31.61 | 19.16 | 41.76 | 30.84 | |
| Model-tuning | 28.52 | 14.10 | 41.30 | 27.97 | 35.53 | 21.18 | 42.08 | 32.92 | |
| BART | Prefix-tuning | 35.72 | 18.76 | 32.76 | 29.08 | 36.38 | 23.11 | 42.11 | 33.86 |
| Ours | 39.57 | 23.99 | 45.29 | 36.28 | 36.77 | 23.42 | 43.41 | 34.53 | |
| K-means | 33.98 | 19.36 | 36.33 | 29.88 | 33.61 | 26.37 | 50.76 | 36.91 | |
| Model-tuning | 42.17 | 25.51 | 51.13 | 39.61 | 32.27 | 21.22 | 45.01 | 32.83 | |
| T5 | Prefix-tuning | 47.96 | 33.22 | 50.42 | 43.87 | 37.82 | 24.56 | 45.01 | 35.80 |
| Ours | 48.78 | 35.61 | 53.06 | 45.82 | 41.51 | 29.43 | 50.77 | 40.57 | |
## 5 Analysis
![6_Image_0.Png](6_Image_0.Png) 5.1 Impact Of Prefix Length
In this section, we explore the specific impact of the length of the prefix. From Figure 5, we can observe that both tasks seem to be sensitive to the length of the prefix. A small prefix will not play its advantages. Further, the performance of *Detection* may decline with the increase of prefix length
(especially for GPT-2). For the *Discovery*, similar phenomena will be observed, but the downward trend will be postponed. The above phenomenon may be attributed to the fact that the increase of the prefix length brings more fine-tuned parameters, which results in the model shifting toward the limited IND data, which not only weakens the ability to generate labels but also affects the learning of discriminative representations. Under various prefix lengths (other parameters remain the same), Our method is better than Prefix-tuning.
## 5.2 Towards A Win-Win Training
We adopt hyper-parameter λ to balance the losses of two tasks in Eq.(6) during training. In this sec-
![7_image_0.png](7_image_0.png)
tion, we evaluate the benefits of λ in the training process. Specifically, we vary the value to obtain the changing trend of the performance of two tasks, and the results are shown in Figure 6. When the λ at around 0.5, the two tasks can achieve a win-win situation across the different models and different training methods, which demonstrates the rationality of extending *Classification* with *Discovery*.
In addition, by varying the value of the λ while keeping other parameters unchanged, our method is always better than the baseline method. See Appendix B for more related discussion.
![7_image_1.png](7_image_1.png)
## 5.3 Effect Of Extended Labels
In this section, we explore the effect of extended labels in the Open Environment Intent Prediction.
The extended labels affect the *Detection* in the form of Lex in Eq.(3). By varying α in Eq.(4), we can observe the effect of extended labels. The results are shown in Figure 7, which shows that with the increase of weight α, the model can learn better discriminative representations (F1-ALL rises), but if the α continues to increase, the recognition accuracy may be affected due to the influence of uncertainty between labels. More experiments in Appendix B can demonstrate its effect is general. For the *Discovery*, it has been proved that extended labels can alleviate the degradation of the generated vocabulary(Section 3) and help to discover new intents(Section 4.4). We also evaluate the quality of labels generated with the help of extended labels in Appendix A.
| GPT-2 | T5 | | | |
|--------------------------|-------|----------|-------|-------|
| Template | Dete. | Disc. | Dete. | Disc. |
| (F1-ALL) | (ACC) | (F1-ALL) | (ACC) | |
| <x>. (w/o template) | 86.29 | 25.91 | 87.65 | 35.05 |
| (∗) <x>.It was [Mask]. | 86.65 | 26.77 | 88.15 | 36.11 |
| (†) <x>.Refer to [Mask]. | 86.68 | 28.80 | 87.86 | 37.80 |
| (†) <x>.This is [Mask]. | 86.40 | 30.15 | 87.89 | 37.69 |
## 5.4 Necessity Of Prompts
To steer the model to generate high-quality labels, we task the model with natural language prompts in the input. In this section, we explore the specific effect of prompts. The experimental results in BANKING are listed in Table 2, where the input in the first row is without prompt and the inputs in the following three rows are with templates generated in different ways. From Table 2, it can be observed that the existence of prompts can not only help with detection (Dete.) but also has an obvious effect on new intent discovery (Disc.). At the same time, the help of prompts is general. In addition to manual design, we also try to automatically generate templates based on Gao et al. (2021a) (Appendix E).
Compared with only inputting utterances to the model, formulating input with these generated templates T (X) shows a certain degree of help.
## 6 Conclusion
In this paper, we strengthen a combined generative task paradigm to expand the two basic tasks of the Task-Oriented Dialogue system, which is more general and practical. Further, without relying on prior knowledge about OOD, we provide an effective and efficient implementation based on the generative model. At the same time, we introduce an effective method of intent expansion to alleviate Inherent Label Uncertainty and provide a method for constructing multi-label intent datasets to inspire further research. Extensive experiments across different models and different datasets have verified effectiveness and generality.
## Limitations
To better enlighten the follow-up research, we conclude the limitations of our method as follows:
1) Although the method we proposed can help improve the quality of generated labels, there is still room for further improvement; 2) Because our detection is not perfect, it will lead to inaccurate labels of some samples. We look forward to better methods to improve detection in the future; 3) This work has verified that the extended labels can effectively help the performance of models and proposed a method of label extension, but has not tried other extension methods or whether it is helpful to extend more labels. 4) This work focuses on solving Open Environment Intent Prediction with different generative models, without exploring other types of models.
## Acknowledgements
We thank Dr.Guo qipeng for his patient and valuable feedback in this work. This work was supported by the National Key Research and Development Program of China (No.2022CSJGG0801),
National Natural Science Foundation of China
(No.62022027) and CAAI-Huawei MindSpore Open Fund.
## References
Ahmed Abbas and Paul Swoboda. 2022. Rama: A
rapid multicut algorithm on gpu. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8193–8202.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. In *Advances in Neural Information Processing Systems*,
volume 33, pages 1877–1901. Curran Associates, Inc.
Inigo Casanueva, Tadas Temcinas, Daniela Gerz, ˇ
Matthew Henderson, and Ivan Vulic. 2020. Efficient ´
intent detection with dual sentence encoders. *arXiv* preprint arXiv:2003.04807.
Tianyu Gao, Adam Fisch, and Danqi Chen. 2021a.
Making pre-trained language models better few-shot learners. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 3816–3830, Online. Association for Computational Linguistics.
Xibin Gao, Radhika Arava, Qian Hu, Thahir Mohamed, Wei Xiao, Zheng Gao, and Mohamed AbdelHady.
2021b. Graphire: Novel intent discovery with pretraining on prior knowledge using contrastive learning. In *KDD 2021 Workshop on Pretraining: Algorithms, Architectures, and Applications*.
Junxian He, Chunting Zhou, Xuezhe Ma, Taylor BergKirkpatrick, and Graham Neubig. 2022. Towards a unified view of parameter-efficient transfer learning.
In *International Conference on Learning Representations*.
Hao Lang, Yinhe Zheng, Jian Sun, Fei Huang, Luo Si, and Yongbin Li. 2022. Estimating soft labels for outof-domain intent detection. *CoRR*, abs/2211.05561.
Stefan Larson, Anish Mahendran, Joseph J. Peper, Christopher Clarke, Andrew Lee, Parker Hill, Jonathan K. Kummerfeld, Kevin Leach, Michael A.
Laurenzano, Lingjia Tang, and Jason Mars. 2019. An evaluation dataset for intent classification and out-ofscope prediction. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference* on Natural Language Processing, EMNLP-IJCNLP
2019, Hong Kong, China, November 3-7, 2019, pages 1311–1316. Association for Computational Linguistics.
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021.
The power of scale for parameter-efficient prompt tuning. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 3045–3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning:
Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th
International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 4582–
4597. Association for Computational Linguistics.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain.
Association for Computational Linguistics.
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Tam, Zhengxiao Du, Zhilin Yang, and Jie Tang. 2022. P-tuning:
Prompt tuning can be comparable to fine-tuning across scales and tasks. In *Proceedings of the 60th* Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 61–68, Dublin, Ireland. Association for Computational Linguistics.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In *Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 1532–1543, Doha, Qatar.
Association for Computational Linguistics.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67.
Jianlin Su. 2020. Extend softmax and multi-label cross entropy to multi-label classification.
Sagar Vaze, Kai Han, Andrea Vedaldi, and Andrew Zisserman. 2022. Generalized category discovery. In IEEE Conference on Computer Vision and Pattern Recognition.
Nikhita Vedula, Rahul Gupta, Aman Alok, and Mukund Sridhar. 2020. Automatic discovery of novel intents & domains from text utterances. *CoRR*,
abs/2006.01208.
Haobo Wang, Ruixuan Xiao, Yixuan Li, Lei Feng, Gang Niu, Gang Chen, and Junbo Zhao. 2022. PiCO: Contrastive label disambiguation for partial label learning.
In *International Conference on Learning Representations*.
Zhiyuan Zeng, Keqing He, Yuanmeng Yan, Zijun Liu, Yanan Wu, Hong Xu, Huixing Jiang, and Weiran Xu.
2021. Modeling discriminative representations for out-of-domain detection with supervised contrastive learning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 2: Short Papers), Virtual Event, August 1-6, 2021, pages 870–878. Association for Computational Linguistics.
Li-Ming Zhan, Haowen Liang, Bo Liu, Lu Fan, XiaoMing Wu, and Albert Y. S. Lam. 2021. Out-of-scope intent detection with self-supervision and discriminative training. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021,
(Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 3521–3532. Association for Computational Linguistics.
Hanlei Zhang, Xiaoteng Li, Hua Xu, Panpan Zhang, Kang Zhao, and Kai Gao. 2021a. TEXTOIR: An integrated and visualized platform for text open intent recognition. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations, pages 167–174, Online. Association for Computational Linguistics.
Hanlei Zhang, Hua Xu, and Ting-En Lin. 2021b. Deep open intent classification with adaptive decision boundary. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 14374–
14382.
Hanlei Zhang, Hua Xu, Ting-En Lin, and Rui Lyu.
2021c. Discovering new intents with deep aligned clustering. In *Proceedings of the AAAI Conference* on Artificial Intelligence, volume 35, pages 14365–
14373.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q.
Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations.
Yuwei Zhang, Haode Zhang, Li-Ming Zhan, Xiao-Ming Wu, and Albert Lam. 2022. New intent discovery with pre-training and contrastive learning. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long* Papers), pages 256–269, Dublin, Ireland. Association for Computational Linguistics.
J. Zheng, W. Li, J. Hong, L. Petersson, and N. Barnes.
2022. Towards open-set object detection and discovery. In *2022 IEEE/CVF Conference on Computer* Vision and Pattern Recognition Workshops (CVPRW),
pages 3960–3969, Los Alamitos, CA, USA. IEEE
Computer Society.
Yinhe Zheng, Guanyi Chen, and Minlie Huang. 2020.
Out-of-domain detection for natural language understanding in dialog systems. *IEEE/ACM Transactions on Audio, Speech, and Language Processing*,
28:1198–1209.
Yunhua Zhou, Peiju Liu, and Xipeng Qiu. 2022a. KNNcontrastive learning for out-of-domain intent classification. In *Proceedings of the 60th Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 5129–5141, Dublin, Ireland. Association for Computational Linguistics.
| GPT-2 | T5 | | | | | |
|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|-----|-----|-----|-----|-----|
| Methods | AMI | ARI | ACC | AMI | ARI | ACC |
| GloVe (Pennington et al., 2014) | | | | | | |
| Prefix-tuning 31.77 13.96 28.65 46.77 24.21 36.87 Ours 33.94 14.14 28.30 49.00 23.71 38.06 BERTScore (Zhang et al., 2020) Prefix-tuning 27.20 9.57 24.58 47.47 25.81 39.50 Ours 33.48 11.76 27.10 50.62 28.23 40.12 ROUGE (Lin, 2004) Prefix-tuning 28.20 11.66 25.87 46.47 25.85 39.20 Ours 34.33 16.88 29.54 48.97 27.95 41.53 | | | | | | |
Yunhua Zhou, Peiju Liu, Yuxin Wang, and Xipeng QIu.
2022b. Discovering new intents using latent variables. *arXiv preprint arXiv:2210.11804*.
## A More Discussion On Generated Labels And New Intent Discovery
In this section, we evaluate the quality of generated labels. Because we discover general intents based on generated labels (Section 3), the better effect of intent discovery suggests the better quality of generated labels. In this paper, For efficiency and effect, we use **ROGUE** (Lin, 2004) to measure the similarity between two labels. Specifically, we calculate the average of the ROGUE-1, ROGUE2, and ROGUE-L3 F1-scores of two labels as the similarity score. In addition, for the sake of generality, we try two additional widely-used similarity measures: **GloVe** (Pennington et al., 2014) and BERTScore (Zhang et al., 2020). We use labels generated in different ways to discover intents and compare the effects in Table 3. Under different similarity measures, our methods have achieved better results, which shows that our methods can generate better labels.
At the same time, it should be emphasized that the scheme we proposed for new intent discovery based on generated labels in Section 3 is a general framework that can be flexibly implemented. In addition to the way to establish graphs described in Section 3, we can also build a weighted association graph with labels as nodes, whose edges 3https://pypi.org/project/rouge/
are the similarity between linked labels, then perform minimum cost multi-cut on this graph, where the segmentations (composed by similar labels) divided are also regarded as more general intents, and the whole process also does not depend on any prior or assumptions about OOD. We leave more and broader exploration for future research.
For the label for discovered general intents, you can either pick the label with the highest frequency in the corresponding segmentation as the label of the intent, or the labels with the top k highest frequency, which depends on the purpose of using the data.
| CLINC | | | | |
|------------------|--------|---------|--------|--------|
| Methods | F1-ALL | ACC-ALL | F1-OOD | F1-IND |
| Cluster-based | 76.22 | 72.96 | 64.66 | 76.33 |
| Detection-based | 93.79 | 90.96 | 83.31 | 93.88 |
| +Expended labels | 94.47 | 91.67 | 84.27 | 94.57 |
| Label-based | 71.25 | 67.76 | 62.40 | 71.33 |
| +Expended labels | 73.68 | 72.30 | 62.55 | 73.78 |
| Ours | 94.68 | 92.07 | 85.07 | 94.77 |
Table 4: Comparison results of different paradigms of detection. The results are obtained with T5 on CLINC
dataset.
## B More Comprehensive Comparison Of Detection
As mentioned in Section 3, considering the existence of Inherent Label Uncertainty and the waste of generating labels (or clusters) for a large number of IND samples, we conduct the OID task based on the learned representation. To learn the discriminative representations, we enrich the expression of intent by multi labels and train together with the loss of generation Eq.(6) during training (The effectiveness is proved in Section 5.2).
In this section, in order to further verify the effectiveness of our method, we make a comprehensive comparison with various paradigms. **Clusterbased** refers to the paradigm adopted by previous work in the NID (Zhang et al., 2021c, 2022),
all samples are directly clustered by **K-means**
for intent discovery after learning representation, Detection-based means that only the OID loss LOID (λ = 0.0 in Eq.(6)) is used for training to obtain representations of samples, which is a paradigm in the OID task (Zhang et al., 2021b),
and **Label-based** means that only the NID loss LNID (λ = 1.0 in Eq.(6)) is used for training then 2236 discovery intents based on labels (same as that we used in 3). The experimental parameters of all methods are consistent.
We show the comparison results of different methods in Table 4, which demonstrates our method is superior to other methods. In addition, many meaningful observations can be obtained from the table. The introduction of extended labels can improve the effect of detection under different paradigms, which reflects the generality. The effect of the Cluster-based method is significantly lower than that of representation-based detection, which also shows that the previous paradigms in the NID not only waste a lot of costs to cluster IND
samples but also may have very limited effect. The above comparison results can fully demonstrate the rationality and effectiveness of our method.
## C Statistics Of Datasets
The detailed statistics of the datasets described in the Section 4.1 are summarized in Table 5.
## D Details Of The Models And Hyper-Parameters
In this paper, experiments are conducted on models with different architectures, i.e., decoder-only
(GPT-2 (Radford et al., 2019)), and encoderdecoder architecture (BART (Lewis et al., 2020),
T5 (Raffel et al., 2020)), whose details are shown in 6. The implementations of GPT-24, BART5and T56are based on the Huggingface Transformer models. We tried learning rate in {1e-4, 2e-4, 3e-4, 4e-4}, training batch size in {64,128}, the length of tunable prefix in {64,128,256} and trained 100 epochs with an AdamW optimizer. We utilized four extended labels for each intent during the experiment (In fact, for certain intents, the number of labels was expanded to five.). In the **K-means** setting, we set k to three times the ground truth number of intent categories. Baselines and our method use the same experimental settings. Whether it is the main experiment or the analysis experiments, we use multiple different random seeds to conduct multiple rounds of experiments and report the average results. We list the standard deviation of the main experiment results (Table 1) in Table 7. Our experiments are conducted on a single NVIDIA
4https://huggingface.co/gpt2 5https://huggingface.co/facebook/bart-base 6https://huggingface.co/t5-base A100 Tensor Core GPU. We also have tried to conduct experiments on a single NVIDIA GTX 3090 with small batchsizes.
## E Automatic Generation Of Templates
To verify the generality of the benefits of the prompts, in addition to manually designing templates, we use T5 to automatically generate models based on Gao et al. (2021a). The difference is that to maintain the semantics of labels, we have not pruned the generated vocabulary set. To generate templates, we formalize the input (x, y) ∈ D*train* to T5 as x.<s1>y<s2> (The <s1> and <s2> are the mask tokens) and let T5 automatically fill in <s1>
and <s2> (i.e., templates) during decoding. We select the templates with higher beam search scores as the candidates then use Ddev to pick templates with better performance. See Gao et al. (2021a)
for more details.
| Dataset | Classes | |Training| | |Validation| | |Test| | Vocabulary | Length (Avg.) |
|----------------------------------|-----------|--------------|----------------|----------|--------------|-----------------|
| CLINC-FULL (Larson et al., 2019) | 150 | 18000 | 2250 | 2250 | 7283 | 8.32 |
| BANKING (Casanueva et al., 2020) | 77 | 9003 | 1000 | 3080 | 5028 | 11.91 |
Table 5: Statistics of CLINC-FULL, BANKING datasets. || denotes the total number of utterances. Length indicates the average length of each utterance in the dataset. The vocabulary is drawn from (Zhang et al., 2021c)
| Model | Magnitude | Encoder | Decoder | DIM.(hidden) | Parameters |
|------------------------------|-------------|-----------|-----------|----------------|--------------|
| GPT-2 (Radford et al., 2019) | Base | / | 12-layer | 768 | 117M |
| BART (Lewis et al., 2020) | Base | 6-layer | 6-layer | 768 | 139M |
| T5 (Raffel et al., 2020) | Base | 12-layer | 12-layer | 768 | 220M |
Table 6: Details of the model adopted in this paper. Dim.(hidden) refers to the dimension of the hidden vector.
| CLINC | BANKING | | | | | | | | |
|-------------------------------------------------------------------------------|---------------|----------------------|--------|--------|---------|--------|--------|------|------|
| Model | Methods | OOD Intent Detection | | | | | | | |
| F1-ALL | ACC-ALL | F1-OOD | F1-IND | F1-ALL | ACC-ALL | F1-OOD | F1-IND | | |
| Model-tuning | 0.99 | 1.11 | 1.47 | 0.99 | 1.48 | 1.60 | 2.97 | 1.47 | |
| GPT-2 | Prefix-tuning | 0.98 | 1.52 | 2.66 | 0.97 | 0.42 | 0.67 | 1.91 | 0.41 |
| Ours | 0.60 | 0.92 | 1.75 | 0.59 | 0.10 | 0.24 | 0.85 | 0.09 | |
| Model-tuning | 0.33 | 0.78 | 1.77 | 0.31 | 0.79 | 1.44 | 4.52 | 0.74 | |
| BART | Prefix-tuning | 0.10 | 0.33 | 0.85 | 0.10 | 0.96 | 1.73 | 3.92 | 0.91 |
| Ours | 0.59 | 1.08 | 2.27 | 0.58 | 0.55 | 0.79 | 1.65 | 0.53 | |
| Model-tuning | 0.28 | 0.69 | 1.65 | 0.27 | 0.77 | 0.90 | 2.04 | 0.75 | |
| T5 | Prefix-tuning | 0.36 | 0.18 | 0.58 | 0.36 | 0.23 | 0.64 | 2.53 | 0.23 |
| Ours | 0.51 | 0.91 | 1.91 | 0.50 | 0.61 | 1.14 | 2.82 | 0.57 | |
| New Intent Discovery | | | | | | | | | |
| Model | Methods | ACC | ARI | AMI | - | ACC | ARI | AMI | - |
| K-means | 0.34 | 0.79 | 1.46 | - | 0.69 | 0.90 | 1.46 | - | |
| Model-tuning | 0.54 | 1.19 | 0.80 | - | 2.28 | 2.91 | 1.54 | - | |
| GPT-2 | Prefix-tuning | 1.74 | 1.32 | 1.81 | - | 0.47 | 2.06 | 2.86 | - |
| Ours | 4.24 | 3.42 | 5.07 | - | 1.21 | 2.19 | 1.83 | - | |
| K-means | 1.87 | 2.65 | 3.05 | - | 1.87 | 3.54 | 4.78 | - | |
| Model-tuning | 1.61 | 2.49 | 1.89 | - | 3.34 | 2.76 | 2.10 | - | |
| BART | Prefix-tuning | 6.67 | 9.69 | 12.69 | - | 4.32 | 3.71 | 3.79 | - |
| Ours | 3.23 | 3.83 | 5.19 | - | 1.26 | 1.37 | 1.19 | - | |
| K-means | 1.97 | 2.99 | 3.15 | - | 2.20 | 2.29 | 2.49 | - | |
| Model-tuning | 3.02 | 4.09 | 2.49 | - | 1.32 | 1.02 | 0.66 | - | |
| T5 | Prefix-tuning | 0.69 | 2.40 | 1.14 | - | 2.25 | 2.74 | 1.54 | - |
| Ours | 4.38 | 4.75 | 3.59 | - | 3.07 | 2.39 | 1.71 | - | |
| Table 7: The standard deviation corresponding to each mean result in Table 1. | | | | | | | | | |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section "Limitations" (7th Section)
✓ A2. Did you discuss any potential risks of your work?
Section "Limitations" (7th Section)
✓ A3. Do the abstract and introduction summarize the paper's main claims?
"Abstract" and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4.1 And Appendix C
✓ B1. Did you cite the creators of artifacts you used?
Section 4.1
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
These datasets are available for all researchers in the NLP community.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
These datasets are only for scientific research and are available for all members of the NLP research community. We have adhered to the typical method of utilizing these resources.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
These datasets are only for scientific research and are available for all members of the NLP research community. We have adhered to the typical method of utilizing these resources.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 4.1
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix C
## C ✓ **Did You Run Computational Experiments?** Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix D
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4.3 and Appendix D
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4.3 and Appendix D
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix A and Appendix D
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
liu-huang-2023-teamwork | Teamwork Is Not Always Good: An Empirical Study of Classifier Drift in Class-incremental Information Extraction | https://aclanthology.org/2023.findings-acl.141 | Class-incremental learning (CIL) aims to develop a learning system that can continually learn new classes from a data stream without forgetting previously learned classes. When learning classes incrementally, the classifier must be constantly updated to incorporate new classes, and the drift in decision boundary may lead to severe forgetting. This fundamental challenge, however, has not yet been studied extensively, especially in the setting where no samples from old classes are stored for rehearsal. In this paper, we take a closer look at how the drift in the classifier leads to forgetting, and accordingly, design four simple yet (super-) effective solutions to alleviate the classifier drift: an Individual Classifiers with Frozen Feature Extractor (ICE) framework where we individually train a classifier for each learning session, and its three variants ICE-PL, ICE-O, and ICE-PL{\&}O which further take the logits of previously learned classes from old sessions or a constant logit of an Other class as constraint to the learning of new classifiers. Extensive experiments and analysis on 6 class-incremental information extraction tasks demonstrate that our solutions, especially ICE-O, consistently show significant improvement over the previous state-of-the-art approaches with up to 44.7{\%} absolute F-score gain, providing a strong baseline and insights for future research on class-incremental learning. | # Teamwork Is Not Always Good: An Empirical Study Of Classifier Drift In Class-Incremental Information Extraction
Minqian Liu, Lifu Huang Computer Science Department Virginia Tech
{minqianliu,lifuh}@vt.edu
## Abstract
Class-incremental learning (CIL) aims to develop a learning system that can continually learn new classes from a data stream without forgetting previously learned classes. When learning classes incrementally, the classifier must be constantly updated to incorporate new classes, and the drift in decision boundary may lead to severe forgetting. This fundamental challenge, however, has not yet been studied extensively, especially in the setting where no samples from old classes are stored for rehearsal. In this paper, we take a closer look at how the drift in the classifier leads to forgetting, and accordingly, design four simple yet (super-
) effective solutions to alleviate the classifier drift: an Individual Classifiers with Frozen Feature Extractor (ICE) framework where we individually train a classifier for each learning session, and its three variants ICE-PL, ICE-O
and ICE-PL&O which further take the logits of previously learned classes from old sessions or a constant logit of an *Other* class as constraint to the learning of new classifiers. Extensive experiments and analysis on 6 class-incremental information extraction tasks demonstrate that our solutions, especially ICE-O, consistently show significant improvement over the previous state-of-the-art approaches with up to 44.7% absolute F-score gain, providing a strong baseline and insights for future research on class-incremental learning.1
## 1 Introduction
Conventional supervised learning assumes the data are independent and identically distributed (i.i.d.)
and usually requires a pre-defined ontology, which may not be realistic in many applications in natural language processing (NLP). For instance, in event detection, the topics of interest may keep shifting over time (e.g., from attack to *pandemic*),
and new event types and annotations could emerge 1The source code, model checkpoints and data are publicly available at https://github.com/VT-NLP/ICE.
![0_image_0.png](0_image_0.png)
Figure 1: Illustration of class-incremental event detection where the model needs to classify each candidate mention into a label from all learned types or *Other*.
The figure shows two classifiers that are incrementally trained from Session 1 and Session 2 and are evaluated on the same sample. After training on session 2, the classifier mistakenly predicts *Other* for an *Arrest* mention due to the *classifier drift*. The model here uses pre-trained features and only the classifier is trained.
incessantly. Previous studies (Ring et al., 1994; Kirkpatrick et al., 2017; Lopez-Paz and Ranzato, 2017) therefore proposed continual learning (CL),
a.k.a., lifelong learning or incremental learning, a learning paradigm aiming to train a model from a stream of *learning sessions* that arrive sequentially.
In this work, we focus on the class-incremental learning (CIL) setting (Wang et al., 2019), where a new *session*2is composed of previously unseen classes and the goal is to learn a unified model that performs well in all seen classes.
When new learning sessions arrive sequentially, the classification layer must be constantly updated and/or expanded to accommodate new categories to the model. The change of the classifier between different sessions, i.e., *classifier drift*, can disturb or overwrite the classifier trained on previous classes, which consequently causes catastrophic forgetting (Biesialska et al., 2020). On the other hand, in many NLP tasks such as information extraction, the model also needs to classify nega2*Session* is defined as an incremental learning stage to learn new classes with a model trained on the previous sessions.
tive instances into the *Other* type (i.e., none-of-theabove). The *Other* type adds extra difficulty to classification, and even worse, the meaning of *Other* varies as the model learns new sessions (Zheng et al., 2022). The CIL problem thus becomes even more challenging when *Other* is involved. We illustrate the event detection task in CIL (Yu et al.,
2021) and the classifier drift problem in Figure 1.
Despite the progress achieved in CIL (Zhao et al.,
2022; Zheng et al., 2022), there are two critical limitations that are still remained: (1) Most previous CIL approaches heavily rely on the rehearsalbased strategy which stores samples from previously learned sessions and keeps re-training the model on these examples in subsequent sessions to mitigate catastrophic forgetting, which requires high computation and storage costs and raises concerns about privacy and data leakage (Shokri and Shmatikov, 2015); (2) Previous approaches have mainly focused on regularizing or expanding the overall model, especially feature extractor, to tackle the forgetting issue (Cao et al., 2020), but they rarely investigate whether the drift of the classifier also leads to forgetting, especially in classification tasks that involve the *Other* category. In this work, we aim to tackle these limitations by answering the following two research questions: RQ1: how does classifier drift lead to forgetting in the setting where no samples are stored from old sessions for rehearsal?, and RQ2: how to devise an effective strategy to alleviate classifier drift, especially when there is an Other category involved?
In this paper, we aim to answer the two research questions above. **First**, to study how classifier drift alone affects the model, we build a baseline where we use a pre-trained language model as a fixed feature extractor, such that only the parameters in the classification layer will be updated. **Second**, to alleviate classifier drift, we propose a simple framework named Individual Classifiers with Frozen Feature Extractor (ICE). Instead of collectively tuning the whole classification layer, we individually train a classifier for the classes in each new session without updating old classifiers and combine all learned classifiers to classify all seen classes during inference. As individually trained classifiers may lack the context of all learned sessions (Zhang et al., 2021), they may not be comparable to each other.
We further devise a variant ICE-PL which takes the logits of previous classifiers as constraints to encourage contrastivity among all the classes when learning a new classifier for a new session. **Third**,
both ICE and ICE-PL cannot be applied to detection tasks where an *Other* class is involved, thus we further design two variants of them: ICE-O and ICE-PL&O, which introduce a constant logit for the *Other* class and use it to enforce each individual classifier to be bounded by a constraint shared across different learning sessions during training.
We extensively investigate the classifier drift and evaluate our approach on 6 essential information extraction tasks across 4 widely used benchmark datasets under the CIL setting. Our major findings and contributions are: (1) By comparing the drifted baseline and our ICE, we find that the classifier drift alone can be a significant source of forgetting and our approaches effectively mitigate the drift and forgetting. Our results reveal that training the classifier individually can be a superior solution to training the classifier collectively in CIL. (2) We find that the *Other* type can effectively improve individually trained classifiers, and it is also helpful when we manually introduce negative instances during training on the tasks that do not have *Other*.
(3) Experimental results demonstrate that our proposed approaches, especially ICE-O, significantly and consistently mitigate the forgetting problem without rehearsal and outperform the previous stateof-the-art approaches by a large margin. (4) Our study builds a benchmark for 6 class-incremental information extraction tasks and provides a superstrong baseline and insights for the following studies on class-incremental information extraction.
## 2 Related Work
Existing approaches for CIL can be roughly categorized into three types (Chen et al., 2022).
Rehearsal-based approaches (a.k.a. experience replay) (Lopez-Paz and Ranzato, 2017; de Masson d'Autume et al., 2019; Guo et al., 2020; Madotto et al., 2021; Qin and Joty, 2021) select some previous examples (or generate pseudo examples) for rehearsal in subsequent tasks. While such approaches are effective in mitigating forgetting, they require high computation and storage costs and suffer from data leakage risk (Shokri and Shmatikov, 2015; Smith et al., 2021; Wang et al.,
2022). Regularization-based approaches (Chuang et al., 2020) aim to regularize the model's update by only updating a subset of parameters. Architecturebased approaches (Lee et al., 2020; Ke et al.,
2021a,b,c; Feng et al., 2022; Zhu et al., 2022) adaptively expand the model's capacity via parameterefficient techniques (e.g., adapter, prompt) to accommodate more data. While most existing approaches consider alleviating the forgetting of the whole model or transferring previous knowledge to new sessions, few of them thoroughly investigate how the classification layer of the model is affected as it expands to incorporate more classes into the model. Wu et al. (2019) find that the classification layer has a strong bias towards new classes, but they only study this issue in image recognition that doesn't contain the *Other* class. To fill the blank in current research, we aim to take a closer look at how the drift in the classifier alone affects the model under the CIL setting, especially when Other is involved.
For class-incremental information extraction, several studies tackle the CIL problem in relation learning (Wu et al., 2021), and many of them apply prototype-based approaches equipped with memory buffers to store previous samples (Han et al.,
2020; Cui et al., 2021; Zhao et al., 2022). Others investigate how to detect named entities (Monaikul et al., 2021; Xia et al., 2022) or event trigger (Cao et al., 2020; Yu et al., 2021; Liu et al., 2022) in the CIL setting. For instance, Zheng et al. (2022) propose to distillate causal effects from the *Other* type in continual named entity recognition. One critical disadvantage of existing approaches for continual IE is they heavily rely on storing previous examples to replay, whereas our method does not require any examplar rehearsal.
## 3 Problem Formulation
Class-incremental learning requires a learning system to learn from a sequence of learning sessions D = {D1*, ...,* DT } and each session Dk =
{(x k, yk)|y k ∈ Ck} where x kis an input instance for the session Dk and y k ∈ Ck denotes its label. The label set Ck for session Dk is not overlapping with that of other sessions, i.e., ∀*k, j* and k ̸= j, Ck TCj = ∅. Given a test input x and a model that has been trained on up to t sessions, the model needs to predict a label yˆ from a label space that contains all learned classes, i.e., C1 S... SCt and optionally the *Other* class. Generally, the training instances in old classes are not available in future learning sessions.
We consider a learning system consisting of a feature extractor and a classifier. Specifically, we use a linear layer G1:t ∈ R
c×has the classification layer, where c is the number of classes that the model has learned up to session t and h is the hidden dimension size of features. We denote the number of classes in a learning session k as nk, i.e., nk = |Ck|. The classification layer G1:t can be viewed as a concatenation of the classifiers in all learned sessions, i.e., G1:t = [W1; ...;Wt], where each of the classifier Wk ∈ R
nk×his in charge of the classes in Ck. The linear layer outputs the logits o1:t ∈ R
cfor learned classes, where ok refers to the logits for the classes in Ck. The term *logit* we use in this paper refers to the raw scores *before* applying the Softmax normalization.
In this work, we focus on studying the classincremental problem in information (entity, relation, and event) extraction tasks.We consider two settings for each task: the *detection* task that requires the model to identify and classify the candidate mentions or mention pairs into one of the target classes or *Other*, and the *classification* task that directly takes the identified mentions or mention pairs as input and classifies them into the target classes without considering *Other*.
## 4 Approach 4.1 Rq1: How Does Classifier Drift Lead To Forgetting?
We first design a DRIFTED-BERT baseline to investigate how classifier drift alone leads to forgetting, and then provide an insightful analysis of how classifier drift happens, especially in the setting of class-incremental continual learning.
DRIFTED-BERT **Baseline** In the current dominant continual learning frameworks, both the feature extractor and classifier are continually updated, which results in drift in both components towards the model's predictions on old classes. To measure how the classifier drift along leads to forgetting, we build a simple baseline that consists of a pretrained BERT (Devlin et al., 2019) as the feature extractor and a linear classification layer (shown in Figure 2 (a)). The model first encodes a given input text x into the contextual representation. For event trigger and entity recognition, the model feeds the representation of a candidate span h into the linear layer to predict the logits for learned classes, i.e.,
o1:t = G1:t(h). For relation learning, we instead use the concatenation of head and tail representations as the feature, i.e., h = [hhead; h*tail*]. For detection tasks, since each session contains an *Other*
![3_image_0.png](3_image_0.png)
class which has different meanings from other sessions, we follow (Yu et al., 2021) to set the logit for *Other* to a constant value δ, i.e., o0 = δ. We combine o0 and o1:t and pick the label with the maximum logit as the prediction. That is, we predict a sample as *Other* if and only if max(o1:t) < δ.
We freeze the parameters in the feature extractor so that the encoded features of a given sample remain unchanged in different learning sessions. In this way, the updates in the classification layer become the only source of forgetting. Note that we do not apply any continual learning techniques (e.g., experience replay) to DRIFTED-BERT. We denote p(x t) as the predicted probability to compute the loss in training, where p(x t) = Softmax(o0:t). At the learning session t, the model is trained on Dt with the Cross Entropy (CE) loss:
$${\mathcal{L}}_{C E}=-\sum_{(x^{t},y^{t})\in{\mathcal{D}}_{t}}\log p(x^{t}).\qquad\quad(1)$$
A Closer Look at Classifier Drift When the model has learned t sessions and needs to extend to the (t + 1)-th session, the classification layer G1:t needs to introduce new parameters to accommodate the new classes in Ct+1, i.e., G1:t+1 =
[W1; ...;Wt;Wt+1]. As we assume that all previous training instances in D1:t are not accessible anymore, solely training the model on Dt+1 would lead to an extreme class-imbalance problem (Cao et al., 2020), which consequently causes catastrophic forgetting. However, most existing works rarely discuss how the drift in the classifier alone leads to forgetting, especially when the *Other* class is involved.
We first define the *classifier drift* between two consecutive learning sessions Dt and Dt+1 as the change from G1:tto G1:t+1 *that makes the model* lose (part of) its acquired capability on the seen classes in C1:t. Intuitively, the CE loss aims to maximize the probability of the correct label while minimizing the probabilities of all other labels. Thus, there are two possible causes of classifier drift: (1)
new logit explosion: the new classifier Wt+1 tends to predict logits ot+1 that are higher than those of all previous classes o1:t so that the model can trivially discriminate new classes, which causes the old classes being overshadowed by new classes. (2)
diminishing old logit: as the old instances are not accessible in future learning sessions, the parameters in previous classifiers will be updated from the previous local optimum to a drifted sub-optimum, such that the classifier outputs low logits for old classes and cannot predict correctly. We empirically analyze the DRIFTED-BERT baseline to investigate the classifier drift in Section 5.2 and discuss the drifting patterns in different classification and detection tasks in Section 5.4.
## 4.2 Rq2: How To Alleviate Classifier Drift?
To alleviate the classifier drift, we introduce two solutions ICE and its variant ICE-PL for the classification tasks without *Other*, and further design two additional variants ICE-O and ICE-PL&O for detection tasks where *Other* is involved. We illustrate the training process in a new learning session for ICE and its variants in Figure 2. Note that we only focus on the setting of continual learning without experience replay, i.e., the model does not have access to the data of old sessions.
ICE**: Individual Classifiers with Frozen Feature**
Extractor We revisit the idea of classifier ensemble (Dietterich, 2000) and separated output layers in multi-task learning (Zhang and Yang, 2018)
where task-specific parameters for one task do not affect those for other tasks. Inspired by this, we propose to individually train a classifier for each session without updating or using previously learned classifiers G1:t (shown in Figure 2 (b)). In this way, previous classifiers can avoid being drifted to the sub-optimum, and the new classifier is less prone to output larger logits to overshadow old classes.
Specifically, for an incoming session t + 1, we initialize a set of new weights and train the new classifier Wt+1 on Dt+1. We only use the logits for the classes in the new session ot+1 to compute the Cross-Entropy loss in optimization, i.e.,
p(x t+1) = Softmax(ot+1). During inference, as we need to classify all seen classes without knowing the session identity of each instance, we combine the logits from all classifiers W1*, ...,*Wt+1 together to get the prediction for all learned classes, i.e., o1:t+1 = [o1; ...; ot+1], where each classifier yields the logits via ok = Wk ·h given the encoded feature h for each mention.
ICE**+Previous Logits (I**CE-PL) One limitation of ICE is the classifier individually trained in one session may not be comparable to others. To provide contrastivity to classifiers, we first explore a variant named ICE-PL where we preserve the previous classifiers and only freeze their parameters, such that the new classifier is aware of previous classes during training (shown in Figure 2 (c)).
That is, the model uses the logits from all classifiers o1:t+1 to compute the Cross-Entropy loss, i.e.,
p(x t+1) = Softmax(o1:t+1), while only the parameters in the new classifier are trainable. ICE-PL
uses the same inference process as ICE.
ICE+Other (ICE-O) Both ICE-O and ICE-PL
can only be applied to classification tasks and handling the *Other* category for detection tasks is challenging as each session Dt only contains the annotated mentions for the classes Ct, while the mentions from all the other classes such as C1:t−1 are labeled as *Other*, making the meaning of *Other* varies in different sessions. To tackle this problem, we purpose the ICE-O variant (shown in Figure 2
(d)) where we assign a constant value δ as the logit of the *Other* category. Specifically, for each prediction, we combine the logit of *Other* with the logits from the new session ot+1 to obtain the output probability, i.e., p(x t+1) = Softmax([δ; ot+1]),
and then compute the Cross-Entropy loss to train the classifier to make predictions for both positive classes and *Other*. During the inference, we combine the *Other*'s logit δ with the logits from all trained classifiers o1:t+1, i.e., o0:t+1 =
[δ; o1; ...; ot+1] to predict for all learned positive types and *Other*. We select the label with the highest logit among o0:t+1 as the prediction, and a candidate will be predicted as *Other* if and only if max(o1:t+1) < δ.
While the *Other* class introduces additional difficulties to CIL, we argue that it can also be a good remedy to classifier drift. In particular, in each learning session k, while the classifier Wk is independently trained on Dk, the output logits ok also need to satisfy the constraint that max(ok) < δ when the classifier is trained on negative instances.
Although the logits from any two distinct classifiers Wk and Wj (k ̸= j) do not have explicit contrastivity, both classifiers are trained under the constraint that max(ok) < δ and max(oj ) < δ, which provides a weak contrastivity between them.
ICE+Previous Logits and Other (ICE-PL&O)
To explore the effect of preserving the previous logits when *Other* is involved, we devise a ICE-PL&O
variant that uses both the *Other*'s logit δ and previous logits o1:t during training (shown in Figure 2
(e)). That is, ICE-PL&O uses the combined logits o0:t+1 = [δ; o1; ...; ot+1] to compute the loss, i.e.,
p(x t+1) = Softmax(o0:t+1). ICE-PL&O adopts the same inference process as ICE-O.
While ICE-O and ICE-PL&O are naturally applied to detection tasks, for classification tasks without the *Other* class, we can also manually create negative instances based on the tokens or entity pairs without positive labels. Section 5.1 provides more details regarding how to apply ICE-O and ICE-PL&O to classification tasks.
## 5 Experiments And Discussions 5.1 Datasets And Experiment Setup
We use **Few-NERD** (Ding et al., 2021) for classincremental named entity recognition and split all the 66 fine-grained types into 8 learning sessions by following Yu et al. (2021) which apply a greedy algorithm to split the types into sessions and ensure each session contains the roughly same number of training instances. We use two benchmark datasets MAVEN (Wang et al., 2020) and **ACE-05** (Doddington et al., 2004) for class-incremental event trigger extraction and following the same setting as (Yu et al., 2021) to split them into 5 learning sessions, respectively. For class-incremental relation extraction, we use **TACRED** (Zhang et al., 2017)
and follow the same setting as Zhao et al. (2022) to split the 42 relations into 10 learning sessions.
For each dataset, we construct two settings: (1)
detection where the model classifies each token (or a candidate entity pair in relation extraction task)
in a sentence into a particular class or *Other*; and
(2) *classification* where the model directly takes in a positive candidate (i.e., an entity, trigger, or a pair of entities) and classify it into one of the classes. For the *classification* setting, as there are no negative candidates that are labeled as *Other*, we automatically create negative candidates and introduce the *Other* category so that we can investigate the effect of *Other* using ICE-O and ICE-PL&O.
Specifically, we assign the *Other* label to the tokens if they are not labeled with any classes for entity and event trigger classification, and assign the *Other* label to the pairs of entity mentions if they are not labeled with any relations for relation classification. When we apply ICE-O and ICEPL&O to *classification* tasks, during inference, we do not consider the logit of the *Other* class.
Evaluation We use the same evaluation protocol as previous studies (Yu et al., 2021; Liu et al.,
2022). Every time the model finishes the training on Session t, we evaluate the model on all test samples from Session 1 to Session t for *classification* tasks. For *detection* tasks, we evaluate the model on the entire test set where we take the mentions or mention pairs of unlearned classes as *Other*. Following Yu et al. (2021), we randomly sample 5 permutations of the orders of learning sessions and report the average performance.
Baselines We compare our approaches with the DRIFTED-BERT baseline and several state-ofthe-art methods for class-incremental information extraction, including ER (Wang et al., 2019),
KCN (Cao et al., 2020), KT (Yu et al., 2021),
EMP (Liu et al., 2022), CRL (Zhao et al., 2022).
All these methods adopt experience replay to alleviate catastrophic forgetting. We also design two approaches to show their performance in the conventional supervised learning setting where the model is trained with the annotated data from all the sessions, as the approximate upperbound of the continual learning approaches: (i) **BERT-FFE** consists of a pre-trained BERT as the feature extractor and a classifier, where, during training, we fix the feature extraction and only tune the classifier; and (ii)
| MAVEN (Detection) | Type | S1 | S2 | S3 | S4 | S5 |
|------------------------|----------|------|------|------|------|------|
| New | 50.9 | 57.8 | 52.8 | 52.7 | 49.1 | |
| DRIFTED-BERT | Acc-Old | - | 0 | 0 | 0 | 0 |
| Prev-Old | - | 0 | 0 | 0 | 0 | |
| New | 50.9 | 56.0 | 53.2 | 49.9 | 49.3 | |
| ICE-O (Ours) | Acc-Old | - | 50.6 | 53.8 | 53.6 | 52.4 |
| Prev-Old | - | 51.4 | 56.2 | 53.1 | 50.0 | |
| New | 50.9 | 57.4 | 53.2 | 50.2 | 47.7 | |
| Acc-Old | - | 50.3 | 53.0 | 52.7 | 50.7 | |
| ICE-O&PL (Ours) | Prev-Old | - | 51.0 | 55.8 | 52.5 | 49.6 |
| MAVEN (Classification) | Type | S1 | S2 | S3 | S4 | S5 |
| New | 86.9 | 63.1 | 54.7 | 47.6 | 34.0 | |
| Acc-Old | - | 36.9 | 21.8 | 15.9 | 10.0 | |
| DRIFTED-BERT | Prev-Old | - | 36.4 | 33.4 | 29.4 | 29.1 |
| New | 86.9 | 79.8 | 72.8 | 68.0 | 59.2 | |
| Acc-Old | - | 77.2 | 72.0 | 66.3 | 62.5 | |
| ICE (Ours) | Prev-Old | - | 77.5 | 72.1 | 65.7 | 62.6 |
| New | 86.9 | 67.5 | 57.2 | 49.2 | 34.9 | |
| ICE-PL (Ours) | Acc-Old | - | 51.3 | 29.7 | 16.8 | 13.1 |
| Prev-Old | - | 51.1 | 49.5 | 37.2 | 38.5 | |
| New | 86.5 | 79.8 | 76.9 | 73.3 | 63.8 | |
| ICE-O (Ours) | Acc-Old | - | 80.6 | 76.5 | 71.2 | 68.3 |
| Prev-Old | - | 81.0 | 76.1 | 69.1 | 69.4 | |
| New | 86.5 | 80.3 | 76.9 | 71.3 | 62.0 | |
| ICE-PL&O (Ours) | Acc-Old | - | 80.7 | 76.3 | 70.2 | 64.9 |
| Prev-Old | - | 81.1 | 77.0 | 67.9 | 66.2 | |
BERT-FT which shares the same architecture as BERT-FFE but both the feature extractor and classifier are tuned during training. More details about the datasets, baselines, and model implementation can be found in Appendix A.
## 5.2 Rq1: How Does Classifier Drift Lead To Forgetting?
We conduct an empirical analysis on event detection and classification tasks on MAVEN to answer RQ1 and gain more insight into the classifier drift.
## Analysis Of Old And New Classes Performance
Our first goal is to analyze the classifier drift during the incremental learning process. In Table 1, we analyze how the performance of previously learned classes changes after the model has been trained on a new session for the DRIFTED-BERT baseline and the variants of ICE. After learning in each session k, we compute the (1) F-score on the new classes
(Ck) learned in the current session, (2) accumulated F-score on the old classes (C1:k−1) from all previous sessions, and (3) F-score on the old classes
(Ck−1) from the previous session, respectively. By 2246
![6_image_0.png](6_image_0.png)
![6_image_1.png](6_image_1.png)
![6_image_2.png](6_image_2.png)
ER† (Wang et al., 2019) 62.0 48.6 43.1 35.5 32.5 59.6 46.2 41.7 33.2 34.4 88.9 70.4 64.1 55.0 53.1 KCN† (Cao et al., 2020) 63.5 51.1 46.8 38.7 38.5 58.3 54.7 52.8 44.9 41.1 88.8 68.7 59.2 48.1 42.1 KT† (Yu et al., 2021) 63.5 52.3 47.2 39.5 39.3 58.3 55.4 53.9 45.0 42.6 88.8 69.0 58.7 47.6 42.0 EMP† (Liu et al., 2022) **67.8 60.2** 58.6 54.8 50.1 **59.6** 53.1 55.2 45.6 43.2 91.5 54.2 36.7 27.0 24.8 CRL† (Zhao et al., 2022) - - - - - - - - - - 89.2 73.2 70.0 63.7 62.9 DRIFTED-BERT 60.5 41.0 33.8 22.5 20.8 53.7 50.6 51.8 20.1 17.2 90.1 52.3 39.7 28.0 22.3 ICE (Ours) - - - - - - - - - - 89.4 79.0 75.8 71.4 68.5 ICE-PL (Ours) - - - - - - - - - - 89.4 59.4 44.4 32.8 26.8 ICE-O (Ours) 60.5 59.9 **61.3 60.8 61.4** 53.7 55.4 60.7 59.6 61.5 88.8 82.8 81.0 77.7 75.5 ICE-PL&O (Ours) 60.5 59.5 60.7 59.9 60.2 53.7 **55.8 61.4 60.5 62.4** 88.8 82.1 79.8 75.2 71.6 ICE-O+TFE&ER†(Ours) 61.5 40.7 41.3 44.5 49.7 54.3 39.0 43.2 44.1 41.7 **92.2 83.8 82.8 79.9 78.1**
Upperbound (BERT-FFE) - - - - 63.0 - - - - 64.0 - - - - 76.0 Upperbound (BERT-FT) - - - - 67.3 - - - - 66.6 - - - - 81.0 Table 2: Results (Micro-F1 score, %) on **event detection** and **classification** on 5 learning sessions. We highlight the best scores in **bold** and the second best with underline. † indicates approaches with experience replay.
Few-NERD (Detection) Few-NERD (Classification)
![6_image_4.png](6_image_4.png)
Session 1 2 3 4 5 6 7 8 1 2 3 4 5 6 7 8 ER† (Wang et al., 2019) 57.7 45.7 45.5 39.7 34.1 28.3 23.7 23.5 93.7 60.6 49.9 42.4 38.4 32.3 30.0 26.0
KCN† (Cao et al., 2020) 58.1 46.2 39.0 41.4 31.0 27.0 23.9 18.8 92.0 62.4 48.0 38.0 29.6 23.4 27.1 20.6 KT† (Yu et al., 2021) 57.7 46.8 44.0 42.5 35.9 26.0 24.9 23.3 93.3 58.7 51.2 41.1 34.3 25.6 20.9 20.2 EMP† (Liu et al., 2022) **58.9** 47.0 45.9 42.0 36.0 31.8 29.8 24.2 94.0 52.0 39.7 32.0 26.3 22.0 24.6 17.6
CRL† (Zhao et al., 2022) - - - - - - - - 93.4 80.2 77.0 72.3 68.1 62.4 59.7 58.4
DRIFTED-BERT 56.2 40.9 36.5 30.7 25.6 21.6 19.8 15.5 93.7 48.4 34.4 28.8 22.3 17.5 15.2 12.5 ICE (Ours) - - - - - - - - 93.7 82.5 77.0 72.0 69.5 67.3 65.2 61.7 ICE-PL (Ours) - - - - - - - - 93.7 51.6 37.6 31.0 25.0 21.4 19.1 17.6
ICE-O (Ours) 56.2 **57.8 61.7 64.2 65.6 67.3 68.9 68.9** 93.5 86.6 83.8 80.4 78.1 76.5 75.4 71.9
ICE-PL&O (Ours) 56.2 54.9 57.1 58.2 58.9 59.7 60.6 58.7 93.5 84.6 80.3 75.1 71.9 68.7 66.0 60.3
ICE-O+TFE&ER†(Ours) 50.7 42.2 45.0 45.4 46.2 48.7 47.5 47.1 **94.2 87.7 86.5 83.9 82.0 81.7 80.2 76.1**
Upperbound (BERT-FFE) - - - - - - - 72.3 - - - - - - - 73.5 Upperbound (BERT-FT) - - - - - - - 78.8 - - - - - - - 80.0
Table 3: Results (Micro-F1 score, %) on **named entity recognition** and **classification** on 8 learning sessions. We
highlight the best scores in **bold** and the second best with underline. † indicates approaches with experience replay.
comparing the performance change on the same
![6_image_5.png](6_image_5.png)
set of classes in two continuous sessions, e.g., the F-score on the new classes (Ck) learned in session k and the F-score on the classes (Ck) from the previous session after learning in session k+1, we can quantify how much the classifier is drifted. From Table 1, the performance of DRIFTED-BERT on old classes after learning on a new session is always decreased dramatically, verifying that classifier drift does occur in class-incremental learning and leads to severe forgetting. On the other side, our solutions, especially ICE-O, consistently retain similar performance on the old classes from the previous session after learning on a new session, demonstrating that it effectively alleviates the classifier drift and the forgetting issue. Besides, we find that the ICE-PL variant suffers from a considerable performance drop on both new and old classes, which indicates freezing previous classifiers' parameters while preserving the logits of previously learned classes cannot address the classifier drift and forgetting problems. Note that although we only showed the results on event classification and detection on MAVEN, the conclusions are very consistent for
![6_image_3.png](6_image_3.png)
![6_image_6.png](6_image_6.png)
![6_image_7.png](6_image_7.png)
## 5.3 Rq2: How To Alleviate Classifier Drift And Forgetting?
To answer RQ2, we evaluate the effectiveness of our proposed approaches to mitigating classifier drift and catastrophic forgetting.
Quantitative Comparison We conduct an extensive quantitative comparison of the baselines and our approaches on the 6 class-incremental IE tasks.
From Table 2, 3 and 4, we can see that: (1) our approaches, especially ICE-O, without adopting experience replay, significantly and consistently alleviate the forgetting issue and show a remarkable improvement (i.e., ranging from 4.6% - 44.7% absolute F-score gain) over the previous state-of-theart methods that are all based on experience replay.
Notably, ICE-O achieves performance that is even close to the supervised **BERT-FFE** upperbound on most of the classification and detection tasks. (2)
Among the four approaches, ICE-O consistently outperforms other variants on all the classification and detection tasks, demonstrating that introduc-
TACRED (Detection) TACRED (Classification)
![7_image_1.png](7_image_1.png)
![7_image_2.png](7_image_2.png)
![7_image_3.png](7_image_3.png)
Session 1 2 3 4 5 6 7 8 9 10 1 2 3 4 5 6 7 8 9 10
ER† (Wang et al., 2019) 29.8 **39.3** 34.1 31.8 32.9 29.1 31.7 28.1 25.3 26.0 87.8 78.2 74.4 66.2 62.8 56.3 59.7 55.7 50.6 49.1 KCN† (Cao et al., 2020) 29.8 38.0 29.9 27.7 23.7 20.4 22.3 16.4 14.1 15.7 87.8 78.2 72.5 61.9 59.7 51.5 54.8 46.1 39.0 36.0 KT† (Yu et al., 2021) 29.8 38.9 28.9 27.8 24.7 19.4 22.0 17.2 13.7 16.0 87.8 78.4 74.6 62.1 57.0 49.5 51.2 43.1 36.3 34.0 EMP† (Liu et al., 2022) 26.5 39.1 31.8 30.3 30.1 23.8 31.3 23.8 21.3 21.1 88.0 54.2 44.5 37.4 32.4 29.9 35.6 33.8 21.1 27.5
CRL† (Zhao et al., 2022) - - - - - - - - - - 88.7 82.2 79.8 74.7 73.3 71.5 69.0 66.2 64.0 62.8
DRIFTED-BERT 28.9 36.7 27.7 26.5 21.8 17.4 21.2 17.6 13.7 14.3 88.8 51.0 30.9 27.0 17.2 17.8 19.0 14.7 10.8 14.3
ICE (Ours) - - - - - - - - - - 88.8 77.8 73.4 67.5 60.7 55.6 56.8 52.6 51.1 49.2 ICE-PL (Ours) - - - - - - - - - - 88.8 52.8 36.9 32.2 27.2 24.4 28.6 26.0 22.9 22.7
ICE-O (Ours) 28.9 35.8 **35.4 37.5 37.2 38.2 40.6 40.2 39.8 40.1** 87.5 85.7 83.1 81.4 78.1 75.8 76.1 72.0 70.0 67.4
ICE-PL&O (Ours) 28.9 34.5 32.4 33.0 30.3 30.0 32.0 30.9 29.5 29.1 87.5 83.2 76.7 71.2 64.6 57.0 58.3 54.2 47.2 44.9
ICE-O+TFE&ER†(Ours) **33.4** 13.2 12.6 14.8 16.4 18.8 22.4 24.5 26.1 27.7 **95.2 92.1 91.2 90.8 88.6 86.1 86.3 83.6 82.7 81.4**
![7_image_6.png](7_image_6.png)
ing negative instances during training can constrain the updates in the classifier, and consequently mitigate classifier drift and forgetting. (3) Persevering the logits of previous classes without updating the previous classifiers hurts the performance on most tasks, by comparing ICE-PL with ICE and comparing ICE-PL&O with ICE-O. This observation is consistent with our findings in Section 5.2. (4)
Previous methods generally perform worse than our solutions even with experience replay. The possible reasons include overfitting to the stored examples in the small memory buffer or the regularization from replay may not be effective enough to mitigate the forgetting.
Comparison with CRL (Zhao et al., **2022)** Note that, among all the baselines, CRL consistently outperforms others on the classification tasks. CRL is based on a prototypical network where each class is represented with a prototype computed from an embedding space and performs the classification with the nearest class mean (NCM) classifier. Compared with other Softmax-based classification approaches, CRL can accommodate new classes more flexibly without any change to the architecture. However, it still suffers from the *semantic drift* (Yu et al.,
2020) problem as the embedding network must be continually updated to learn new classes, and it is non-trivial to adapt it to detection tasks where an Other class is involved under the class-incremental learning setting and the meanings of *Other* in different learning sessions are also different.
## Comparison With Trainable Feature Extractor
We also investigate if our proposed approaches can be further improved by tuning the BERT-based feature extractor. However, it naturally leads to forgetting as demonstrated by previous studies (Wang et al., 2019; Cao et al., 2020; Yu et al., 2021).
Thus, following these studies, we adopt experience replay and design a new variant named ICEO with Tunable Feature Extractor and Experience
![7_image_0.png](7_image_0.png)
![7_image_4.png](7_image_4.png)
![7_image_5.png](7_image_5.png)
Replay (abbreviated as ICE-O+TFE&ER), which tunes the BERT-based feature extractor and adopts the same replay strategy as ER that preserves 20 samples for each class. From Table 2, 3 and 4, ICEO+TFE&ER significantly improves over ICE-O
and achieves comparable performance to the supervised **BERT-FT** upperbound on all the classification tasks. However, ICE-O+TFE&ER performs much worse than ICE-O on all the detection tasks.
We hypothesize that this is due to the meaning shift of the *Other* class when incrementally training it on a sequence of learning sessions. Experience replay may not be enough to constrain the feature extractor to handle the *Other* class properly.
## 5.4 Analysis Of Drifting Patterns
To take a closer look into how the classifier drift leads to forgetting and verify the two hypothetical drifting patterns we discuss in Section 4.1, we analyze the output logits (i.e., the scores before Softmax) from the old and new classifiers for DRIFTEDBERT and our ICE, ICE-PL, and ICE-O. Specifically, we take the test samples whose ground truth labels are learned in Session 1 (denoted as X
1 test),
for analysis. Every time the classifier is trained on a new session, we evaluate the classifier on X
1 test, and then take (1) the logit of the gold class (**Gold**),
and (2) the maximum logit from the new classifier
(NCP), i.e., New Classifier's Prediction, for analysis. For each type of logit, we report the average of the logits on all the samples in X
1 test.
We have the following findings: (1) By examining the **Gold** logits and the logits from the new classifier (NCP) of DRIFTED-BERT, we observe that every time a new classifier is added and trained
![8_image_1.png](8_image_1.png)
on the new session, the new classifier incrementally outputs higher logits than those in the previous session on X
1 test (blue solid line), whereas the **Gold**
logits first decline a bit and stay at a certain level in the remaining sessions (blue dashed line). This observation confirms that two possible drifting patterns (i.e., *new logit explosion* and *diminishing old* logit) exist, and they can happen simultaneously and cause the new classifier overshadows the previously learned classifiers, which consequently leads to forgetting. (2) We find that while the old classifiers are not updated in ICE-PL, the *new logit explosion* issue gets even more severe (orange solid line),
which explains why ICE-PL performs worse than ICE and ICE-O. We hypothesize that the presence of previous logits may encourage the new classifier to predict larger logits. (3) When the classifier in each session is trained individually instead of collectively (i.e., in ICE and ICE-O), the **Gold** logits from the old classifiers stay at a constant level
(red dashed lines), whereas the logits from the new classifier are at a relatively lower level (green and red solid line). As such, the new classifier's logits do not have much impact on those of old classes, which mitigates the drift and forgetting.
## 5.5 The Effect Of The Logit For Other **Class**
Throughout all the experiments, we set the logit for Other class δ as 0 constantly. In this section, we further discuss the effect of the value of δ, and the effect of tuning the *Other* classifier. We show the results of event detection on MAVEN based on different fixed values or a tunable value of δ in Table 5.
We found that the value of *Other* class's logit does
![8_image_0.png](8_image_0.png)
not affect the model's performance much as long as it is fixed. However, we noticed a significant performance decrease if we continually tuned it with a classifier, demonstrating that it is necessary to fix the *Other* class's logit during the continual learning process in our approach.
## 5.6 Comparison With Recent Llms
More recently, very large language models (LLMs)
such as ChatGPT (OpenAI, 2022) demonstrate strong in-context learning ability without the need of gradient update. Thus, class-incremental learning may also be tackled as a sequence of in-context learning. However, several recent studies (Gao et al., 2023; Qin et al., 2023) have benchmarked several LLMs with in-context few-shot learning on various IE tasks and show worse performance than our approach. Our approach can efficiently achieve a good performance that is close to the supervised performance by only finetuning the last linear layer using a much smaller frozen BERT
backbone. More critically, the knowledge LLMs are often bounded by the training data, whereas the goal of our continual learning approach focuses on incorporating up-to-date information into models.
## 6 Conclusion
In this paper, we investigate the answers and the solutions to the research questions that how the classifier drift alone affects a model in the classincremental learning setting, and how to alleviate the drift without retraining the model on previous examples. We, therefore, propose to train a classifier individually for each task and combine them together during inference, such that we can maximally avoid the drift in the classifier. Extensive experiments show that our proposed approaches significantly outperform all the considered baselines on both class-incremental classification and detection benchmarks and provide super-strong baselines. We hope this work can shed light on future research on continual learning in broader research communities.
## Limitations
Our approaches mainly leverage a fixed feature extractor together with a set of individually trained classifiers to mitigate catastrophic forgetting whereas a tunable feature extractor may also be helpful and complement the individually trained classifiers, so a future direction is to design advanced strategies to efficiently tune the feature extractor in combination with our proposed ICE based classifiers. In addition, we mainly investigate the classifier drift and demonstrate the effectiveness of our solutions under the class-incremental continual learning setting. Another future direction is to explore similar ideas under other continual learning settings, e.g., task-incremental learning, online learning, or the setting where new sessions also contain annotations for old classes.
## Acknowledgments
This research is based upon work partially supported by the Amazon Research Award program and U.S. DARPA KMASS Program \#
HR001121S0034. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
## References
Magdalena Biesialska, Katarzyna Biesialska, and Marta R. Costa-jussà. 2020. Continual lifelong learning in natural language processing: A survey. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6523–6541, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Pengfei Cao, Yubo Chen, Jun Zhao, and Taifeng Wang.
2020. Incremental event detection via knowledge consolidation networks. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 707–717, Online.
Association for Computational Linguistics.
Muhao Chen, Lifu Huang, Manling Li, Ben Zhou, Heng Ji, and Dan Roth. 2022. New frontiers of information extraction. In *Proceedings of the 2022 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Tutorial Abstracts, pages 14–25.
Yung-Sung Chuang, Shang-Yu Su, and Yun-Nung Chen.
2020. Lifelong language knowledge distillation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 2914–2924, Online. Association for Computational Linguistics.
Li Cui, Deqing Yang, Jiaxin Yu, Chengwei Hu, Jiayang Cheng, Jingjie Yi, and Yanghua Xiao. 2021. Refining sample embeddings with relation prototypes to enhance continual relation extraction. In *Proceedings of the 59th Annual Meeting of the Association for* Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 232–243, Online.
Association for Computational Linguistics.
Cyprien de Masson d'Autume, Sebastian Ruder, Lingpeng Kong, and Dani Yogatama. 2019. Episodic memory in lifelong language learning. In *Advances* in Neural Information Processing Systems, pages 13122–13131.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Thomas G Dietterich. 2000. Ensemble methods in machine learning. In *International workshop on multiple classifier systems*, pages 1–15. Springer.
Ning Ding, Guangwei Xu, Yulin Chen, Xiaobin Wang, Xu Han, Pengjun Xie, Haitao Zheng, and Zhiyuan Liu. 2021. Few-NERD: A few-shot named entity recognition dataset. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1:
Long Papers), pages 3198–3213, Online. Association for Computational Linguistics.
George R Doddington, Alexis Mitchell, Mark A Przybocki, Lance A Ramshaw, Stephanie M Strassel, and Ralph M Weischedel. 2004. The automatic content extraction (ace) program-tasks, data, and evaluation.
In *Lrec*, volume 2, pages 837–840. Lisbon.
Shaoxiong Feng, Xuancheng Ren, Kan Li, and Xu Sun.
2022. Hierarchical inductive transfer for continual dialogue learning. *CoRR*, abs/2203.10484.
Jun Gao, Huan Zhao, Changlong Yu, and Ruifeng Xu.
2023. Exploring the feasibility of chatgpt for event extraction. *CoRR*, abs/2303.03836.
Yunhui Guo, Mingrui Liu, Tianbao Yang, and Tajana Rosing. 2020. Improved schemes for episodic memory-based lifelong learning. In *Advances in Neural Information Processing Systems*.
Xu Han, Yi Dai, Tianyu Gao, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, and Jie Zhou. 2020. Continual relation learning via episodic memory activation and reconsolidation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6429–6440, Online. Association for Computational Linguistics.
Zixuan Ke, Bing Liu, Nianzu Ma, Hu Xu, and Lei Shu.
2021a. Achieving forgetting prevention and knowledge transfer in continual learning. In *Advances in* Neural Information Processing Systems, volume 34, pages 22443–22456. Curran Associates, Inc.
Zixuan Ke, Bing Liu, Hu Xu, and Lei Shu. 2021b.
CLASSIC: Continual and contrastive learning of aspect sentiment classification tasks. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6871–6883, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Zixuan Ke, Hu Xu, and Bing Liu. 2021c. Adapting BERT for continual learning of a sequence of aspect sentiment classification tasks. In *Proceedings* of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 4746–4755, Online. Association for Computational Linguistics.
James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. 2017. Overcoming catastrophic forgetting in neural networks.
Proceedings of the national academy of sciences, 114(13):3521–3526.
Soochan Lee, Junsoo Ha, Dongsu Zhang, and Gunhee Kim. 2020. A neural dirichlet process mixture model for task-free continual learning. In *International* Conference on Learning Representations.
Minqian Liu, Shiyu Chang, and Lifu Huang. 2022. Incremental prompting: Episodic memory prompt for lifelong event detection. In *Proceedings of the 29th* International Conference on Computational Linguistics, pages 2157–2165, Gyeongju, Republic of Korea.
International Committee on Computational Linguistics.
David Lopez-Paz and Marc'Aurelio Ranzato. 2017.
Gradient episodic memory for continual learning. In Advances in Neural Information Processing Systems, pages 6467–6476.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *7th International* Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
Andrea Madotto, Zhaojiang Lin, Zhenpeng Zhou, Seungwhan Moon, Paul Crook, Bing Liu, Zhou Yu, Eunjoon Cho, Pascale Fung, and Zhiguang Wang. 2021.
Continual learning in task-oriented dialogue systems.
In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 7452–7467, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Natawut Monaikul, Giuseppe Castellucci, Simone Filice, and Oleg Rokhlenko. 2021. Continual learning for named entity recognition. In Thirty-Fifth AAAI
Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 13570–13577. AAAI Press.
OpenAI. 2022. ChatGPT: Optimizing language models for dialogue. https://openai.com/blog/
chatgpt/.
Chengwei Qin and Shafiq Joty. 2021. LFPT5: A
unified framework for lifelong few-shot language learning based on prompt tuning of T5. *CoRR*,
abs/2110.07298.
Chengwei Qin, Aston Zhang, Zhuosheng Zhang, Jiaao Chen, Michihiro Yasunaga, and Diyi Yang. 2023. Is chatgpt a general-purpose natural language processing task solver? *CoRR*, abs/2302.06476.
Mark Bishop Ring et al. 1994. *Continual learning in* reinforcement environments. Ph.D. thesis, University of Texas at Austin Austin, Texas 78712.
Reza Shokri and Vitaly Shmatikov. 2015. Privacypreserving deep learning. CCS '15, page 1310–1321, New York, NY, USA. Association for Computing Machinery.
James Smith, Jonathan Balloch, Yen-Chang Hsu, and Zsolt Kira. 2021. Memory-efficient semi-supervised continual learning: The world is its own replay buffer.
In International Joint Conference on Neural Networks, IJCNN 2021, Shenzhen, China, July 18-22, 2021, pages 1–8. IEEE.
Hong Wang, Wenhan Xiong, Mo Yu, Xiaoxiao Guo, Shiyu Chang, and William Yang Wang. 2019. Sentence embedding alignment for lifelong relation extraction. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 796–806, Minneapolis, Minnesota. Association for Computational Linguistics.
Xiaozhi Wang, Ziqi Wang, Xu Han, Wangyi Jiang, Rong Han, Zhiyuan Liu, Juanzi Li, Peng Li, Yankai Lin, and Jie Zhou. 2020. MAVEN: A Massive General Domain Event Detection Dataset. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1652–
1671, Online. Association for Computational Linguistics.
Zifeng Wang, Zizhao Zhang, Sayna Ebrahimi, Ruoxi Sun, Han Zhang, Chen-Yu Lee, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer G. Dy, and Tomas Pfister.
2022. Dualprompt: Complementary prompting for rehearsal-free continual learning. In *Computer Vision*
- ECCV 2022 - 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXVI,
volume 13686 of *Lecture Notes in Computer Science*,
pages 631–648. Springer.
Max Welling. 2009. Herding dynamical weights to learn. In *Proceedings of the 26th Annual International Conference on Machine Learning*, ICML '09, page 1121–1128, New York, NY, USA. Association for Computing Machinery.
Tongtong Wu, Xuekai Li, Yuan-Fang Li, Gholamreza Haffari, Guilin Qi, Yujin Zhu, and Guoqiang Xu.
2021. Curriculum-meta learning for order-robust continual relation extraction. In Thirty-Fifth AAAI
Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 10363–10369. AAAI Press.
Yue Wu, Yinpeng Chen, Lijuan Wang, Yuancheng Ye, Zicheng Liu, Yandong Guo, and Yun Fu. 2019. Large scale incremental learning. In *IEEE Conference on* Computer Vision and Pattern Recognition, CVPR
2019, Long Beach, CA, USA, June 16-20, 2019, pages 374–382. Computer Vision Foundation / IEEE.
Yu Xia, Quan Wang, Yajuan Lyu, Yong Zhu, Wenhao Wu, Sujian Li, and Dai Dai. 2022. Learn and review:
Enhancing continual named entity recognition via reviewing synthetic samples. In *Findings of the Association for Computational Linguistics: ACL 2022*,
pages 2291–2300, Dublin, Ireland. Association for Computational Linguistics.
Lu Yu, Bartlomiej Twardowski, Xialei Liu, Luis Herranz, Kai Wang, Yongmei Cheng, Shangling Jui, and Joost van de Weijer. 2020. Semantic drift compensation for class-incremental learning. In *2020* IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA,
June 13-19, 2020, pages 6980–6989. Computer Vision Foundation / IEEE.
Pengfei Yu, Heng Ji, and Prem Natarajan. 2021. Lifelong event detection with knowledge transfer. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 5278–
5290, Online and Punta Cana, Dominican Republic.
Association for Computational Linguistics.
Chi Zhang, Nan Song, Guosheng Lin, Yun Zheng, Pan Pan, and Yinghui Xu. 2021. Few-shot incremental learning with continually evolved classifiers. In IEEE
Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, pages 12455–12464. Computer Vision Foundation / IEEE.
Yu Zhang and Qiang Yang. 2018. An overview of multitask learning. *National Science Review*, 5(1):30–43.
Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D. Manning. 2017. Position-aware attention and supervised data improve slot filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP
2017), pages 35–45.
Kang Zhao, Hua Xu, Jiangong Yang, and Kai Gao. 2022.
Consistent representation learning for continual relation extraction. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3402–
3411, Dublin, Ireland. Association for Computational Linguistics.
Junhao Zheng, Zhanxian Liang, Haibin Chen, and Qianli Ma. 2022. Distilling causal effect from miscellaneous other-class for continual named entity recognition. *CoRR*, abs/2210.03980.
Qi Zhu, Bing Li, Fei Mi, Xiaoyan Zhu, and Minlie Huang. 2022. Continual prompt tuning for dialog state tracking. *CoRR*, abs/2203.06654.
## A More Details On Experiment Setup A.1 Details Of The Datasets
Named Entity We use **Few-NERD** (Ding et al.,
2021), a large-scale named entity recognition
(NER) dataset to evaluate class-incremental named entity recognition and classification. Compared with the datasets used in previous continual NER
works (Zheng et al., 2022), Few-NERD has a more diverse range of entity types and finer granularity, containing 8 coarse-grained and 66 fine-grained entity types. Thus, it is a better benchmark to study continual NER. We construct two settings for the NER task: (1) a detection task where the model is required to examine every token in the text and classify each of them into a learned positive entity type or *Other*, and; (2) a classification task where the positive candidate entity mentions have been provided and the model only needs to assign a learned entity type to the given candidate. Following Yu et al. (2021), we split the dataset into 8 learning sessions with the greedy algorithm such that each session contains the roughly same number of training instances.
Relation We use **TACRED** (Zhang et al., 2017)
to evaluate relation detection and classification tasks. TACRED is a large-scale relation extraction dataset that contains 42 relations. In the previous continual relation classification setting (Cui et al.,
2021), they ignore the long-tail distribution and assume each relation contains the same number of instances. We instead use the original train/dev/test split in TACRED where relations are imbalanced.
We build two settings for the relation task: (1) a detection task where the model needs to assign an ordered entity mentions with a seen positive relation type or *Other*, and; (2) a classification task that assumes the given entity pair must belong to one of learned relation, and the model is only required to predict a label it has learned. We follow the previous setting (Zhao et al., 2022) to split the dataset into 10 learning sessions, where we drop the relation with the fewest instances such that each session contains 4 positive relation types.
Event Trigger We adopt the following two event detection datasets for evaluation: (1)
MAVEN (Wang et al., 2020): MAVEN is a largescale event detection dataset with 169 event types
(including *Other*) in the general domain, and; (2)
ACE-05 (Doddington et al., 2004): ACE 2005 English dataset contains 34 event types (including *Other*). For both datasets, we follow Yu et al.
(2021) to use the same train/dev/test split and use the same ontology partition to create 5 incremental learning sessions for each dataset, where each session contains approximately the same number of training instances. We create two settings for event trigger: (1) two event detection tasks, where the model is required to evaluate each token in the sentence and assign it with a learned event type or *Other*, and; (2) a classification task where the model only needs to classify a positive trigger mention into a learned event type without considering Other. We did not construct the classification task for the ACE dataset as the majority of instances only contain the *Other* type and removing such instances will result in a very small dataset.
## A.2 Baselines
We use the following baselines for our experiments:
(1) DRIFTED-BERT: we build a baseline with a fixed pre-trained BERT as the feature extractor and only train its classification layer. We do not apply any other continual learning techniques to it. We primarily use this baseline to study the classifier drift discussed in this work. (2) ER (Wang et al.,
2019): experience replay is introduced to continual IE by (Wang et al., 2019). In this work, we use the same strategy as in (Liu et al., 2022) to select examples to store in the memory and replay them in subsequent sessions. (3) KCN (Cao et al.,
2020): the original work proposes a prototypebased method to sample examples to store for replay as well as a hierarchical knowledge distillation
(KD) to constrain the model's update. We adapt their hierarchical distillation along with ER as the KCN baseline. (4) KT (Yu et al., 2021): a framework that transfers knowledge between new and old event types. (5) EMP (Liu et al., 2022): propose a prompt-based technique to dynamically expand the model architecture to incorporate more classes.
(6) CRL (Zhao et al., 2022) proposes consistent representation learning to keep the embeddings of historical relations consistent. Since CRL is designed for the classification tasks without *Other*,
we only evaluate this baseline on the classification tasks we build. (7) **Upperbound**: we train a model jointly on all classes in the dataset as an upperbound in the conventional supervised learning setting. We devise two different upperbounds: (i)
BERT-FFE is the upperbound of our ICE-O model, where we only train the classifier and the feature extractor is fixed. The negative instances are used in the classification tasks without *Other*; and (ii)
BERT-FT is the upperbound that trains both the whole BERT and the classifier.
## A.3 Implementation Details
We use the pre-trained BERT-large-cased (Devlin et al., 2019) as the fixed feature extractor. We use AdamW (Loshchilov and Hutter, 2019) as the optimizer with the weight decay set to 1e − 2 and a learning rate of 1e − 4 for detection tasks and 5e − 4 for classification tasks. We apply gradient accumulation and set the step to 8. In each learning session Dk, we establish a limit of 15 maximum training epochs. We also adopt the early stopping strategy with a patience of 3, where training will be halted if there is no improvement in performance on the development set for 3 epochs. We set the constant value for the *Other* class δ to 0. We apply the experience replay strategy with the same setting as in (Liu et al., 2022) to ER, KCN, KT,
and EMP as an assistant technique to mitigate forgetting. We store 20 examples for each class using the herding algorithm (Welling, 2009) and replay one stored instance in each batch during training to limit the computational cost brought by rehearsal.
For CRL, we use the same sample selection and replay strategy as in the original work. For baselines, we adopt a frozen pre-trained BERT-large and a trainable Multi-Layer Perceptron (MLP) as the feature extractor.
## B More Discussions B.1 More Analysis On Old And New Type Performance
Table 6 and 7 show the performance of old and new classes for each learning session of the classincremental named entity detection and classification and class-incremental relation detection and classification tasks.
| Few-NERD (Detection) | Few-NERD (Classification) | | | | | | | | | | | | | | | | | |
|---------------------------------------|-----------------------------|------|------|------|------|------|------|-------|------|---------|-------|------|------|------|------|------|------|------|
| Session | Type | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | Type | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| ER† (Wang et al., 2019) | New | 56.9 | 65.3 | 75.9 | 55.8 | 61.9 | 56.5 | 59.35 | 64.0 | New | 88.39 | 74.2 | 53.8 | 42.1 | 33.3 | 25.0 | 34.6 | 26.0 |
| KCN† (Cao et al., 2020) | New | 64.3 | 58.6 | 57.9 | 61.3 | 56.3 | 76.0 | 69.8 | 56.0 | New | 88.3 | 75.1 | 63.1 | 46.5 | 33.4 | 24.2 | 31.1 | 25.9 |
| KT† (Yu et al., 2021) | New | 64.3 | 60.4 | 57.4 | 62.3 | 56.5 | 75.7 | 69.2 | 58.2 | New | 88.3 | 73.1 | 60.6 | 45.4 | 34.7 | 24.1 | 34.0 | 24.8 |
| EMP† (Liu et al., 2022) | New | 61.9 | 56.6 | 53.1 | 58.4 | 55.1 | 74.1 | 64.4 | 53.8 | New | 88.1 | 75.4 | 65.7 | 49.1 | 37.2 | 31.7 | 40.4 | 23.1 |
| DRIFTED-BERT | Acc-Old | - | 5.5 | 3.0 | 1.2 | 1.8 | 2.0 | 1.4 | 2.4 | Acc-Old | - | 9.0 | 4.4 | 1.8 | 10.7 | 3.6 | 6.4 | 6.3 |
| ICE (Ours) ICE-PL (Ours) ICE-O (Ours) | New | 55.6 | 69.0 | 76.3 | 61.0 | 64.2 | 62.7 | 64.0 | 68.8 | New | 87.8 | 89.1 | 89.3 | 71.3 | 74.9 | 74.5 | 72.2 | 70.7 |
| ICE-PL&O (Ours) | New | 55.6 | 65.0 | 73.3 | 50.1 | 57.1 | 49.0 | 51.4 | 56.7 | New | 87.8 | 87.9 | 89.1 | 66.5 | 68.9 | 61.3 | 63.2 | 58.9 |
ICE-PL&O (Ours)New 55.6 65.0 73.3 50.1 57.1 49.0 51.4 56.7 New 87.8 87.9 89.1 66.5 68.9 61.3 63.2 58.9
Acc-Old - 58.2 60.2 60.6 57.4 58.2 56.4 51.5 Acc-Old - 80.4 80.1 74.0 67.4 63.8 57.2 53.2
Prev-Old - 57.4 65.4 74.2 51.0 59.5 56.3 50.3 Prev-Old - 79.7 84.9 85.0 65.1 68.1 59.2 60.2
Table 6: Analysis of the performance (Macro-F1 %) on new and old classes on the class-incremental **named entity**
detection and classification tasks on Few-NERD.
TACRED (Detection) TACRED (Classification)
Session Type 1 2 3 4 5 6 7 8 9 10 Type 1 2 3 4 5 6 7 8 9 10
ER† (Wang et al., 2019)New 29.2 19.8 23.2 28.2 27.1 10.2 47.5 10.6 36.6 24.1 New 93.9 53.9 43.6 50.9 33.9 22.2 48.9 21.6 43.0 27.3
Acc-Old - 32.9 26.1 16.6 18.7 19.7 15.9 17.3 17.0 17.5 Acc-Old - 69.5 61.4 49.3 36.4 41.9 39.1 34.9 41.1 40.1
Prev-Old - 9.3 21.5 2.2 27.5 32.5 21.4 43.48 13.5 42.5 Prev-Old - 63.8 68.9 54.6 56.3 49.5 43.2 61.3 31.3 66.4
KCN† (Cao et al., 2020)New 29.2 19.8 22.1 28.3 27.6 8.2 38.8 11.3 34.9 21.7 New 95.3 56.4 40.1 49.0 33.9 20.8 45.0 23.5 49.0 21.7
Acc-Old - 30.1 15.9 6.9 10.8 10.2 13.2 8.1 7.6 6.9 Acc-Old - 65.1 49.4 37.0 35.7 28.0 30.0 30.2 32.6 28.8
Prev-Old - 24.9 10.2 0.6 23.1 34.8 27.4 27.5 12.9 40.5 Prev-Old - 57.6 43.7 46.9 58.6 43.4 32.0 59.4 32.9 68.3
KT† (Yu et al., 2021)New 29.2 19.6 20.6 26.0 29.5 10.1 41.2 11.1 32.5 24.5 New 95.3 60.8 48.7 50.6 33.2 16.2 42.7 18.3 33.9 22.0
Acc-Old - 30.2 11.7 10.0 10.2 9.6 9.6 6.8 7.6 7.9 Acc-Old - 58.4 54.9 47.4 31.0 24.7 26.7 27.2 28.6 21.6
Prev-Old - 25.3 9.6 12.1 23.0 34.5 26.1 24.9 14.9 41.6 Prev-Old - 58.4 54.9 47.4 31.0 24.7 26.7 27.2 28.6 21.6
EMP† (Liu et al., 2022)New 25.5 19.3 17.4 32.3 17.5 9.5 37.4 8.4 34.6 19.3 New 90.9 45.6 35.6 38.3 29.3 8.4 40.8 9.7 17.7 26.7
Acc-Old - 27.5 19.9 16.4 11.7 17.0 17.0 15.5 12.8 16.6 Acc-Old - 44.5 22.0 19.4 11.2 15.4 18.8 8.2 10.8 18.2
Prev-Old - 22.9 21.9 5.9 22.8 31.9 21.2 34.1 7.1 45.8 Prev-Old - 34.2 25.8 20.5 11.7 32.9 3.7 16.4 5.5 31.9
DRIFTED-BERTNew 28.7 16.0 19.3 20.2 21.8 11.3 43.2 8.6 40.7 20.0 New 93.6 57.4 35.6 32.4 29.9 13.0 36.3 7.3 18.3 12.3
Acc-Old - 8.0 3.9 7.0 4.6 4.4 1.8 5.0 4.7 5.8 Acc-Old - 18.6 5.7 4.6 2.1 4.6 6.4 3.1 5.5 6.9
Prev-Old - 1.3 1.8 15.6 12.4 16.7 0.0 36.3 9.0 36.0 Prev-Old - 0.0 1.9 6.3 3.5 22.5 17.4 13.8 10.8 47.1
ICE (Ours)New - - - - - - - - - - New 93.6 75.3 33.6 43.8 47.2 24.3 49.8 26.8 58.1 30.3
Acc-Old - - - - - - - - - - Acc-Old - 73.7 55.6 43.9 36.1 34.0 31.2 31.7 30.2 31.2
Prev-Old - - - - - - - - - - Prev-Old - 68.4 56.3 27.3 45.1 46.5 20.3 50.1 26.8 57.4
ICE-PL (Ours)New - - - - - - - - - - New 93.6 57.4 36.7 33.3 27.7 16.2 37.0 15.9 61.8 23.8
Acc-Old - - - - - - - - - - Acc-Old - 18.6 7.2 9.3 9.1 9.5 8.2 10.7 13.2 15.9
Prev-Old - - - - - - - - - - Prev-Old - 0.0 4.9 21.1 33.3 46.9 28.9 39.0 16.8 64.9
ICE-O (Ours)New 28.7 16.4 19.3 28.5 29.4 20.2 42.2 7.3 38.2 26.1 New 92.7 78.0 45.6 61.3 61.0 34.5 64.2 33.5 70.0 33.8
Acc-Old - 31.0 23.0 23.1 24.5 26.5 26.3 26.5 26.2 26.4 Acc-Old - 86.4 71.3 60.4 58.0 55.0 50.8 48.0 46.8 45.0
Prev-Old - 29.9 16.2 19.8 29.3 30.6 22.3 42.2 7.4 38.1 Prev-Old - 84.2 73.6 44.7 61.1 54.5 30.0 63.9 33.2 69.9
ICE-PL&O (Ours)New 28.7 17.2 18.1 20.3 16.5 7.7 35.1 6.4 34.8 15.2 New 92.7 67.9 50.0 55.8 43.5 19.8 53.8 24.8 57.0 21.4
Acc-Old - 33.1 23.4 22.3 18.7 19.4 18.9 19.7 20.4 21.6 Acc-Old - 81.5 62.0 50.4 38.9 31.1 28.1 29.2 27.2 26.5
Prev-Old - 30.5 17.5 19.8 22.2 18.1 8.2 33.9 7.2 36.4 Prev-Old - 78.6 67.2 49.1 46.3 50.9 21.5 53.2 24.2 61.0
Table 7: Analysis of the performance (Macro-F1 %) on new and old classes on the class-incremental **relation**
detection and classification tasks on TACRED.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 7
✗ A2. Did you discuss any potential risks of your work?
We don't see any potential risks of this work
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Section 5
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
The number of tunable parameters in our approach is very small The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 5, Appendix A
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 5
✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
We didn't use any existing packages D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
obadic-etal-2023-c | {C}-{XNLI}: {C}roatian Extension of {XNLI} Dataset | https://aclanthology.org/2023.findings-acl.142 | Comprehensive multilingual evaluations have been encouraged by emerging cross-lingual benchmarks and constrained by existing parallel datasets. To partially mitigate this limitation, we extended the Cross-lingual Natural Language Inference (XNLI) corpus with Croatian. The development and test sets were translated by a professional translator, and we show that Croatian is consistent with other XNLI dubs. The train set is translated using Facebook{'}s 1.2B parameter m2m{\_}100 model. We thoroughly analyze the Croatian train set and compare its quality with the existing machine-translated German set. The comparison is based on 2000 manually scored sentences per language using a variant of the Direct Assessment (DA) score commonly used at the Conference on Machine Translation (WMT). Our findings reveal that a less-resourced language like Croatian is still lacking in translation quality of longer sentences compared to German. However, both sets have a substantial amount of poor quality translations, which should be considered in translation-based training or evaluation setups. | # C-Xnli: Croatian Extension Of Xnli Dataset
Leo Obadic, Andrej Jertec, Marko Rajnovi ´ **c, Branimir Dropulji** ´ c´
RealNetworks, Inc.
{lobadic,anjertec,mrajnovic,bdropuljic}@realnetworks.com
## Abstract
Comprehensive multilingual evaluations have been encouraged by emerging cross-lingual benchmarks and constrained by existing parallel datasets. To partially mitigate this limitation, we extended the Cross-lingual Natural Language Inference (XNLI) corpus with Croatian. The development and test sets were translated by a professional translator, and we show that Croatian is consistent with other XNLI
dubs. The train set is translated using Facebook's 1.2B parameter m2m_100 model. We thoroughly analyze the Croatian train set and compare its quality with the existing machinetranslated German set. The comparison is based on 2000 manually scored sentences per language using a variant of the Direct Assessment (DA) score commonly used at the Conference on Machine Translation (WMT). Our findings reveal that a less-resourced language like Croatian is still lacking in translation quality of longer sentences compared to German.
However, both sets have a substantial amount of poor quality translations, which should be considered in translation-based training or evaluation setups.
## 1 Introduction
Natural language processing has developed rapidly in recent years. Models are starting to achieve human-like performance, but most of these achievements are concentrated on only a small fraction of the world's 7000+ languages. This is to be expected due to the nature of linguistic annotation, which is not only tedious, subjective, and costly, but also requires domain experts, which are in decline (Lauscher et al., 2020).
There are two main approaches commonly used to handle that problem from the models' perspective. The first approach relies on cross-lingual transfer, where the model is pretrained to learn multilingual representations (Conneau et al., 2020; Pires et al., 2019), while the other approach relies heavily on Machine Translation (MT) systems to translate the text from a low-resource language to a high-resource language (or vice versa). Both approaches can be easily evaluated on cross-lingual benchmarks such as XTREME (Hu et al., 2020) or XGLUE (Liang et al., 2020). They consist of crosslingual datasets grouped by task to allow comprehensive evaluation. Unfortunately, XTREME covers 40 languages and XGLUE only 19.
Since none of these benchmarks include Croatian language in any of their datasets, and Crosslingual Natural Language Inference (XNLI; Conneau et al., 2018) corpus is included in both, we decided to extend XNLI with Croatian (CXNLI). The task is to classify whether a premise contradicts, entails, or is neutral to the hypothesis. XNLI's development and test sets are crowdsourced in English and human-translated into 14 languages, while MultiNLI's (Williams et al., 2018)
training set is used for training. It also consists of machine-translated sets required for the translatetrain and translate-test paradigms.
Our Croatian extension is created in the same manner as its XNLI parent. The development and test sets are translated by a professional translator.
Since XNLI provides translate-train, translate-dev and translate-test sets, we opted for Facebook's 1.2B parameter m2m_100 MT model (Fan et al.,
2020) to create our own translations.
It has been shown that MT models still suffer from errors like mistranslations, non-translations and hallucinations (Freitag et al., 2021; Raunak et al., 2021), which motivated us to analyze the quality of our dataset. For this purpose, we sampled 2000 sentences per language in both Croatian and German, and evaluated the translations using a variant of the Direct Assessment (DA) score proposed in the Multilingual Quality Estimation dataset (MLQE; Fomicheva et al., 2022).
To summarize, our contributions are the following: (1) we create and analyze the Croatian exten2258 sion of XNLI and provide baseline models, (2) we create Quality Estimation (QE) datasets for Croatian and German to evaluate the quality of machinetranslated sentences from the translate-train sets, and (3) we quantify the textual overlap between hypothesis and premise and analyze its impact on baseline models.
## 2 Datasets 2.1 C-Xnli
In creating the dataset, we follow the same procedure as Conneau et al. (2018). We hired a native Croatian professional translator to translate the English development (2490 samples) and test
(5010 samples) sets of the XNLI dataset into Croatian. Premises and hypotheses were given to the translator separately to ensure that the premises did not provide context for the hypotheses. The English training set, derived from MultiNLI and containing 392,702 samples, was translated into Croatian using a selected MT model. We considered a total of eight models and opted for Facebook's multilingual m2m_100 model with 1.2B parameters because of its highest BLEU score (Papineni et al., 2002) on the FLORES dataset (Guzmán et al., 2019), as shown in Table 1. All of m2m_100 and mbart models are available on fairseq1(Ott et al., 2019), whereas opus models are available on Helsinki-NLP2(Tiedemann, 2020; Tiedemann and Thottingal, 2020) and are evaluated by MarianNMT (Junczys-Dowmunt et al., 2018).
| model name | BLEU |
|--------------|--------|
| m2m_100_1.2B | 27.81 |
| opus_sla | 25.73 |
| opus_hr | 25.64 |
| m2m_100_615M | 23.74 |
| mbart50_en2m | 23.72 |
| m2m_100_418M | 22.95 |
| mbart50_m2m | 22.66 |
| m2m_100_175M | 15.67 |
Table 1: Translation scores on Croatian part of FLORES
devtest set for each model.
## 2.2 Da Scores
To evaluate the quality of the system used to translate English to Croatian, we compare the generated translations with the available translations from 1https://github.com/facebookresearch/fairseq 2https://github.com/Helsinki-NLP
a high-resource language. We score a sample of Croatian and German translations from the train set and compare the results. The sentences were sampled using a semantic similarity-based metric that correlates with translation quality (Cer et al.,
2017) to flatten the original distribution of scores and analyze samples of diverse quality. A cosine score between the multilingual sentence representations from both LASER (Artetxe and Schwenk, 2019) and SBERT (Reimers and Gurevych, 2019)
were used to measure semantic similarity between the source and translated sentences. These models are commonly used at the Conference on Machine Translation (WMT) for QE task (Specia et al., 2021, 2020). The SBERT we used is a multilingual variant trained on the paraphrase dataset which has slightly better performance than the models trained on similarity tasks (Reimers and Gurevych, 2020).
By utilizing a histogram of cosine scores with a bin size of 0.05, we adopted a circular sampling approach to randomly select one premise from each bin until a total of 50 premises were obtained. Similarly, we followed the same procedure for hypotheses, alternating between SBERT
and LASER cosine scores. Furthermore, we implemented an additional criterion to ensure the inclusion of all premises and hypotheses that share a common premise. This entire process was repeated until we reached a 1000 samples each, for both SBERT and LASER cosine scores (2000 in total).
We scored the samples using the procedure described by Fomicheva et al. (2022). Annotators were asked to rate the translation quality for each sentence on a scale 0–100. Sentences were initially annotated by three annotators. If the range of the most diverging scores exceeded 30 points, an additional annotator was asked to replace the most diverging one until convergence was achieved. The annotators' raw scores were converted to z-scores3; the final score is the average of all scores after convergence. More information about annotators, and annotation procedure is presented in Appendix A.
## 3 Analyses And Results 3.1 C-Xnli And Da Scores
To demonstrate that our extension has similar properties to its parent XNLI, we perform the following analyses. We tokenize C-XNLI's sentences with MOSES tokenizer and obtain the average number 3The normalization according to each individual annotator's overall mean and standard deviation.
| ar | bg | de | el | es | fr | hi | ru | sw | th | tr | ur | vi | zh | hr | |
|------------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|
| XX-En BLEU | 35.2 | 38.7 | 39.3 | 42.1 | 45.8 | 41.2 | 27.3 | 27.1 | 21.3 | 22.6 | 29.9 | 24.4 | 23.6 | 24.6 | 41.8 |
| En-XX BLEU | 15.8 | 34.2 | 38.8 | 42.4 | 48.5 | 49.3 | 37.5 | 24.9 | 24.6 | 21.4 | 21.9 | 24.1 | 39.9 | 23.2 | 42.1 |
Table 2: BLEU scores calculated on XNLI test set reported by Conneau et al. (2018), extended with Croatian using MOSES tokenizer. XX-En stands for any language to English, whereas En-XX stands for English to any language translation.
of tokens in premises (19.0) which is nearly double the number in hypotheses (9.3) - a ratio that is consistent with other XNLI languages (see Appendix C).
Another analysis Conneau et al. (2018) provide is the BLEU score of their MT systems translating to and from the target language. We have extended their results to include those for the Croatian language (Table 2). Our translations from English to Croatian (EN-XX in the table) have the fourth-best BLEU score. These findings are not too surprising since the MT we use is more recent. The distribution of DA scores for Croatian and German is shown in Figure 1. We can observe that Croatian, although is a lower-resourced language, it has a slightly higher translation quality, as the mean of Croatian DA scores is almost identical to a German one.
![2_image_1.png](2_image_1.png)
The correlations between the LASER and SBERT cosine scores and DA scores for both languages are shown in Table 3, with p < 0.05. The correlations for German are higher, and the LASER
cosines tend to correlate less.
| hr | de | |
|-------|------|------|
| SBERT | 0.57 | 0.61 |
SBERT 0.57 0.61
LASER 0.45 0.54
Table 3: Spearman correlation calculated between cosine score and DA annotations.
In Figure 2 we can see that the Croatian model is more likely to make a mistake on premises compared to the German model.
![2_image_0.png](2_image_0.png)
## 3.2 Overlaps
The analysis presented here extends Artetxe et al.'s
(2020) work where authors demonstrate that the overlap between hypotheses and premises is an overlooked bias in the XNLI dataset, caused by access to premise during hypothesis generation in English, and no access to it during translation into other languages. They decrease the bias by back-translating data and improve their results. To demonstrate the existence of that bias, we take a more direct approach and define a metric that represents overlap - the proportion of copied text from premise to hypothesis. It is the number of character N-grams which occur in both hypothesis and premise, divided by the number of possible character N-grams in the hypothesis. In Table 4 we presented those overlaps using bi-grams, N = 2.
We can observe that in the training set, the overlap is 5% to 20% higher compared to development and test sets. In order to investigate that even further, we asked our professional translator to translate 1% of our C-XNLI dataset: 100 sentences which consist of 25 premises and 75 of their hypotheses.
We made sure that the premise was given alongside each hypothesis so that it provides context to it in order to measure the influence on the overlap since, in the translation effort, premises and hypotheses were given separately. Our representative sample contained similar genre distribution, overlap distribution, and similar development vs. test overlap ratio. Our results show that when using N = 2, biased sample has 8% increase in overlap, whereas for N = {3, 4, 5}, it increased by ∼ 17%.
2260
| split | ar | bg | de | el | en | es | fr | hi | ru | sw | th | tr | ur | vi | zh | hr |
|---------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|
| dev | 0.43 | 0.50 | 0.56 | 0.47 | 0.57 | 0.55 | 0.54 | 0.40 | 0.45 | 0.52 | 0.41 | 0.48 | 0.38 | 0.50 | 0.21 | 0.54 |
| test | 0.44 | 0.51 | 0.56 | 0.48 | 0.58 | 0.56 | 0.55 | 0.40 | 0.46 | 0.52 | 0.41 | 0.49 | 0.39 | 0.49 | 0.21 | 0.53 |
| train | 0.54 | 0.60 | 0.62 | 0.56 | 0.62 | 0.62 | 0.61 | 0.52 | 0.55 | 0.54 | 0.52 | 0.54 | 0.36 | 0.56 | 0.48 | 0.56 |
Table 4: Average overlap between hypotheses and premises for each language and split.
| Paradigm | ar | bg | de | el | en | es | fr | hi | ru | sw | th | tr | ur | vi | zh | hr | Avg | Avg+hr Ctr | Cde | Cte | |
|--------------------------------------------------------------------------------------------------------------------------------------------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|-------|--------------|-------|-------|------|
| Fine-tune multilingual model on English training set (ZERO-SHOT) en 72.0 77.7 76.4 75.8 84.5 78.8 78.0 69.7 | 75.5 | 65.4 | 71.6 | 72.6 | 65.5 | 74.3 | 72.9 | 78.0 | 74.0 | 74.3 | 0.84 | 0.73 | 0.74 | | | | | | | | |
| Fine-tune multilingual model on all training sets (TRANSLATE-TRAIN-ALL) Conneau et al. (2020) 77.3 81.3 80.3 80.4 85.4 82.2 81.4 76.1 79.7 | 73.1 | 77.9 | 78.6 | 73.0 | 79.7 | 80.2 | - | 79.1 | - | 0.82 | 0.65 | 0.67 | | | | | | | | | |
| all | 77.3 | 80.8 | 80.5 | 79.8 | 84.8 | 81.7 | 80.7 | 75.5 | 79.0 | 72.6 | 77.4 | 77.8 | 72.0 | 79.0 | 78.8 | 80.6 | 78.5 | 78.6 | 0.87 | 0.75 | 0.76 |
| all_plus_hr | 77.2 | 81.1 | 80.3 | 79.9 | 84.8 | 81.9 | 80.9 | 75.4 | 78.6 | 72.2 | 77.2 | 77.7 | 71.1 | 79.3 | 78.8 | 81.0 | 78.4 | 78.6 | 0.85 | 0.73 | 0.75 |
| Fine-tune multilingual model on each training set (TRANSLATE-TRAIN) 74.0 79.8 79.3 77.0 84.5 80.8 79.0 72.9 | 77.8 | 69.7 | 67.1 | 75.2 | 66.9 | 78.4 | 77.6 | 80.2 | 76.0 | 76.3 | 0.83 | 0.76 | 0.77 | | | | | | | | |
| Translate everything to English and use English-only model (TRANSLATE-TEST) en 72.8 77.7 76.6 76.4 84.5 78.9 77.6 67.7 73.4 63.4 | 68.6 | 72.3 | 63.0 | 70.7 | 72.8 | 79.5 | 73.1 | 73.5 | 0.79 | 0.70 | 0.71 | | | | | | | | | | |
## 3.3 Xlm-R Setups
We tested cross-lingual transfer using zero-shot and translate-based setups. For each, we employ pretrained XLM-R Base model (Conneau et al., 2020),
implemented in Transformers library (Wolf et al.,
2020). In the zero-shot approach, we fine-tune our model on English samples. In the translate-train approach, we fine-tune on translations of a training set, whereas in translate-train-all, we fine-tune it on concatenated training translations. Evaluations are done in all languages. In the translate-test approach, we use the same model from our zero-shot approach and evaluate it on English translations of other languages. We experimented with various hyperparameter configurations and found appropriate ranges. Hyperparameter optimization is done for each setup, and details are presented in the Appendix B.
Results of baseline setups are shown in Table 5.
To demonstrate the comparability of our training setup, we compare XLM-R's reported accuracy with ours, which is only 0.6 points lower in the train-translate-all setup. The performance of the Croatian model is consistently among the TOP5 models. The reason for that might be in the high BLEU score shown in Table 2. Focusing on the best overall model - translate-train-all, we notice that adding Croatian did not drastically change the average performance and decreased it only for distant languages like Urdu and Swahili. Whereas for other languages, it increased or did not change significantly.
Finally, Table 5 also shows how the performance of models on the test set of each language correlates with the bi-gram overlaps in the train, development, and test sets of that particular language. There is a consistent high correlation between the overlap in all sets and models' performance (p < 0.05).
However, a lower correlation is seen in the development and test sets. This observation could be attributed to the fact that increasing the overlap of a particular language makes it more similar to the English set, in terms of overlap, thus improving the performance. However, as we showed in Subsection 3.2, the overlap in the development and test sets is artificially lower due to biased translation.
Alternatively, high training overlaps might indicate that the model is learning to detect the occurrence of overlapping cues.
## 4 Conclusion
In this work, we extended XNLI to include the Croatian language. The development and test sets were translated by a professional translator. We have successfully demonstrated that the quality of the development and test sets is comparable to that of the other languages. To validate the machine-translated training set, we compare our Croatian translations with those available for a high-resourced language - German. The comparison is based on 2000 manually scored sentences from German and Croatian train sets using a variant of DA scores normalized by z-score. Our results show that the Croatian MT model performs slightly better because it's more up-to-date, even though it's a lower-resourced language. We also found that the Croatian translation model performs poorly on longer sentences - premises.
Finally, we present an overlap metric to measure the textual overlap between the premise and hypothesis. We find that the training set has larger overlaps than the development and test sets. These overlaps resulted in a high correlation between the models' scores, indicating that a model uses cues from the data that also correlate with overlaps.
We provide our datasets under the same license4 as the XNLI dataset, and also make the accompanying code available on GitHub5. We hope that by sharing our datasets, researchers will have the opportunity to gain further insights and expand their knowledge in the field of cross-lingual transfer.
## Limitations
In each contribution of this work, we can isolate several potential limitations. In creating C-XNLI, the MT model for the formation of the train set was chosen based on the results from a single dataset. Additionally, an assumption that the model is plagued with typical issues that affect MT models was investigated on a small dataset. Although we are skeptical of the MT model's performance and perform QE scoring of the small dataset by a group of annotators and analysis to ascertain its performance, we are only comparing Croatian machinetranslation results to results from a single language
(German), assuming that results would hold for other high-resource languages. Also, for some MT evaluations, we use a single metric (BLEU) known to have many problems but only generally considered to correlate with human judgment.
Our hyperparameter optimizations are of limited scope. All hyperparameters are fixed, except the learning rate with four possible values we search over. Furthermore, we only used three seeds. We could not perfectly reproduce the results outlined in the paper of our baseline XLM-R Base model, partly due to a lack of elucidation in the original 4CC BY-NC 4.0 5https://github.com/lobadic/C-XNLI
paper and partly due to limited hyperparameter optimizations.
Finally, we do not elucidate further and experiment with the discovered correlation between the models' performance and the overlap in datasets, and we leave it for future work.
## References
Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2020.
Translation artifacts in cross-lingual transfer learning.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing (EMNLP),
pages 7674–7684, Online. Association for Computational Linguistics.
Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond. Transactions of the Association for Computational Linguistics, 7:597–610.
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo LopezGazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1–14, Vancouver, Canada. Association for Computational Linguistics.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440–
8451, Online. Association for Computational Linguistics.
Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. Xnli: Evaluating crosslingual sentence representations. In *Proceedings of* the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, and Armand Joulin. 2020. Beyond english-centric multilingual machine translation.
Marina Fomicheva, Shuo Sun, Erick Fonseca, Chrysoula Zerva, Frédéric Blain, Vishrav Chaudhary, Francisco Guzmán, Nina Lopatina, Lucia Specia, and André F. T. Martins. 2022. MLQE-PE: A multilingual quality estimation and post-editing dataset. In Proceedings of the Thirteenth Language Resources
and Evaluation Conference, pages 4963–4974, Marseille, France. European Language Resources Association.
Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT
2019: Demonstrations.
Markus Freitag, George Foster, David Grangier, Viresh Ratnakar, Qijun Tan, and Wolfgang Macherey. 2021.
Experts, Errors, and Context: A Large-Scale Study of Human Evaluation for Machine Translation. *Transactions of the Association for Computational Linguistics*, 9:1460–1474.
Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc'Aurelio Ranzato. 2019. The FLORES evaluation datasets for low-resource machine translation: Nepali–English and Sinhala–
English. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6098–6111, Hong Kong, China. Association for Computational Linguistics.
Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson.
2020. XTREME: A massively multilingual multitask benchmark for evaluating cross-lingual generalisation. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 4411–4421. PMLR.
Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Alham Fikri Aji, Nikolay Bogoychev, André F. T.
Martins, and Alexandra Birch. 2018. Marian: Fast neural machine translation in C++. In Proceedings of ACL 2018, System Demonstrations, pages 116–121, Melbourne, Australia. Association for Computational Linguistics.
Anne Lauscher, Vinit Ravishankar, Ivan Vulic, and ´
Goran Glavaš. 2020. From zero to hero: On the limitations of zero-shot language transfer with multilingual Transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4483–4499, Online. Association for Computational Linguistics.
Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu, Fenfei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, Xiaodong Fan, Ruofei Zhang, Rahul Agrawal, Edward Cui, Sining Wei, Taroon Bharti, Ying Qiao, Jiun-Hung Chen, Winnie Wu, Shuguang Liu, Fan Yang, Daniel Campos, Rangan Majumder, and Ming Zhou. 2020. XGLUE: A new benchmark datasetfor cross-lingual pre-training, understanding and generation. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6008–6018, Online. Association for Computational Linguistics.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting on Association for Computational Linguistics, ACL '02, page 311–318, USA.
Association for Computational Linguistics.
Telmo Pires, Eva Schlinger, and Dan Garrette. 2019.
How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996–5001, Florence, Italy. Association for Computational Linguistics.
Vikas Raunak, Arul Menezes, and Marcin JunczysDowmunt. 2021. The curious case of hallucinations in neural machine translation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 1172–1183, Online. Association for Computational Linguistics.
Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.
Nils Reimers and Iryna Gurevych. 2020. Making monolingual sentence embeddings multilingual using knowledge distillation. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4512–4525, Online. Association for Computational Linguistics.
Lucia Specia, Frédéric Blain, Marina Fomicheva, Erick Fonseca, Vishrav Chaudhary, Francisco Guzmán, and André F. T. Martins. 2020. Findings of the WMT
2020 shared task on quality estimation. In *Proceedings of the Fifth Conference on Machine Translation*,
pages 743–764, Online. Association for Computational Linguistics.
Lucia Specia, Frédéric Blain, Marina Fomicheva, Chrysoula Zerva, Zhenhao Li, Vishrav Chaudhary, and André F. T. Martins. 2021. Findings of the WMT
2021 shared task on quality estimation. In *Proceedings of the Sixth Conference on Machine Translation*,
pages 684–725, Online. Association for Computational Linguistics.
Jörg Tiedemann. 2020. The Tatoeba Translation Challenge - Realistic data sets for low resource and multilingual MT. In *Proceedings of the Fifth Conference* on Machine Translation, pages 1174–1182, Online.
Association for Computational Linguistics.
Jörg Tiedemann and Santhosh Thottingal. 2020. OPUSMT - Building open translation services for the World. In *Proceedings of the 22nd Annual Conferenec of the European Association for Machine Translation (EAMT)*, Lisbon, Portugal.
Adina Williams, Nikita Nangia, and Samuel Bowman.
2018. A broad-coverage challenge corpus for sentence understanding through inference. In *Proceedings of the 2018 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1
(Long Papers), pages 1112–1122. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
## A Direct Assessment A.1 Annotation
In order to increase the quality of our annotations, we firstly provided a set of 50 training samples, and only later provided the other samples to the annotators. The annotators are instructed to score each sentence on a scale 0–100 according to the perceived translation quality (Fomicheva et al., 2022).
Specifically, the 0–10 range for incorrect translations; 11–29 for translations with a few correct keywords, but wrongly conveyed meaning; 30–50 for the ones containing major mistakes; 51–69 for translations that convey the meaning of the source, but contain grammatical errors; 70–90 for translations that preserve semantics of the source sentence; and 91–100 for correct translations.
Also, our Croatian annotators are Croatian native students majoring in Linguistics or pursuing a Translation degree. German annotators have the language competence of C1 or above. They were paid per hour. On the contrary, our professional translator was paid according to the regular translation rate in Croatia for a large corpus on a card basis (1800 characters including white spaces).
## A.2 Scores Dataset Creation
When resolving final DA scores, if we ended up in a scenario where the outlier was on either side (e.g.
[0, 20, 40]), we randomly chose one. Furthermore, the process described by Fomicheva et al. (2022) is biased towards the first three annotations, meaning that if two of them are outliers, we'll keep discarding annotations until a third outlier comes. In our process, it happened in ∼ 1% of cases.
## B Hyperparameters
Here we outline hyperparameters used for hyperparameter search of our XLM-R Base model on different XNLI training setups. Every model was trained on 3 epochs, and the best one (out of 3 epochs) was chosen based on evaluation results on the dev set. For all of our experiments we used 2 NVIDIA 3090 GPUs.
| Name | Value(s) |
|-------------------------|------------------------------|
| Max epochs | 3 |
| Optimizer | AdamW |
| Batch size | 32 |
| Warmup proportion | 6% |
| Weight decay | 0.01 |
| Learning rate | [8e−6 , 1e−5 , 3e−5 , 5e−5 ] |
| Learning rate scheduler | linear with warmup |
| Max seq length | 128 |
Table 6: Considered Hyperparameters.
## C Xnli Additional Analyses
| ar | bg | de | el | en | es | fr | hi | ru | sw | th | tr | ur | vi | zh | hr | |
|------------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|
| Premise | 20.7 | 20.9 | 21.1 | 21.0 | 21.7 | 22.1 | 24.1 | 23.2 | 19.6 | 18.7 | 22.1 | 16.8 | 24.1 | 27.6 | 21.8 | 19.0 |
| Hypothesis | 10.2 | 10.4 | 10.8 | 10.6 | 10.7 | 10.9 | 12.4 | 11.9 | 9.7 | 9.0 | 10.4 | 8.4 | 12.3 | 13.5 | 10.8 | 9.3 |
Table 7: Average token lengths per language for hypotheses and premises reported by Conneau et al. (2018),
extended with Croatian using MOSES tokenizer.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
5 (first after the conclusion)
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
0,1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 2,3
✓ B1. Did you cite the creators of artifacts you used?
1,2
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
4 B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
2,3
## C ✓ **Did You Run Computational Experiments?** 3
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? No response.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
3
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
3
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
3 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
2 D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
2, appendix A.1 D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
2 |
ahmad-etal-2023-avatar | {AVATAR}: A Parallel Corpus for {J}ava-Python Program Translation | https://aclanthology.org/2023.findings-acl.143 | Program translation refers to migrating source code from one programming language to another. It has tremendous practical value in software development, as porting software across languages is time-consuming and costly. Automating program translation is of paramount importance in software migration, and recently researchers explored unsupervised approaches due to the unavailability of parallel corpora. However, the availability of pre-trained language models for programming languages enables supervised fine-tuning with a small number of labeled examples. Therefore, we present AVATAR, a collection of 9,515 programming problems and their solutions written in two popular languages, Java and Python. AVATAR is collected from competitive programming sites, online platforms, and open-source repositories. Furthermore, AVATAR includes unit tests for 250 examples to facilitate functional correctness evaluation. We benchmark several pre-trained language models fine-tuned on AVATAR. Experiment results show that the models lack in generating functionally accurate code. | # Avatar**: A Parallel Corpus For Java-Python Program Translation**
Wasi Uddin Ahmad†**, Md Golam Rahman Tushar**§
Saikat Chakraborty‡, **Kai-Wei Chang**†
†University of California, Los Angeles, ‡Microsoft Research, §Independent Contributor
†{wasiahmad, kwchang}@cs.ucla.edu
‡[email protected], §[email protected]
## Abstract
Program translation refers to migrating source code from one programming language to another. It has tremendous practical value in software development, as porting software across languages is time-consuming and costly. Automating program translation is of paramount importance in software migration, and recently researchers explored unsupervised approaches due to the unavailability of parallel corpora.
However, the availability of pre-trained language models for programming languages enables supervised fine-tuning with a small number of labeled examples. Therefore, we present AVATAR, a collection of 9,515 programming problems and their solutions written in two popular languages, Java and Python. AVATAR is collected from competitive programming sites, online platforms, and open-source repositories.
Furthermore, AVATAR includes unit tests for 250 examples to facilitate functional correctness evaluation. We benchmark several pretrained language models fine-tuned on AVATAR.
Experiment results show that the models lack in generating functionally accurate code.
1 **Introduction**
Software developers and researchers often require to convert software codebases or research prototypes from one platform to another or rewrite them in the target programming languages. Manually rewriting software is time-consuming, expensive, and requires expertise in both the source and target languages. For example, the Commonwealth Bank of Australia spent around $750 million and 5 years translating its platform from COBOL to Java (Lachaux et al., 2020). A program translation system that converts the source code of a program written in a programming language to an equivalent program in a different programming language is known as a transcompiler, transpiler, or source-tosource compiler. Transcompilers have a prodigious practical value; they could help to reduce the translation efforts of developers and researchers by not requiring them to write code from scratch, instead, they can edit the translated code with less effort.
The conventional transcompilers are based on rule-based approaches; they first convert source code into an Abstract Syntax Tree (AST) and then apply handwritten rules to translate to the target language. Development and adaptation of transcompilers need advanced knowledge and therefore are available in a handful of programming languages.
Undoubtedly, the automation of program translation would facilitate software development and research tremendously.
With the recent advancements in data-driven neural machine translation (NMT) approaches between natural languages, researchers have started investigating them for programming language translation.
Lachaux et al. (2020) trained an NMT system in an unsupervised fashion using large-scale monolingual source code from GitHub that showed noteworthy success in source code translation between Java, Python, and C++ languages. Pre-trained language models (PLMs) of code have been shown to work well on Java-C\# translation after fine-tuning on a small amount of parallel examples (Feng et al.,
2020; Guo et al., 2021; Ahmad et al., 2021; Wang et al., 2021). Motivated by these favorable results, in this work, we propose a new parallel corpus of Java and Python programs.
We propose a corpus, AVATAR (jAVA-pyThon progrAm tRanslation) that consists of solutions written in Java and Python for 9,515 programming problems collected from competitive programming sites, online platforms, and open source repositories. AVATAR includes 250 examples with unit tests to facilitate functional correctness evaluation of program translation. We train several baselines, including models trained from scratch or pre-trained on large-scale source code collection and fine-tuned on AVATAR. The experiment results indicate that while the models perform considerably in terms of the lexical match, they lack Fur-
| Source | #Prob. | Java | Python | Soln. / Prob. | Train | Valid / Test | | |
|---------------|----------|--------|----------|-----------------|---------|----------------|---------|-------------|
| #Soln. | AvgL | #Soln. | AvgL | | | | | |
| AtCoder | 871 | 3,990 | 276.5 | 4,344 | 180.3 | [1 - 5] | 14,604 | 36 / 195 |
| Code Jam | 120 | 508 | 390.9 | 460 | 266.5 | [1 - 5] | 1,586/7 | 7/ 19 |
| Codeforces | 2,193 | 6,790 | 246.2 | 10,383 | 123.8 | [1 - 5] | 24,754 | 102 / 436 |
| GeeksforGeeks | 5,019 | 5,019 | 194.8 | 5,019 | 138.4 | 1 | 3,754 | 269 / 996 |
| LeetCode | 107 | 107 | 140.0 | 107 | 97.4 | 1 | 82 | 7 / 18 |
| Project Euler | 162 | 162 | 227.3 | 162 | 139.4 | 1 | 110 | 11 / 41 |
| AIZU | 1,043 | 4,343 | 304.2 | 4,603 | 171.3 | [1 - 5] | 15,248 | 44 / 199 |
| Total | 9,515 | 20,919 | 254.5 | 25,078 | 147.9 | - | 60,138 | 476 / 1,906 |
thermore, AVATAR offers 3,391 parallel functions that we use to train models or fine-tune pre-trained language models and perform function translation evaluation on the dataset released by Lachaux et al.
(2020). Our code and data are released at https:
//github.com/wasiahmad/AVATAR.
## 2 Avatar **Construction**
Data Collection We construct AVATAR based on solutions of computational problems written in Java and Python collected from open source programming contest sites: AtCoder, AIZU Online Judge, Google Code Jam, Codeforces, and online platforms: GeeksforGeeks, LeetCode, Project Euler.
We crawl Codeforces and GeeksforGeeks sites to collect the problem statements and their solutions.
We collect the AtCoder and AIZU data from Puri et al. (2021), Google Code Jam data from Nafi et al.
(2019)
1, and LeetCode and Project Euler problem solutions from open source Github repositories.2,3 We collect [1 - 20] accepted solutions for a single problem written in Java and Python.
Preprocessing & Filtering At first, we tokenize the solution code and remove docstrings and comments from them. We use the javalang4tokenizer for Java and the tokenizer5 of the standard library for Python. After tokenization, we filter out solutions that are longer than a specified 1https://github.com/Kawser-nerd/
CLCDSA
2https://github.com/qiyuangong/
leetcode 3https://github.com/nayuki/
Project-Euler-solutions 4https://github.com/c2nes/javalang 5https://docs.python.org/3/library/
tokenize.html length threshold (= 464). In the initial data collection, there are [1 - 20] accepted solutions for each problem. We filter out solutions and only keep at most 5 solutions per problem. Our goal is to keep the solutions that are maximally different from others in order to increase diversity among solutions of the same problem. We use the open source library difflib6to compare all the solutions pairwise
(individually in Java and Python) and select five solutions that differ most from others.
Data Statistics We split 9,515 problem statements into a 75:5:20 ratio to form 7,133 training, 476 validation, and 1,906 test examples. Table 1 summarizes the data statistics. Since we collect
[1 - 5] accepted solutions for each problem statement in both languages, we form [1 - 25] parallel examples per problem for training. In evaluation, we use multiple ground truths and select the best performance according to the evaluation metrics.
Unit Tests AVATAR presents unit tests for 250 evaluation examples (out of 1,906) to perform functional accuracy evaluation of the translation models.
The unit tests are collected from the publicly available test cases released by AtCoder.7 Parallel Functions AVATAR includes 3,391 parallel Java and Python functions.8 The functions are extracted by parsing programs that include *only* one function. We use them for training models and evaluating using the dataset released by Lachaux et al. (2020).
6https://docs.python.org/3/library/
difflib.html 7https://atcoder.jp/posts/21 8Deduplicated against the evaluation dataset released by Lachaux et al. (2020) using https://github.com/
microsoft/dpu-utils.
## 3 **Experiment & Results** 3.1 **Evaluation Metrics**
BLEU computes the overlap between candidate and reference translations (Papineni et al., 2002).
Syntax Match (SM) represents the percentage of the sub-trees extracted from the candidate program's abstract syntax tree (AST) that match the sub-trees in reference programs' AST.
Dataflow Match (DM) is the ratio of the number of matched candidate data-flows and the total number of the reference data-flows (Ren et al., 2020).
CodeBLEU (CB) is the weighted average of the token level match, syntax level match (SM), and Dataflow match (DM) (Ren et al., 2020).
Execution Accuracy (EA) indicates the percentage of translated programs that are executable (results in no compilation or runtime errors).
Computational Accuracy (CA) Lachaux et al.
(2020) proposed the metric to evaluate whether the candidate translation generates the same outputs as the reference when given the same inputs.
## 3.2 **Models**
We evaluate a variety of models on program and function translation using AVATAR and the evaluation dataset released by Lachaux et al. (2020).
Zero-shot This set of models is evaluated on AVATAR without any training or fine-tuning.
- **TransCoder** is pre-trained in an unsupervised fashion that can translate programs between Java, Python, and C++ languages (Lachaux et al., 2020).
- **DOBF** uses deobfuscation pretraining followed by unsupervised translation (anne Lachaux et al.,
2021).
- **TransCoder-ST** is developed by fine-tuning TransCoder on a parallel corpus created via an automated unit-testing system (Roziere et al., 2022).
Models trained from scratch These models are trained from scratch using AVATAR. We use the sentencepiece tokenizer and vocabulary from Ahmad et al. (2021) in these models.
- **Seq2Seq+Attn.** is an LSTM based sequence-tosequence (Seq2Seq) model with attention mechanism (Bahdanau et al., 2015).
- **Transformer** is a self-attention based Seq2Seq model (Vaswani et al., 2017). We use the Transformer architecture studied in Ahmad et al. (2020).
Pre-trained Models We evaluated three types of pre-trained models (PLMs). First, we evaluate decoder-only PLMs (*e.g.,* CodeGPT) that generate auto-regressively. The second category of PLMs is encoder-only (*e.g.,* CodeBERT). We use a randomly initialized decoder to finetune such models in a Seq2Seq fashion. The third category of PLMs is Seq2Seq models (*e.g.,* PLBART), which we directly finetune on translation tasks. - **CodeGPT and CodeGPT-adapted** are GPT2 (Radford et al., 2019) style models pre-trained on CodeSearchNet (Lu et al., 2021). Note that CodeGPT-adapted starts from the GPT-2 checkpoint, while CodeGPT is pre-trained from scratch.
- **CodeBERT** is an encoder-only model that is pre-trained on unlabeled source code via masked language modeling (MLM) and replaced token detection objectives (Feng et al., 2020).
- **GraphCodeBERT** is pre-trained using MLM,
data flow edge prediction, and variable-alignment between code and its' data flow (Guo et al., 2021).
- **PLBART** is a Transformer LM pre-trained via denoising autoencoding (Ahmad et al., 2021).
- **CodeT5** is a Transformer LM pre-trained via identifier-aware denoising (Wang et al., 2021).
In addition, we fine-tune TransCoder-ST, which is the best translation model in the literature.
## 3.3 **Hyperparameters Details**
We individually fine-tune the models for Java to Python and Python to Java program and function translation, respectively. We fine-tune the models for a maximum of 20 epochs using the Adam (Kingma and Ba, 2015) optimizer with a batch size of 32. We tune the learning rate in the range [1e − 4, 5e − 5, 3e − 5, 1e − 5]. The final models are selected based on the validation BLEU score. We use beam decoding with a beam size set to 10 for inference across all the models.
## 3.4 **Results**
Program Translation The performance comparison of all the experiment models is presented in Table 2. In general, all the models perform well in terms of match-based metrics, *e.g.,* BLEU and CodeBLEU. However, the computational accuracy
(CA) clearly indicates that these models are far from perfect in generating functionally accurate translations. Overall, the best-performing model is PLBART, resulting in the highest execution accuracy (EA) and CA in Java to Python translation.
Models Java to Python Python to Java
BLEU SM DM CB EA CA BLEU SM DM CB EA CA
TransCoder 38.7 31.6 38.2 36.4 77.3 0 45.2 39.3 20.1 32.4 0 0 DOBF 42.0 32.9 **42.9** 38.9 78.3 0 42.3 39.5 19.0 31.2 0 0
TransCoder-ST 41.7 33.1 42.6 39.3 85.8 0 42.5 37.4 20.4 30.7 0 0
Seq2Seq+Attn. 57.4 40.9 34.8 42.6 92.2 2.8 59.5 50.1 26.6 43.0 48.4 0.8 Transformer 39.6 35.0 33.5 34.8 92.3 0.4 43.5 44.9 25.2 35.6 63.8 0.4
CodeGPT 46.3 32.2 22.2 30.2 79.4 2.8 48.9 42.7 34.1 38.0 40.7 2.0
CodeGPT-adapted 44.3 31.6 20.4 29.3 80.2 2.4 48.0 43.0 28.3 36.7 46.7 0.8
CodeBERT 51.1 34.4 29.2 35.0 92.8 0.4 35.1 41.1 31.5 33.2 54.1 0
GraphCodeBERT 57.9 38.0 32.2 39.0 92.9 2.0 38.3 42.6 32.7 36.9 66.8 0
PLBART **63.1 42.2** 37.9 **46.2 96.4 6.8 69.7** 54.2 30.9 48.8 **78.3** 0.8 CodeT5 62.7 41.7 37.9 **46.2** 91.8 6.0 60.8 **55.1 39.6 50.3** 68.7 1.6
TransCoder-ST 55.4 41.6 36.1 43.8 94.9 5.6 66.0 53.3 31.7 48.6 72.4 2.0
Table 2: Test set results using AVATAR for Java-Python program translation. SM, DM, CB, EA, and CA stand for Syntax Match, Dataflow Match, CodeBLEU, Execution Accuracy, and Computational Accuracy, respectively.
Models Java to Python Python to Java
BLEU SM DM CB EA CA BLEU SM DM CB EA CA
TransCoder 72.4 55.7 65.7 67.9 69.2 49.1 65.4 72.6 70.3 70.7 58.9 35.7
DOBF 72.2 56.6 63.7 67.5 73.1 52.2 67.7 72.8 69.4 71.2 63.5 44.4
TransCoder-ST 73.1 57.0 **66.3** 68.7 86.6 68.5 70.0 73.0 69.5 71.9 68.3 58.1
Seq2Seq+Attn. 50.9 53.6 55.2 56.6 51.5 28.9 29.5 44.0 13.5 29.3 18.0 1.5
Transformer 38.5 35.3 40.7 41.2 42.0 2.59 40.6 50.9 20.4 38.5 19.9 1.7
CodeGPT 64.9 53.2 52.7 59.3 65.9 41.8 49.2 54.9 48.5 51.3 47.3 31.1
CodeGPT-adapted 67.4 56.3 55.1 62.0 68.8 50.4 59.0 62.6 56.1 59.7 49.8 35.9
CodeBERT 52.0 45.6 41.5 48.9 45.5 10.4 45.4 54.9 32.6 45.0 25.7 4.2 GraphCodeBERT 58.6 49.6 46.9 54.5 46.8 18.3 51.9 58.9 37.4 50.4 27.0 10.0
PLBART **79.9 64.9** 64.8 **73.2 88.4** 68.9 80.5 **78.6** 67.4 76.8 70.1 57.5
CodeT5 79.4 64.1 63.2 72.5 83.8 61.0 79.0 77.1 67.7 75.9 64.3 52.7
TransCoder-ST 79.3 64.2 64.7 72.9 87.5 **69.4 81.4 78.6 72.1 78.4 73.7 62.0**
Note that the zero EA score of TransCoder, DOBF,
and TransCoder-ST in Python to Java translation is due to not generating a class correctly that fails execution of all translated programs.
Function Translation The performance comparison of all the experiment models is presented in Table 3. Apart from models trained from scratch, CodeBERT, and GraphCodeBERT, all the models perform well in terms of match-based metrics, execution, and computational accuracy. Overall, the best-performing model is fine-tuned TransCoderST, and PLBART is the closest competitor model.
## 3.5 **Analysis**
Execution-based Evaluation Breakdown We present the breakdown for the test-case-based evaluation in Table 4 (in the Appendix). We present the number of success, failure, and error counts. For program translation evaluation, AVATAR consists of 250 evaluation examples with unit tests. For function translation evaluation, we use the test examples released by (Lachaux et al., 2020). Among the examples, 464 Java to Python and 482 Python to Java examples have test cases. We further present the compilation and runtime error breakdown in Table 5 (in the Appendix).
To analyze program translation errors, we manually examine the errors made by PLBART. We observe that PLBART does not generate the import statements in Java properly, resulting in many failures to find symbols (*e.g.,* StringTokenizer, BufferedReader). Moreover, a quick look at the error made by all models reveals that *type mismatch* is one of the primary causes of compilation errors in all the models. We also notice that models fail to translate longer programs.
Qualitative Examples We demonstrate a couple of qualitative examples of Java to Python program translation by PLBART in Figure 1. We observe that PLBART correctly translates Java API
Math.pow() to pow() in Python. We also observe that PLBART learns to translate a class with a function in Java to a function only in Python.
In Figure 2, we present an example of Python to Java program translation. We see PLBART fail to translate correctly. We notice PLBART unnecessarily generates InputReader class that uses BufferedReader to read from standard input. Furthermore, we observed another behavior:
when translating from Python to Java, PLBART
generates classes with the name either Main or GFG. This is presumably due to the generic class name used in many programming solutions and GeeksforGeeks examples.
We present qualitative examples of Java to Python and Python to Java function translation by PLBART in Figure 3 and 4. Overall, we observe a pretty good quality of translations, although there are translations that do not pass all the unit tests, as demonstrated by the performance in terms of computational accuracy in the main result.
## 4 **Related Works**
Several works in the past have contributed to building a parallel corpus for source code translation.
Nguyen et al. (2013) curated the first parallel corpus of Java and C\# functions by developing a semiautomatic tool to search for similar class names and method signatures from two open source projects, Lucene and Db4o. Similarly, Karaivanov et al.
(2014) built a mining tool that uses the Java and C\# ANTLR grammar to search for similar methods from five open source projects - Db4o, Lucene, Hibernate, Quartz, and Spring. Subsequent works used libraries and transcompilers to construct parallel corpus. For example, Aggarwal et al. (2015) used 2to3, a Python library9and Chen et al. (2018) used a transcompiler to create a parallel corpus between Python 2 - Python 3 and CoffeeScript - Javascript, respectively. Recently, Lachaux et al. (2020) collected programming problem solutions in Java, Python, and C++ (∼850 func-9https://docs.python.org/2/library/
2to3 tions in each language) from GeeksforGeeks to evaluate their proposed translation model. Concurrent works (CodeGeeX, 2022; Athiwaratkun et al.,
2023) present unit tests-based benchmarks to evaluate zero-shot translation capabilities of large language models. Different from these works, we propose a sizeable parallel corpus of Java and Python programs by collecting programming problem solutions from competitive programming sites, online platforms, and open-source repositories.
## 5 **Conclusion**
This work proposes a parallel corpus of Java and Python programs to contribute to the development of translation systems for programming languages that have a sizeable impact on software development. We evaluate several neural machine translation systems on the proposed dataset and perform analysis to reveal crucial factors that affect program translation accuracy. In our future work, we want to increase the size of the parallel corpus and support more programming languages.
## Limitations
The proposed benchmark has a few limitations.
First, AVATAR has a smaller training data size which limits training deep neural models from scratch. Second, the dataset covers only two programming languages. Third, AVATAR includes parallel examples of programs and functions that mostly focus on the use of data structures and algorithms. On the other hand, most software developers write programs as part of software projects that include API dependencies. Therefore, it is unknown whether AVATAR could facilitate program or function translation for such settings. Due to a lack of computational resources, we could not evaluate large language models (LLMs) (Nijkamp et al., 2023; Fried et al., 2023; CodeGeeX, 2022).
Therefore, it is unknown how much AVATAR could bring value for LLMs. However, our code release would help to evaluate LLMs.
## Ethics Statement
License The LeetCode examples we crawled from the GitHub repository are under an MIT license. On the other hand, Project Euler and Code Jam examples collected from GitHub do not have any license information. The AtCoder and AIZU examples are collected from CodeNet which is under Apache-2.0 license. We crawl examples from GeeksforGeeks and Codeforces and release them under CC BY-NC-SA 4.0 license.
To use the AVATAR benchmark, we are required to adhere to these licenses strictly.
Carbon Footprint We avoided fine-tuning large models due to computational limitations, resulting in a reduced impact on the environment. We finetuned nine models on program and function translation tasks and due to the smaller size of the training data, all jobs took a total of 1–2 days on RTX 2080 Ti GPUs. A total of 100 hours of training in a single RTX 2080 Ti GPU results in approximately 7.5kg of carbon emission into the environment.10 Sensitive Information AVATAR composed of parallel programs and functions that do not have any natural language (NL) comments or docstring.
We remove them to get rid of any personally identifiable information or offensive content. However, there could still be such content in the form of string as we do not manually check each example.
## Acknowledgements
We thank the anonymous reviewers for their insightful comments.
## References
Karan Aggarwal, Mohammad Salameh, and Abram Hindle. 2015. Using machine translation for converting python 2 to python 3 code. Technical report, PeerJ
PrePrints.
Wasi Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2020. A transformer-based approach for source code summarization. In *Proceedings of the 58th Annual Meeting of the Association* for Computational Linguistics, pages 4998–5007, Online. Association for Computational Linguistics.
Wasi Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2021. Unified pre-training for program understanding and generation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 2655–2668, Online. Association for Computational Linguistics.
Marie anne Lachaux, Baptiste Roziere, Marc Szafraniec, and Guillaume Lample. 2021. DOBF: A deobfuscation pre-training objective for programming languages. In *Advances in Neural Information Processing Systems*.
10Estimations were conducted using the MachineLearning Impact calculator presented in (Lacoste et al., 2019). We use Amazon Web Services as the provider.
Ben Athiwaratkun, Sanjay Krishna Gouda, Zijian Wang, Xiaopeng Li, Yuchen Tian, Ming Tan, Wasi Uddin Ahmad, Shiqi Wang, Qing Sun, Mingyue Shang, Sujan Kumar Gonugondla, Hantian Ding, Varun Kumar, Nathan Fulton, Arash Farahani, Siddhartha Jain, Robert Giaquinto, Haifeng Qian, Murali Krishna Ramanathan, Ramesh Nallapati, Baishakhi Ray, Parminder Bhatia, Sudipta Sengupta, Dan Roth, and Bing Xiang. 2023. Multi-lingual evaluation of code generation models. In *The Eleventh International* Conference on Learning Representations.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In *International Conference on Learning Representations*.
Xinyun Chen, Chang Liu, and Dawn Song. 2018. Treeto-tree neural networks for program translation. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc.
CodeGeeX. 2022. Codegeex: A multilingual code generation model. http://keg.cs.tsinghua. edu.cn/codegeex.
Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. 2020. CodeBERT: A pre-trained model for programming and natural languages. In *Findings of the Association* for Computational Linguistics: EMNLP 2020, pages 1536–1547, Online. Association for Computational Linguistics.
Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Scott Yih, Luke Zettlemoyer, and Mike Lewis. 2023. Incoder:
A generative model for code infilling and synthesis.
In *The Eleventh International Conference on Learning Representations*.
Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Jian Yin, Daxin Jiang, et al. 2021. Graphcodebert: Pre-training code representations with data flow. In International Conference on Learning Representations.
Svetoslav Karaivanov, Veselin Raychev, and Martin Vechev. 2014. Phrase-based statistical translation of programming languages. In Proceedings of the 2014 ACM International Symposium on New Ideas, New Paradigms, and Reflections on Programming &
Software, pages 173–184.
Diederik P Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In International Conference on Learning Representations.
Marie-Anne Lachaux, Baptiste Roziere, Lowik Chanussot, and Guillaume Lample. 2020. Unsupervised translation of programming languages. In *Advances in Neural Information Processing Systems*,
volume 33, pages 20601–20611. Curran Associates, Inc.
Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, and Thomas Dandres. 2019. Quantifying the carbon emissions of machine learning. arXiv preprint arXiv:1910.09700.
Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin Clement, Dawn Drain, Daxin Jiang, Duyu Tang, Ge Li, Lidong Zhou, Linjun Shou, Long Zhou, Michele Tufano, MING GONG, Ming Zhou, Nan Duan, Neel Sundaresan, Shao Kun Deng, Shengyu Fu, and Shujie LIU. 2021. CodeXGLUE: A machine learning benchmark dataset for code understanding and generation. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1).
Kawser Wazed Nafi, Tonny Shekha Kar, Banani Roy, Chanchal K Roy, and Kevin A Schneider. 2019. Clcdsa: cross language code clone detection using syntactical features and api documentation. In *2019* 34th IEEE/ACM International Conference on Automated Software Engineering (ASE), pages 1026–
1037. IEEE.
Anh Tuan Nguyen, Tung Thanh Nguyen, and Tien N
Nguyen. 2013. Lexical statistical machine translation for language migration. In Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering, pages 651–654.
Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. 2023. Codegen: An open large language model for code with multi-turn program synthesis. In The Eleventh International Conference on Learning Representations.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Ruchir Puri, David S Kung, Geert Janssen, Wei Zhang, Giacomo Domeniconi, Vladimir Zolotov, Julian Dolby, Jie Chen, Mihir Choudhury, Lindsey Decker, Veronika Thost, Luca Buratti, Saurabh Pujar, Shyam Ramji, Ulrich Finkler, Susan Malaika, and Frederick Reiss. 2021. Codenet: A large-scale AI for code dataset for learning a diversity of coding tasks. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track
(Round 2).
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. *OpenAI* blog, 1(8):9.
Shuo Ren, Daya Guo, Shuai Lu, Long Zhou, Shujie Liu, Duyu Tang, Ming Zhou, Ambrosio Blanco, and Shuai Ma. 2020. Codebleu: a method for automatic evaluation of code synthesis. arXiv preprint arXiv:2009.10297.
Baptiste Roziere, Jie Zhang, Francois Charton, Mark Harman, Gabriel Synnaeve, and Guillaume Lample.
2022. Leveraging automated unit tests for unsupervised code translation. In International Conference on Learning Representations.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems 30*, pages 5998–6008. Curran Associates, Inc.
Yue Wang, Weishi Wang, Shafiq Joty, and Steven C.H.
Hoi. 2021. CodeT5: Identifier-aware unified pretrained encoder-decoder models for code understanding and generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8696–8708, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
| Models | Java to Python | Python to Java | | | | | | | | |
|------------------------------------------------------------------------------------------------------------------------|------------------|------------------|---------|---------|--------|-------|---------|---------|---------|-----|
| #Tests | Error | Failure | Timeout | Success | #Tests | Error | Failure | Timeout | Success | |
| Program Translation TransCoder 250 | 53 | 197 | 0 | 0 | 250 | 250 | 0 | 0 | 0 | |
| DOBF | 250 | 62 | 188 | 0 | 0 | 250 | 250 | 0 | 0 | 0 |
| TransCoder-ST | 250 | 55 | 195 | 0 | 0 | 250 | 250 | 0 | 0 | 0 |
| Seq2Seq+Attn. | 250 | 143 | 98 | 2 | 7 | 250 | 218 | 30 | 0 | 2 |
| Transformer | 250 | 156 | 92 | 1 | 1 | 250 | 246 | 3 | 0 | 1 |
| CodeGPT | 250 | 140 | 102 | 1 | 7 | 250 | 169 | 76 | 0 | 5 |
| CodeGPT-adapted | 250 | 119 | 121 | 4 | 6 | 250 | 245 | 3 | 0 | 2 |
| CodeBERT | 250 | 189 | 57 | 3 | 1 | 250 | 248 | 2 | 0 | 0 |
| GraphCodeBERT | 250 | 93 | 147 | 5 | 5 | 250 | 216 | 34 | 0 | 0 |
| PLBART | 250 | 102 | 124 | 7 | 17 | 250 | 241 | 6 | 1 | 2 |
| CodeT5 | 250 | 111 | 119 | 5 | 15 | 250 | 226 | 20 | 0 | 4 |
| TransCoder-ST | 250 | 135 | 92 | 9 | 14 | 250 | 194 | 51 | 0 | 5 |
| Function Translation TransCoder 464 | 143 | 89 | 4 | 228 | 482 | 198 | 106 | 6 | 172 | |
| DOBF | 464 | 125 | 88 | 9 | 242 | 482 | 176 | 88 | 4 | 214 |
| TransCoder-ST | 464 | 62 | 79 | 5 | 318 | 482 | 153 | 48 | 1 | 280 |
| Seq2Seq+Attn. | 464 | 225 | 97 | 8 | 134 | 482 | 395 | 77 | 3 | 7 |
| Transformer | 464 | 269 | 170 | 13 | 12 | 482 | 386 | 83 | 5 | 8 |
| CodeGPT | 464 | 158 | 103 | 9 | 194 | 482 | 254 | 74 | 4 | 150 |
| CodeGPT-adapted | 464 | 145 | 78 | 7 | 234 | 482 | 242 | 64 | 3 | 173 |
| CodeBERT | 464 | 253 | 149 | 14 | 48 | 482 | 358 | 94 | 10 | 20 |
| GraphCodeBERT | 464 | 247 | 118 | 14 | 85 | 482 | 352 | 80 | 2 | 48 |
| PLBART | 464 | 54 | 91 | 4 | 315 | 482 | 144 | 58 | 3 | 277 |
| CodeT5 | 464 | 75 | 97 | 9 | 283 | 482 | 172 | 51 | 5 | 254 |
| TransCoder-ST | 464 | 58 | 79 | 5 | 322 | 482 | 127 | 51 | 5 | 299 |
| Table 4: Breakdown of the success, error, failure, and timeout in the execution based evaluation. #Tests indicates the | | | | | | | | | | |
Table 4: Breakdown of the success, error, failure, and timeout in the execution based evaluation. \#Tests indicates the number of evaluation examples with unit tests. While success indicates the number of examples passing all the unit tests, Failure indicates number of examples that did not at least one of the unit tests. The Error count indicates number of examples with compilation and runtime errors.
| Models | Java to Python | Python to Java | | | | |
|-----------------|------------------|------------------|--------|-----|-------|------|
| #Tests | CE | RE | #Tests | CE | RE | |
| TransCoder | 464 | 0% | 30.8% | 482 | 31.3% | 9.8% |
| DOBF | 464 | 0% | 26.9% | 482 | 27.4% | 9.1% |
| TransCoder-ST | 464 | 0% | 13.4% | 482 | 24.9% | 6.9% |
| Seq2Seq+Attn. | 464 | 0% | 48.5% | 482 | 80.3% | 1.5% |
| Transformer | 464 | 0% | 58.0% | 482 | 78.0% | 2.1% |
| CodeGPT | 464 | 0% | 34.1% | 482 | 49.4% | 2.7% |
| CodeGPT-adapted | 464 | 0% | 31.3% | 482 | 46.7% | 3.3% |
| CodeBERT | 464 | 0% | 54.5% | 482 | 71.4% | 2.7% |
| GraphCodeBERT | 464 | 0% | 53.2% | 482 | 71.8% | 1.2% |
| PLBART | 464 | 0% | 11.6% | 482 | 25.3% | 4.6% |
| CodeT5 | 464 | 0% | 16.2% | 482 | 32.4% | 3.1% |
| TransCoder-ST | 464 | 0% | 12.5% | 482 | 22.6% | 3.7% |
Table 5: Compilation error (CE) vs. runtime error (RE) percentage in function translation.
## Input Program In Java
1 **c l a s s** GFG { 2 s t a t i c **i n t** cntSquares ( **i n t** n ) { 3 **i n t** squares = 0 ;
4 f o r ( **i n t** i = 1 ; i <= n ; i ++ ) {
5 squares += Math . pow ( i , 2 ) ; 6 }
7 **return** squares ;
8 }
9 p u b l i c s t a t i c **void** main ( String args [ ] ) {
10 **i n t** n = 4 ;
11 System . out . print ( cntSquares ( 4 ) ) ;
12 }
13 }
JAVA TO PYTHON TRANSLATION
1 def cntSquares ( n ) :
2 squares = 0 3 f o r i in **range** ( 1 , n + 1 ) :
4 squares += pow ( i , 2 )
5 **return** squares 6 7 n = 4 8 **p r i n t** ( cntSquares ( 4 ) )
INPUT PROGRAM IN JAVA
1 **import** java . util . Scanner ;
2 3 p u b l i c **c l a s s** Main {
4 p u b l i c s t a t i c **void** main ( String args [ ] ) **throws** Exception {
5 Scanner sc = new Scanner ( System . in ) ;
6 String s = sc . next ( ) ;
7 **i n t** L = s . length ( ) ; 8 **i n t** same = 0 ; 9 i f ( s . charAt ( 0 ) == s . charAt ( L - 1 ) ) {
10 same = 1 ;
11 }
12 i f ( ( L + same ) % 2 == 0 ) {
13 System . out . println ( "Second" ) ; 14 }
15 **e l s e** {
16 System . out . println ( "First" ) ;
17 }
18 } 19 }
JAVA TO PYTHON TRANSLATION
1 s = **input** ( ) 2 i f s [ 0 ] == s [ - 1 ] : 3 **p r i n t** ( "First" ) 4 **e l i f** s [ 0 ] == s [ - 1 ] : 5 **p r i n t** ( "Second" )
6 **e l s e** :
7 **p r i n t** ( "First" )
Figure 1: Qualitative examples of program translation from Java to Python by PLBART.
## Input Program In Python
1 S = input ( )
2 i f S [ 0 ] == S [ - 1 ] : 3 i f len ( S ) % 2 == 0 :
4 print ( "First" )
5 **e l s e** :
6 print ( "Second" )
7 **e l s e** : 8 i f len ( S ) % 2 == 1 :
9 print ( "First" )
10 **e l s e** :
11 print ( "Second" )
PYTHON TO JAVA TRANSLATION
1 **import** java . io . OutputStream ;
2 **import** java . io . IOException ; 3 **import** java . io . InputStream ; 4 **import** java . io . PrintWriter ;
5 **import** java . util . InputMismatchException ; 6 **import** java . io . IOException ; 7 **import** java . io . InputStream ;
8 **import** java . io . InputStream ;
9 10 public **c l a s s** Main {
11 public static **void** main ( String [ ] args ) { 12 InputStream inputStream = System . in ;
13 OutputStream outputStream = System . out ;
14 InputReader in = new InputReader ( inputStream ) ;
15 PrintWriter out = new PrintWriter ( outputStream ) ;
16 Task solver = new Task ( ) ;
17 solver . solve ( 1 , in , out ) ;
18 out . close ( ) ;
19 }
20 static **c l a s s** Task {
21 public **void** solve ( **i n t** testNumber , InputReader in , PrintWriter out ) { 22 String s = in . **next** ( ) ; 23 i f ( s . length ( ) % 2 == 0 ) {
24 out . println ( "First" ) ; 25 }
26 **e l s e** {
27 out . println ( "Second" ) ; 28 }
29 }
30 }
31 static **c l a s s** InputReader {
32 public BufferedReader reader ; 33 public StringTokenizer tokenizer ;
34 public InputReader ( InputStream stream ) {
35 reader = new BufferedReader ( new InputStreamReader ( stream ) , 32768 ) ; 36 tokenizer = null ;
37 }
38 public String **next** ( ) { 39 **while** ( tokenizer == null || ! tokenizer . hasMoreTokens ( ) ) { 40 **t r y** {
41 tokenizer = new StringTokenizer ( reader . readLine ( ) ) ;
42 }
43 catch ( IOException e ) { 44 throw new InputMismatchException ( ) ;
45 }
46 }
47 **return** tokenizer . nextToken ( ) ;
48 }
49 public **i n t** nextInt ( ) { 50 **return** Integer . parseInt ( **next** ( ) ) ;
51 } 52 }
53 }
Figure 2: A qualitative examples of program translation from Python to Java by PLBART.
## Input Function In Java
1 s t a t i c **i n t** smallest ( i n t x , i n t y , **i n t** z ) { 2 i f ( ( y / x ) != 1 ) **return** ( ( y / z ) != 1 ) ? y : z ;
3 **return** ( ( x / z ) != 1 ) ? x : z ;
4 }
TRANSLATION IN PYTHON BY PLBART
1 def smallest ( x , y , z ) : 2 i f ( ( y // x ) != 1 ) : 3 **return** y 4 i f ( ( y // z ) != 1 ) : 5 **return** x 6 **return** z INPUT FUNCTION IN JAVA
1 s t a t i c **i n t** maxvolume ( **i n t** s ) { 2 **i n t** length = s / 3 ; s -= length ;
3 **i n t** breadth = s / 2 ;
4 **i n t** height = s - breadth ;
5 **return** length * breadth * height ;
6 }
TRANSLATION IN PYTHON BY PLBART
1 def maxvolume ( s ) :
2 length = s // 3 3 s -= length 4 breadth = s // 2 5 height = s - breadth 6 **return** length * breadth * height
## Input Function In Java
1 s t a t i c **void** printClosest ( **i n t** arr [ ] , i n t n , **i n t** x ) { 2 **i n t** res_l = 0 , res_r = 0 ; 3 **i n t** l = 0 , r = n - 1 , diff = Integer . MAX_VALUE ;
4 **while** ( r > l ) {
5 i f ( Math . abs ( arr [ l ] + arr [ r ] - x ) < diff ) {
6 res_l = l ;
7 res_r = r ; 8 diff = Math . abs ( arr [ l ] + arr [ r ] - x ) ;
9 }
10 i f ( arr [ l ] + arr [ r ] > x ) r -- ; 11 **e l s e** l ++ ;
12 }
13 System . out . println ( " The closest pair is " + arr [ res_l ] + " and " + arr
[ res_r ] ) ;
14 }
TRANSLATION IN PYTHON BY PLBART
1 def printClosest ( arr , n , x ) :
2 res_l = 0 3 res_r = 0 4 l = 0 5 r = n - 1 6 diff = sys . maxsize NEW_LINE **while** ( r > l ) :
7 i f ( abs ( arr [ l ] + arr [ r ] - x ) < diff ) :
8 res_l = l 9 res_r = r 10 diff = abs ( arr [ l ] + arr [ r ] - x ) 11 i f ( arr [ l ] + arr [ r ] > x ) :
12 r -= 1 13 **e l s e** :
14 l += 1 15 **p r i n t** ( " The closest pair is " , arr [ res_l ] , " and " , arr [ res_r ] )
Figure 3: Qualitative examples of function translation from Java to Python by PLBART.
INPUT FUNCTION IN PYTHON
1 def minimum_cost ( a , n ) :
2 mn = sys . maxsize 3 sum = 0 4 f o r i in **range** ( n ) :
5 mn = min ( a [ i ] , mn ) 6 sum += a [ i ]
7 return mn * ( sum - mn )
TRANSLATION IN JAVA BY PLBART
1 s t a t i c **i n t** minimum_cost ( **i n t** a [ ] , **i n t** n ) { 2 **i n t** mn = Integer . MAX_VALUE ;
3 **i n t** sum = 0 ; 4 f o r ( **i n t** i = 0 ; i < n ; i ++ ) {
5 mn = Math . min ( a [ i ] , mn ) ;
6 sum += a [ i ] ;
7 }
8 **return** mn * ( sum - mn ) ;
9 }
INPUT FUNCTION IN PYTHON
1 def nextPowerOf2 ( n ) :
2 p = 1 3 i f ( n and not ( n & ( n - 1 ) ) ) : 4 **return** n 5 **while** ( p < n ) :
6 p <<= 1 7 **return** p TRANSLATION IN JAVA BY PLBART
1 s t a t i c **i n t** nextPowerOf2 ( **i n t** n ) {
2 **i n t** p = 1 ; 3 i f ( n && ! ( n & ( n - 1 ) ) ) 4 **return** n ; 5 **while** ( p < n )
6 p <<= 1 ;
7 **return** p ;
8 }
INPUT FUNCTION IN PYTHON
1 def isPerfectSquare ( n ) :
2 i = 1 3 the_sum = 0 4 **while** the_sum < n :
5 the_sum += i 6 i f the_sum == n : 7 **return** True 8 i += 2 9 **return** False TRANSLATION IN JAVA BY PLBART
1 s t a t i c **boolean** isPerfectSquare ( **i n t** n ) { 2 **i n t** i = 1 , the_sum = 0 ;
3 **while** ( the_sum < n ) {
4 the_sum += i ;
5 i f ( the_sum == n )
6 return **t ru e** ;
7 i += 2 ;
8 }
9 return **f a l s e** ;
10 }
Figure 4: Qualitative examples of function translation from Python to Java by PLBART.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section after conclusion
✓ A2. Did you discuss any potential risks of your work?
In the limitations section.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and introduction.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 2
✓ B1. Did you cite the creators of artifacts you used?
Section 2
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Ethics Statement
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Ethics Statement
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Ethics Statement
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 2
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 2
## C ✓ **Did You Run Computational Experiments?** Section 3 And Appendix
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? Ethics Statement The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix
✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
We will release the source code.
✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
We will release the source code.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
jelenic-etal-2023-dataset | On Dataset Transferability in Active Learning for Transformers | https://aclanthology.org/2023.findings-acl.144 | Active learning (AL) aims to reduce labeling costs by querying the examples most beneficial for model learning. While the effectiveness of AL for fine-tuning transformer-based pre-trained language models (PLMs) has been demonstrated, it is less clear to what extent the AL gains obtained with one model transfer to others. We consider the problem of transferability of actively acquired datasets in text classification and investigate whether AL gains persist when a dataset built using AL coupled with a specific PLM is used to train a different PLM. We link the AL dataset transferability to the similarity of instances queried by the different PLMs and show that AL methods with similar acquisition sequences produce highly transferable datasets regardless of the models used. Additionally, we show that the similarity of acquisition sequences is influenced more by the choice of the AL method than the choice of the model. | # On Dataset Transferability In Active Learning For Transformers
Fran Jelenic Josip Juki ´ **c Nina Drobac Jan Šnajder** ´
University of Zagreb, Faculty of Electrical Engineering and Computing, Croatia Text Analysis and Knowledge Engineering Lab
{fran.jelenic, josip.jukic, nina.drobac, jan.snajder}@fer.hr
## Abstract
Active learning (AL) aims to reduce labeling costs by querying the examples most beneficial for model learning. While the effectiveness of AL for fine-tuning transformer-based pre-trained language models (PLMs) has been demonstrated, it is less clear to what extent the AL gains obtained with one model transfer to others. We consider the problem of transferability of actively acquired datasets in text classification and investigate whether AL gains persist when a dataset built using AL coupled with a specific PLM is used to train a different PLM. We link the AL dataset transferability to the similarity of instances queried by the different PLMs and show that AL methods with similar acquisition sequences produce highly transferable datasets regardless of the models used. Additionally, we show that the similarity of acquisition sequences is influenced more by the choice of the AL method than the choice of the model.
## 1 Introduction
Pre-trained language models (PLMs) - large overparameterized models based on the transformer architecture (Vaswani et al., 2017) and trained on large corpora - are the leading paradigm in modern NLP, yielding state-of-the-art results on a wide range of NLP tasks. However, large models require large amounts of data. *Active learning* (AL; Settles, 2009) addresses the data bottleneck problem by improving data labeling efficiency. It employs human-in-the-loop labeling with the model iteratively selecting data points most informative for labeling. Recent work has demonstrated the effectiveness of AL for fine-tuning PLMs (Dor et al.,
2020; Grießhaber et al., 2020; Margatina et al.,
2022; Yuan et al., 2020; Shelmanov et al., 2021).
While AL may considerably reduce model development costs, it also potentially limits the scope of use of the actively acquired datasets. Since data sampling in AL is guided by the inductive bias of the acquisition model, the dataset will typically not represent the original population's distribution (Attenberg and Provost, 2011). This is troublesome if one wishes to use the actively acquired dataset to train a different model (*consumer model*) from the one used for AL (*acquisition model*). If the two models' inductive biases differ, the AL gains can cancel or even revert: the consumer model may perform worse when trained on the actively acquired dataset than on a randomly sampled one. However, the robustness of the actively acquired dataset to the choice of the consumer model is obviously highly desirable, as the acquisition model may become unavailable or dated. The latter is common in NLP,
where new and better models are being developed faster than new datasets. However, most AL studies use the same acquisition and consumer models, and dataset transferability is seldom mentioned in AL literature. A notable exception is the work of Lowell et al. (2018), who showed the unreliability of dataset transfer on standard NLP tasks.
In this work, we examine the problem of AL
dataset transferability for transformer-based PLMs and conduct a preliminary empirical study on text classification datasets. We first probe whether AL gains persist between different transformerbased PLMs, considering several AL methods and datasets. Observing that on most datasets, the transfer works in some cases but fails in others, we investigate the mechanisms underlying transferability.
We hypothesize a link between AL dataset transferability and how the acquisition and consumer models sample instances. To probe this, we introduce *acquisition sequence mismatch* (ASM) to characterize to what extent the two models differ in how they sample instances throughout AL iterations. We investigate how ASM affects dataset transferability and how ASM is affected by other AL variables. We show that, while it is generally reasonable to transfer actively acquired datasets between transformer-based PLMs, AL methods that retain low ASM produce more transferable datasets.
We also show that the choice of the AL method affects ASM more than the choice of models.
To summarize our contributions: we (1) conduct an empirical study on the transferability of actively acquired datasets between transformer-based PLMs, (2) propose a measure to quantify the mismatch in the acquisition sequences of AL models and link this to dataset transferability, and (3) analyze what design choices affect this mismatch. We provide code for the experiments1 with the hope that our results will encourage NLP practitioners to use AL when fine-tuning PLMs and motivate further research into the AL dataset's transferability.
## 2 Related Work
Although AL has been extensively studied for shallow and standard neural models (without pretraining), research on combining AL and PLMs lags behind. The initial studies showed promise, with AL methods outperforming random sampling for text classification (Dor et al., 2020; Grießhaber et al., 2020). The field is gradually gaining traction with studies demonstrating AL effectiveness even with simple uncertainty-based methods (Gonsior et al., 2022; Schröder et al., 2022). Moreover, PLMs open up new possibilities, such as complementing AL with model adaptation using unlabeled data (Yuan et al., 2020; Margatina et al., 2022).
While there is much research on AL for standard scenarios where the acquisition and consumer models are the same, there is little research on AL dataset transfer. Prabhu et al. (2019) demonstrated that combining uncertainty AL strategies with deep models produces sampled datasets with good sampling properties that have a large overlap with support vectors of SVM trained on the entire dataset. Likewise, Farquhar et al. (2021)
showed that deep neural models benefit from the sample bias induced by the acquisition model (the opposite is true for shallow models). However, the jury is still out on the effects of sample bias on the consumer model. The most prominent empirical study on AL transfer with neural models (Lowell et al., 2018) predates PLMs. Tsvigun et al. (2022)
focused on alleviating the effects of acquisitionconsumer mismatch in PLMs by using lightweight distilled models for acquisition and larger versions of the models as consumer models. Even though the study focuses on improving the transferability 1https://github.com/fjelenic/al-transfer of actively acquired datasets, the reasons behind the successful transfer are yet to be explored. An older study of AL dataset transferability for text classification and shallow models by Tomanek and Morik (2011) showed that transfer works in most cases but that neither sample nor model similarity explains transferability. Our study explores these characteristics for acquisition-consumer pairings of different PLMs.
## 3 Experimental Setup
Our study used four datasets, three models, and three AL methods (cf. Appendix B for details). The datasets we used are Subjectivity (**SUBJ**; Pang and Lee, 2004), CoLA (**COLA**; Warstadt et al., 2018),
AG-News (AGN; Zhang et al., 2015), and TREC
(**TREC**; Li and Roth, 2002)). The three transformer models we used are BERT (Devlin et al., 2018),
RoBERTa (Liu et al., 2019), and ELECTRA (Clark et al., 2020). The AL methods we considered are entropy (ENT; Settles, 2009), core-set (CS; Sener and Savarese, 2017), and BADGE (BA; Ash et al.,
2019)). This gives 108 AL configurations (72 transfer and 36 no-transfer configurations). Furthermore, we ran each configuration with 20 different warm-start sets to account for stochasticity. The AL
acquisition was simulated until the budget of 1500 labeled data points was exhausted (model performance for all datasets reached a plateau), labeling 50 data points per step.
We assessed dataset transferability using the difference in the area under the F1 curve of the model trained on the actively acquired dataset and the same model trained on a randomly sampled dataset
(∆AUC). We deem the AL dataset transfer successful if ∆AUC is not significantly less than zero and unsuccessful otherwise. We chose ∆AUC to make the notion of transferability independent of when the AL acquisition terminates. On the other hand, as terminating the AL after acquiring too few labeled data is unrealistic, we also report ∆AUC10, which is ∆AUC calculated with an offset of 10 iterations (500 labeled instances) of the AL loop.
Comparing ∆AUC10 to ∆AUC provides insights into how transferability changes through time.
## 4 Results 4.1 Dataset Transferability
We grouped the 108 AL configurations into three groups based on the sign of the mean ∆AUC value and the p-value of the difference between AUC
| − 10 | ∆0 10 | ∆ + 10 | Σ |
|--------|---------|----------|-----|
SUBJ 0 0 18 0 0 18 18
COLA 2 8 8 2 7 9 18
AGN 7 4 7 3 2 13 18 TREC 8 3 7 0 2 16 18
R→B 2 2 8 0 1 11 12 E→B 2 2 8 0 2 10 12
B→R 2 4 6 0 1 11 12
E→R 2 4 6 1 2 9 12 B→E 5 1 6 2 2 8 12
R→E 4 2 6 2 3 7 12
ENT 11 3 10 3 2 19 24 CS 4 10 10 2 6 16 24
BA 2 2 20 0 3 21 24
Σ 17 15 40 5 11 56
scores of transfer and random sampling:2 negative (∆AUC < 0 and p<.05), neutral (p≥.05), and positive (∆AUC ≥ 0 and p<.05) transfer. The notransfer AL configurations (where the acquisition and consumer models are the same) are generally successful (25 positive, 9 neutral, and 2 negative configurations as per ∆AUC; 33 positive, 2 neutral, and 1 negative configuration as per ∆AUC10).
The grouping of the remaining 72 configurations with AL dataset transfer is given in Table 1. We observe that the dataset, the acquisition-consumer model pairing, and the AL method all affect transfer success.
Evidently, transferability differs across datasets:
the transfer is always positive on SUBJ (which is the simplest task we considered in terms of the number of labels, the balance of classes, and the MDL task complexity measure; cf. Appendix B),
while most neutral transfers occur on COLA. A
more interesting picture emerges from the different acquisition-consumer model pairings and AL
methods. Most negative transfers are transfers to ELECTRA, while most neutral transfers are those to RoBERTa (perhaps due to it being optimized for robustness). On the other hand, transfer to BERT is positive in most cases, perhaps because BERT's pre-training regime is most similar to that of the other two models. Among the AL methods, entropy mostly makes the transfer negative, most neutral transfers occur with core-set, and BADGE
is the best choice for ensuring positive transferability. However, when looking at the later steps of the AL loop, differences between entropy and BADGE vanish, while the core-set lags slightly behind. Thus, ∆AUC tends to increase throughout the AL process, suggesting that increasing the amount of sampled data lowers the risk of unsuccessful transfer (cf. Appendix C for additional F1 scores analysis).
## 4.2 Acquisition Sequence Mismatch
We hypothesize there is a link between dataset transferability and the sequence in which data points are acquired for labeling by AL. In particular, we posit that dataset transferability will be successful when the acquisition sequence of the acquisition model does not differ from what the acquisition sequence of a consumer model would be if that model had access to the original dataset.
We introduce the *acquisition sequence mismatch*
(ASM) to measure the differences in acquisition sequences. To compute the ASM between two acquisition sequences, we pair the corresponding batches of the two sequences and average their pairwise differences. To measure the difference between a pair of batches, we take the average of the distances of best-matched examples between the batches. To account for the fact that AL methods may choose numerically different yet semantically similar data points, we measure the similarity of acquired instances in representation space. We use GloVe embeddings (Pennington et al., 2014)
as a common representation space independent of the choice of acquisition and consumer models and compute the cosine distance between averaged word embeddings. Lastly, we use the Hungarian algorithm (Kuhn, 1955) to construct a bipartite graph between two batches with distance-weighted edges to find the best-matching examples. Formally, we define ASM as follows:
$$:\sum_{t=1}^{T}\frac{1}{|B_{t}|}\operatorname*{min}_{S(B_{A}^{t}),S(B_{B}^{t})}\left(\sum_{i=1}^{|B_{t}|}d(x_{A}^{i},x_{B}^{i})\right)\quad(1)$$
1
T
$\mathbf{M}$
where T is the length of the sequence (the number of steps of the AL loop), S(Bt) is the set of all of the permutations of instances in the selected batch at step t, and d(x i A
, xiB
) is the cosine distance between instance representations from sequences A and B for a batch at position i of a given batch permutation. Intuitively, ASM assumes that both batches cater to the same informational need of the
![3_image_0.png](3_image_0.png)
model, so it calculates how much the instances that should carry out the same role in the batch differ.
Given a dataset, we hypothesize ASM may be affected by both the choice of the models and the choice of the AL method. Figure 1 shows that the distributions of ASM values are more alike when grouped by the AL methods than when grouped by the model pairings. To verify this observation, we conducted two Kruskal-Wallis H-tests for each dataset: in the first, populations were determined by the AL method, and we concluded that there was a significant difference in ASM (p<.05); in the second, the populations were determined by the model pairing, and there was no significant difference in ASM (p>.05). This suggests that the choice of AL method affects ASM more than the choice of acquisition-consumer model pairing.
## 4.3 Acquisition Mismatch Analysis
We found a statistically significant negative correlation between ∆AUC and ASM for each dataset.3 This supports our hypothesis that the lower the mismatch between acquisition sequences of the two models, the higher the transferability of a dataset from one model to the other. Besides ASM, we use another measure for analyzing dataset transferability: the difference between the dataset acquired with AL using the acquisition model and the dataset acquired with AL using the consumer model. We call this measure the *acquired dataset mismatch*
(ADM). Essentially, ADM computes the mismatch between samples similarly to ASM but between entire datasets obtained after the last sampling step.
Above we showed that the choice of the AL
method affects the ASM. Figure 2 shows that BADGE gives smaller ASM than the other two methods, whereas core-set gives larger ASM than the other two methods.4 However, the intriguing effect emerges when comparing the difference in batches through time and differences in the entire acquired datasets through time. In the early steps, BADGE gives the highest similarity of acquired datasets among the considered methods, which leads to it having the lowest ASM. However, in later steps, entropy dominates the similarity of acquired datasets.5It seems as if entropy acquired similar datasets for different models by taking those models through different sequences of the population distribution. This effect is seen in Table 1, where entropy is the worst method when using
∆AUC to measure transfer success while managing to parry BADGE when using ∆AUC10. The difference in transferability between entropy and BADGE completely vanishes when looking at the last step of the AL loop (cf. Appendix, Table 3).
3Spearman correlation coefficients are −0.11 for SUBJ,
−0.19 for COLA, −0.27 for AGN, and −0.38 for TREC, all significant with p<.05.
4Verified using three one-sided Wilcoxon signed-rank tests with p<.05 corrected for FWER.
5Verified using three one-sided Wilcoxon signed-rank tests on ADM with p<.05 corrected for FWER.
![4_image_0.png](4_image_0.png)
It is clear that entropy can produce transferable datasets, but it requires more time to do so.
We speculate that the effect of BADGE having the lowest ASM yet entropy achieving the lowest ADM could emerge due to the interaction between the AL method and the model's decision boundary. Namely, uncertainty AL methods sample data points on the decision boundary with high overlap with support vectors of the SVM trained on the whole dataset, as pointed out by Prabhu et al.
(2019). Since BADGE combines uncertainty and diversity, i.e., it samples data points the model is uncertain about for diverse reasons, it samples along the entire decision boundary at each step, and since decision boundaries of the models are roughly the same, so are the sampled data points. Entropy, on the other hand, relies solely on uncertainty. Due to its greedy nature, entropy tends to sample similar points because if one data point has high uncertainty, data points similar to it are also going to have high uncertainty (Zhdanov, 2019). This may manifest as sampling local patches of space on the decision boundary. Therefore, entropy may take more time to define the boundary than BADGE because it is forming the boundary from patches of space with the highest uncertainty at a given AL step rather than holistically sampling along the boundary at each step. Since the shape of the decision boundary is more similar between different models than the local interactions along the boundary, entropy has a higher batch mismatch in the early steps. However, once more data is labeled and the boundary becomes stable, both entropy and BADGE start to have a low batch mismatch, as seen in Figure 2. Since entropy is deterministic and never strays from the decision boundary, it ends up having a lower ADM than BADGE. Lastly, we believe that the core-set method has the highest ASM and ADM because it selects data based on diversity in the model's representation space, which is more model-specific and shares fewer properties between different models than the decision boundary. Further exploring the described interaction is a compelling direction for future work.
It may be that AL methods with different acquisition sequences end up acquiring a similar dataset and have high transferability, as in the case of entropy, an uncertainty-based acquisition function.
It is also possible that acquired datasets differ between models but that the transfer remains successful because it taps into some other essential aspect of a transferable dataset, as is the case with core-set, a diversity-based acquisition function. However, the best strategy to ensure dataset transferability appears to be a mixture of uncertainty and diversity, as provided by BADGE. This appears to minimize ASM between models, making datasets transferable regardless of the number of AL steps.
## 5 Conclusion
We presented an empirical study on the transferability of actively acquired text classification datasets for transformer-based PLMs. Our results indicate no significant risk in transferring datasets, especially for larger amounts of data. We also showed that transfer is largely successful when preserving the sequence and similarity of acquired instances between the models, which is what methods combining uncertainty and diversity acquisition functions seem to do. Transferability appears to differ considerably across datasets, so future work should examine what dataset characteristics are predictive of transfer success.
## Limitations
Our study revealed considerable differences in transferability and other measures we considered across different datasets. Nonetheless, the study focused on the differences in transferability arising from the choice of the models and the AL methods rather than the dataset. To eliminate confounding due to datasets, we grouped the results by datasets and analyzed each group separately. Despite this, the scope of our results is limited by the fact that all datasets used are in English and possibly contain their own biases.
Even though we showed that it could still be useful to transfer actively acquired datasets between transformer-based PLMs, it is important to keep in mind that actively acquired datasets are not representative of the original data distribution due to the sampling bias introduced by active learning.
## Acknowledgments
This research was supported by the AIDWAS
KK.01.2.1.02.0285 grant. We thank the anonymous reviewers for their insightful comments and suggestions.
## References
Jordan T Ash, Chicheng Zhang, Akshay Krishnamurthy, John Langford, and Alekh Agarwal. 2019. Deep batch active learning by diverse, uncertain gradient lower bounds. *arXiv preprint arXiv:1906.03671*.
Josh Attenberg and Foster Provost. 2011. Inactive learning? difficulties employing active learning in practice.
ACM SIGKDD Explorations Newsletter, 12(2):36–
41.
Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. ELECTRA: Pretraining text encoders as discriminators rather than generators. *arXiv preprint arXiv:2003.10555*.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.
Liat Ein Dor, Alon Halfon, Ariel Gera, Eyal Shnarch, Lena Dankin, Leshem Choshen, Marina Danilevsky, Ranit Aharonov, Yoav Katz, and Noam Slonim. 2020.
Active learning for BERT: an empirical study. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 7949–7962.
Sebastian Farquhar, Yarin Gal, and Tom Rainforth. 2021.
On statistical bias in active learning: How and when to fix it. *arXiv preprint arXiv:2101.11665*.
Julius Gonsior, Christian Falkenberg, Silvio Magino, Anja Reusch, Maik Thiele, and Wolfgang Lehner.
2022. To softmax, or not to softmax: that is the question when applying active learning for transformer models. *arXiv preprint arXiv:2210.03005*.
Daniel Grießhaber, Johannes Maucher, and Ngoc Thang Vu. 2020. Fine-tuning BERT for low-resource natural language understanding via active learning. *arXiv* preprint arXiv:2012.02462.
Harold W Kuhn. 1955. The hungarian method for the assignment problem. *Naval research logistics quarterly*, 2(1-2):83–97.
Xin Li and Dan Roth. 2002. Learning question classifiers. In COLING 2002: The 19th International Conference on Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
RoBERTa: A robustly optimized BERT pretraining approach. *arXiv preprint arXiv:1907.11692*.
David Lowell, Zachary C Lipton, and Byron C Wallace.
2018. Practical obstacles to deploying active learning.
arXiv preprint arXiv:1807.04801.
Katerina Margatina, Loïc Barrault, and Nikolaos Aletras. 2022. On the importance of effectively adapting pretrained language models for active learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2:
Short Papers), pages 825–836.
Bo Pang and Lillian Lee. 2004. A sentimental education:
Sentiment analysis using subjectivity summarization based on minimum cuts. In *Proceedings of the ACL*.
Jeffrey Pennington, Richard Socher, and Christopher D.
Manning. 2014. GloVe: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543.
Ethan Perez, Douwe Kiela, and Kyunghyun Cho. 2021.
Rissanen data analysis: Examining dataset characteristics via description length. In International Conference on Machine Learning, pages 8500–8513.
PMLR.
Ameya Prabhu, Charles Dognin, and Maneesh Singh.
2019. Sampling bias in deep active classification: An empirical study. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),
pages 4058–4068.
Christopher Schröder, Andreas Niekler, and Martin Potthast. 2022. Revisiting uncertainty-based query strategies for active learning with transformers. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2194–2203.
Ozan Sener and Silvio Savarese. 2017. Active learning for convolutional neural networks: A core-set approach. *arXiv preprint arXiv:1708.00489*.
Burr Settles. 2009. Active learning literature survey.
Computer sciences technical report.
Artem Shelmanov, Dmitri Puzyrev, Lyubov Kupriyanova, Denis Belyakov, Daniil Larionov, Nikita Khromov, Olga Kozlova, Ekaterina Artemova, Dmitry V Dylov, and Alexander Panchenko. 2021.
Active learning for sequence tagging with deep pretrained models and bayesian uncertainty estimates.
arXiv preprint arXiv:2101.08133.
Katrin Tomanek and Katherina Morik. 2011. Inspecting sample reusability for active learning. In Active Learning and Experimental Design workshop In conjunction with AISTATS 2010, pages 169–181. JMLR
Workshop and Conference Proceedings.
Akim Tsvigun, Artem Shelmanov, Gleb Kuzmin, Leonid Sanochkin, Daniil Larionov, Gleb Gusev, Manvel Avetisian, and Leonid Zhukov. 2022. Towards computationally feasible deep active learning.
In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1198–1218.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing* systems, 30.
Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. 2018. Neural network acceptability judgments.
arXiv preprint arXiv:1805.12471.
Michelle Yuan, Hsuan-Tien Lin, and Jordan BoydGraber. 2020. Cold-start active learning through self-supervised language modeling. arXiv preprint arXiv:2010.09535.
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015.
Character-level convolutional networks for text classification. *Advances in neural information processing* systems, 28.
Fedor Zhdanov. 2019. Diverse mini-batch active learning. *arXiv preprint arXiv:1901.05954*.
| Train | Test | # Labels | NLE | MDL | |
|---------|--------|------------|-------|-------|------|
| SUBJ | 8000 | 2000 | 2 | 1.00 | 0.30 |
| COLA | 8551 | 1043 | 2 | 0.88 | 1.00 |
| AGN | 20000∗ | 7600 | 4 | 1.00 | 0.56 |
| TREC | 5452 | 500 | 6 | 0.92 | 0.34 |
## A Reproducibility
We conducted our experiments on 4× AMD Ryzen Threadripper 3970X 32-Core Processors and 4×
NVIDIA GeForce RTX 3090 GPUs with 24GB of RAM, which took roughly one week. We used PyTorch version 1.12.1, Transformers version 4.21.3, and CUDA 11.4.
## B Experimental Design Choices B.1 Datasets
The datasets used in this paper are standard benchmarks in NLP for text classification. We chose these datasets to represent different attributes: the number of labels (binary or multi-class classification) and the balancing of the labels (balanced and imbalanced classes). The diversity of the dataset characteristics can give an insight into the impact of these attributes on dataset transferability. We present dataset statistics in Table 2. There we also show *minimum description length* (MDL) (Perez et al., 2021) of each dataset, which can be interpreted as the complexity of the task. Subjectivity: Movie-review data with reviews labeled as either subjective or objective. This is a balanced dataset with binary labels.
CoLA: The Corpus of Linguistic Acceptability is a dataset containing sentences labeled as grammatical or not. This is an imbalanced dataset with binary labels.
AG-News: Corpus of news articles annotated by the article's topic (World, Sports, Business, Sci/Tech). The dataset was created by subsampling the corpus to the size of 20,000 examples. This is a balanced dataset with four classes.
TREC: The dataset contains questions labeled with the type of subject of the question. This is an imbalanced dataset with six classes.
## B.2 Models
We picked the models that share the common architecture; they are all transformer-based PLMs but differ in pre-training data and pre-training objectives. This choice of models enables us to analyze the impact of different pre-training design choices on dataset transferability. All models were trained using ADAM optimizer with a learning rate of 2 · 10−5and batch size of 64 for five epochs for both acquisition and evaluation phases.
BERT: One of the first and most popular transformer-based pre-trained language models.
The model was pre-trained using a generative masked language modeling objective. This model has 12 layers, a hidden state size of 768, and 12 heads with 110M parameters in total.
RoBERTa: A model with the same architecture and pre-training objective as BERT but trained on more data and with optimized hyperparameters to make the model more robust. This model has 12 layers, a hidden state size of 768, and 12 heads with 125M parameters in total.
ELECTRA: It uses the same architecture and pretraining data as BERT but with discriminative instead of generative pre-training objectives. Instead of masking some tokens in text and having to guess the identity of masked tokens as BERT does, the generative pre-training objective corrupts some tokens by replacing them with plausible alternatives, and then the model has to decide for each token whether it is the original token or the replaced one.
This model has 12 layers, a hidden state size of 768, and 12 heads with 110M parameters in total.
## B.3 Al Methods
AL methods used to select the most informative data points are divided into two types of heuristics: uncertainty and diversity. Methods using uncertainty as a heuristic select data based on some measure of the model's uncertainty. The intuition behind the uncertainty methods is that the more uncertain the model is about a data point, the more it can learn from knowing its label. In comparison, diversity-based methods try to represent the input space (which is not always the same as the input population) as accurately as possible with as few data points as possible. AL methods can combine those two heuristics to select a group of data points the model is uncertain about for different reasons.
The choice of the AL methods used in this experiment was motivated by the type of heuristic
| F 1 | F |
|-------|-----|
| − | 0 |
SUBJ 0 1 17 18
COLA 3 11 4 18
AGN 1 4 13 18 TREC 0 2 16 18
R→B 0 3 9 12 E→B 0 4 8 12
B→R 1 1 10 12
E→R 1 2 9 12 B→E 1 3 8 12
R→E 1 5 6 12
ENT 1 5 18 24 CS 2 8 14 24
BA 1 5 18 24
4 18 50
(uncertainty vs. diversity) they used for sampling.
These methods allow us to analyze the impact of the choice of heuristic on the success of dataset transfer in AL.
Entropy: An uncertainty-based method that selects data points with maximal information entropy of their posterior class distribution.
Core-set: This diversity-based method selects data points that best cover the representation space.
BADGE: A method that combines uncertainty and diversity by using k-MEANS++ algorithm on the would-be gradients of the models' last layer for the data points if their most probable labels were their actual labels.
## C Experiment Runs
This section presents more results from our experiment to complement the already presented results.
Table 3 shows transferability for different combinations in the fashion of Table 1. However, instead of measuring transferability with ∆AUC this table uses the F1 score at the end of the AL loop (1500 labeled instances). To illustrate the success of regular AL (without the transfer), we present Table 4.
That table shows the same information as Table 1 and Table 3 but for situations where acquisition and consumer models are the same. Lastly, we present the learning curves of all of the runs of the experiment in Figure 3 for Subjectivity, Figure 4 for CoLA, Figure 5 for AG-News, and Figure 6 for TREC dataset.
![9_image_0.png](9_image_0.png)
![9_image_1.png](9_image_1.png)
![10_image_0.png](10_image_0.png)
![11_image_0.png](11_image_0.png)
![11_image_1.png](11_image_1.png)
![11_image_2.png](11_image_2.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
(Not numbered) Limitations
✓ A2. Did you discuss any potential risks of your work?
(Not numbered) Limitations
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract, 1 Introduction
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 3 Experimental Setup
✓ B1. Did you cite the creators of artifacts you used?
3 Experimental Setup
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
All are standard NLP datasets and models.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
All are standard NLP datasets and models.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
All are standard NLP datasets and models.
✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
All are standard NLP datasets and models.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
(Appendix) B Experimental design choices
## C ✓ **Did You Run Computational Experiments?** 3 Experimental Setup, 4 Results
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? No response.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
3 Experimental Setup, (Appendix) B Experimental design choices
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
4 Results
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
(Appendix) A Reproducibility, (Appendix) B Experimental design choices
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
weber-etal-2023-structured | Structured Persuasive Writing Support in Legal Education: A Model and Tool for {G}erman Legal Case Solutions | https://aclanthology.org/2023.findings-acl.145 | We present an annotation approach for capturing structured components and arguments inlegal case solutions of German students. Based on the appraisal style, which dictates the structured way of persuasive writing in German law, we propose an annotation scheme with annotation guidelines that identify structured writing in legal case solutions. We conducted an annotation study with two annotators and annotated legal case solutions to capture the structures of a persuasive legal text. Based on our dataset, we trained three transformer-based models to show that the annotated components can be successfully predicted, e.g. to provide users with writing assistance for legal texts. We evaluated a writing support system in which our models were integrated in an online experiment with law students and found positive learning success and users{'} perceptions. Finally, we present our freely available corpus of 413 law student case studies to support the development of intelligent writing support systems. | # Structured Persuasive Writing Support In Legal Education: A Model And Tool For German Legal Case Solutions
Florian Weber University of Kassel / GER
[email protected] Seyed Parsa Neshaei EPFL / CH
[email protected]
## Abstract
We present an annotation approach for capturing structured components and arguments in legal case solutions of German students. Based on the appraisal style, which dictates the structured way of persuasive writing in German law, we propose an annotation scheme with annotation guidelines that identify structured writing in legal case solutions. We conducted an annotation study with two annotators and annotated legal case solutions to capture the structures of a persuasive legal text. Based on our dataset, we trained three transformer-based models to show that the annotated components can be successfully predicted, e.g. to provide users with writing assistance for legal texts. We evaluated a writing support system in which our models were integrated in an online experiment with law students and found positive learning success and users' perceptions. Finally, we present our freely available corpus of 413 law student case studies to support the development of intelligent writing support systems.
## 1 Introduction
Writing persuasive texts plays a major role in law education (Kosse and Butle Ritchie, 2003). As a part of their training for learning how to write legal opinions, law students are typically challenged to solve legal problems or case studies in the form of persuasive case solutions (Enqvist-Jensen et al.,
2017). To write a persuasive legal case solution, students in German law courses must be able to follow the structural requirements of the appraisal style (see Figure 1) (Stuckenberg, 2020) and justify their derived conclusions argumentatively via legal claims and premises (see Section 2). To learn a skill such as writing a persuasive case solution, individual feedback is important to the learning process
(Hattie and Timperley, 2007; Black and Wiliam, 2009). Individualized feedback for students during their writing or learning processes is lacking, particularly in the field of law. The characteris-
Thiemo Wambsganss EPFL / CH
[email protected] Matthias Söllner University of Kassel / GER
[email protected] tic large-scale learning scenarios in legal studies, which result in a low supervision ratio, are part of the reason for this. Organizational restrictions are another cause of the absence of personal feedback. For instance, there aren't enough lecturers who can assess students' case solutions (Henderson, 2003). At the same time, technical solutions that could help students improve their legal writing fall short of expectations (Beurskens, 2016).
One promising solution to better support students in their writing process and to overcome the limitations in law courses would be the use of writing support systems that could provide individualized feedback to students (Wambsganss et al., 2020a).
![0_image_0.png](0_image_0.png)
Figure 1: Annotation scheme for structured legal writing in a case solution based on the appraisal style: major claim, definition, subsumption and *conclusion*.
2296 To model legal persuasive writing with predictive algorithms, high-quality annotated corpora are needed. Pioneering work in argumentation mining has already focused on jurisprudence (Mochales and Moens, 2008; Mochales and Ieven, 2009),
since the structural approach of legal writing facilitates the unambiguous determination of argumentation components (Lytos et al., 2019; Urchs et al., 2020). Existing corpora in law range from classification of judgments (Urchs et al., 2020), to summarization of legal texts (Hachey and Grover, 2005) and to evaluation of jury verdicts (Poudyal et al., 2019). Corpora dealing with the annotation of structural elements in student written legal texts are not available. A few corpora are suitable for designing and developing systems to support persuasive writing (Stab and Gurevych, 2017b; Wambsganss et al., 2020b; Lawrence and Reed, 2019; Stab and Gurevych, 2014). However, these corpora are of limited use for modeling the structure of writing and argumentation in law, since persuasive writing in the legal domain follows a particular logic
(see Section 2) that is not represented by available corpora. Consequently, there is a lack of evaluated annotation schemes and linguistic corpora for training models that support users in legal writing.
Therefore, we propose a novel annotation scheme for persuasive student-written case solutions. We introduce a corpus of 413 student-written case solutions with 25,103 sentences that are annotated for the components of the appraisal style, arguments (legal claim and premises), the relations of the arguments, and the relations of distinct components of the appraisal style. We trained different types of models (e.g. BERT and DistilBERT) and compared their accuracy to analyze which model performs best. Finally, we embedded the three best performing transformer-based BERT models in a novel writing support system that provides individual feedback and recommendations in a writing scenario. The design of our writing support system is based on the theory of learning from errors
(Metcalfe, 2017) and aims to provide students with individual feedback on their errors during the writing process (Fazio and Marsh, 2009). We tested the systems in an online learning scenario with law students. The students were asked to use the system to write a case solution. We show promising results in terms of the students' understanding of the appraisal style and their perception of the system.
The participants perceive the system as useful and rate the system's feedback as accurate. Our analyzed results support that our presented corpus and the models are able to support students' learning effectively.
Our work makes five major contributions. First, we derive a novel modeling approach for a new data domain by developing an annotation scheme based on the theory of structured legal writing based on the appraisal style (Man, 2022; Stuckenberg, 2020). Second, we present an annotation study based on 100 student case solutions to show that annotation of student case solutions is accurately possible. Based on the annotation, we trained three transformer-based BERT models (Devlin et al., 2019) to demonstrate that the prediction of the annotated structures is possible with a certain accuracy. Fourth, we provide a corpus of 413 student case solutions in German collected in different law lectures. Finally, we show in an online experiment that the models can be used effectively in a writing support system. Therefore, we encourage further investigation into the enhancement of law students' persuasive structured writing and the development of writing support systems using NLP. This research aims to enhance students' skills regardless of their location, time constraints, or instructor availability.
## 2 Related Work
Persuasive Writing in Law Courses Classically, students are asked to solve legal problems or case studies in the form of persuasive case solutions
(Enqvist-Jensen et al., 2017). In these case solutions, students are forced to use specialized and highly concept-driven knowledge. The theoretical knowledge specializes more in the correct application of paragraphs and the setting of priorities in the case solution. In contrast, the concept-driven knowledge is largely composed of the concepts of writing case solutions in a structured way. To do this, students must follow established legal concepts. Among the most important concepts in German jurisprudence are the appraisal style and the judgment style, whereby the appraisal style is primarily important for legal education (Stuckenberg, 2020; Urchs et al., 2020). Since the term "*appraisal* style" is a peculiarity of the German legal language, there is no direct equivalent in English. We define the term appraisal style as "*the form and writing style of a legal opinion*" (Stuckenberg, 2020).
The appraisal style is used to solve complex legal problems. The four elements of appraisal style are briefly explained in Table 1 and supplemented by an example in Figure 2.
Corpora in the Legal Field Although law is a promising discipline for annotating the components of legal writing and arguments due to its fixed logical structure (Moens et al., 2007; Urchs et al., 2020), evaluated open-access corpora for law are rare (Reed, 2006; Mochales and Moens, 2011; Urchs et al., 2020). There are, however, some publicly accessible corpora. Hachey and Grover (2005)
present a corpus of 188 annotated English court opinions. To construct a system for automatic summarizing of court judgments, they annotated rhetorical status, significance, and linguistic markup.
Other annotated corpora deal explicitly with the annotation of argumentation structures in court decisions (Houy et al., 2013) or legal cases (MochalesPalau and Moens, 2007). Mochales-Palau and Moens (2007) present a corpus of English-language judicial cases gathered from the European Court of Human Rights (ECHR). They chose 55 papers at random, which included 25 court decisions and 29 admissibility reports. The texts were annotated and studied systematically in two layers (argumentative and non-argumentative sentences). A following study showed that the detection of argumentative sentences in court decisions is possible.
Work such as that of Walker et al. (2014) has focused on identifying successful and failed patterns of reasoning in U.S. Court decisions. Patterns of reasoning are identified and used to illustrate the difficulty of developing a type or annotation system for characterizing these patterns. The corpus is based on legal cases of vaccine-injury compensations. There are several German corpora in addition to the largely English-language corpora for recognizing decisions and legal cases. Urchs et al.
(2020) created a corpus based on Bavarian Court of Justice decisions. They discover argumentation structures in judgments using 200 court decisions.
Other research groups focused on the identification of arguments in the German Federal Constitutional Court's Decision Corpus (Houy et al., 2013) and the development of a German referent corpus comprised of articles from legal journals, decision texts, and norm texts (Gauer et al., 2016).
A number of corpora have previously been proposed in research to enhance students' structured and persuasive writing in real-world applications, including Stab and Gurevych (2017a) and Stab and Gurevych (2014). Stab and Gurevych (2014) produced a corpus based on student essays for building and implementing systems to promote persuasive writing for adaptive feedback using argumentation mining (AM) approaches. Further research uses the corpus as a model to annotate persuasive writings (Carlile et al., 2018) or construct a model for assessing persuasive essays (Ke et al., 2018). However, the existing literature does not adequately transfer corpora for structured writing or reasoning to other educational domains, like law or to other languages.
To summarize, we see that literature falls short of annotated corpora, which can be used to model components in student-written legal case solutions.
Without the availability of these corpora, the design of adaptive NLP-based applications for lawful writing is naturally hindered. To the best of our knowledge, there is only one approach by Urchs et al. (2020) that aims to detect the components of legal writing, but the approach focuses on court decisions and the judgment style. Therefore, we aim to address this literature gap by presenting and evaluating an annotation scheme as well as an annotated corpus built on student-written texts with the objective of developing an intelligent writing support system for students in law courses.
## 3 Construction Of The Corpus 3.1 Data Source
The data for our corpus were collected in a law courses at a German university. We compiled the corpus with the case solutions of law students who have written solutions to different legal problems
(four different case studies) from different areas of law. In total, we collected 413 legal case solutions, with a typical length of 55.07 sentences and 331.35 tokens per document.1 The case studies are mainly based on example cases from civil law and are oriented towards basic cases of Musielak and Hau (2005). Students solved the cases as a component of a comprehensive law lecture, utilizing them as a means of exam preparation. It is important to note that the quality of the 413 student-written case solutions may vary, as the students are not all at the same level of proficiency or understanding.
The data were collected in the mentioned lecture between 2020 and 2022. The course deals with the teaching of the basics of legal writing and the funda-1The data collection was conducted according to the ethical guidelines of our university.
mental knowledge of business law were introduced.
Accordingly, the course has dealt with essential basics that are also important for non-law students, such as business students. The data collected are thus relevant not only in the context of foundational legal education but also for many other Germanlanguage legal studies programs (e.g., law courses in the education of business students).
## 3.2 Annotation Scheme
The correct application of structured legal writing in the appraisal style is the basis for a persuasive legal opinion. In the following, the components of the legal writing structure, as well as its annotation, are explained. The structure consists of four components: *major claim, definition, subsumption*
(premise and legal claim), and *conclusion* (Sieckmann, 2020; Backer, 2009) (see Table 1).
| Components | Definition | | | | |
|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------|------|--------|-----|----|
| Major claim | The major claim explains the elements of the offense (fact) that are to be fulfilled. It raises a question or possible consequence. The question is discussed in the following steps and is finally answered in the conclusion. | | | | |
| Definition | The definition determines the constituent elements that must occur in the legal problem so that the case solution can come to a conclusion. The elements always depend on the question raised in the major claim. | | | | |
| Subsumption (premise and legal claim): | In the subsumption, | it is exam | | | |
| ined | to | what | extent | the | condi |
| tions (elements) of the definition are given. Here, the facts of the case are weighed against the preconditions from the definitions and the premises (facts). Legal consequences are drawn from the premises, so-called legal claims. | | | | | |
| Conclusion | The conclusion is the answer to the major claim. Thus, the case solution reaches a final result here. | | | | |
Structural Components in Legal Case Solutions A persuasive case solution in the appraisal style consists of four main components (see Table 1).
The appraisal style always starts with a *major claim*.
The *major claim* raises a question, explains the elements of the offense that are to be fulfilled, and is to be written in the subjunctive. *Definitions* define the elements to be fulfilled. The elements always depend on the question raised in the major claim.
Only essential passages of the law should be mentioned here; therefore, irrelevant passages should not be annotated. In the *subsumption*, we examine to what extent the conditions (elements) of the definition are given. Here, the facts of the case are weighed argumentatively. This weighing follows established models in argumentation theory
(Toulmin et al., 1984; Freeman, 2001). Thus, an argument comprises various elements, including a legal claim and at least one premise that either supports or challenges it. The purpose of the premise is to support the validity of a claim within the context of law by presenting factual statements, legal judgments, or the prevailing opinions of legal experts. It serves as a justification that makes the legal claim understandable. The *conclusion* is the answer to the question that was raised in the major claim. Thus, the case solution here comes to a final conclusion. The question formulated in the major claim is answered. A conclusion is always written in the indicative. Reasons are out of place here; they only belong in the definition or subsumption.
## Relations In Legal Case Solutions Apart From
the various components that make up the structure of a legal argument, there exist two essential connections between these components. The first pivotal link revolves around the dependence on the major claim and the subsequent conclusion. Every question or issue presented within the major claim must be addressed and resolved in the conclusion.
This connection ensures that the arguments presented in support or refutation of the major claim ultimately lead to a clear and definitive conclusion.
In other words, the conclusion should provide a resolution to the questions raised in the major claim, tying together the various premises and evidence presented throughout the argument. These relation is illustrated in Figure 2. The subsumption contains the second crucial connection. Here the argumentative elements legal claim and premise are weighed argumentatively. Premises are facts that lead to certain conclusions by the previously attached definition. As a result, the premises back up the legal claims made. More complex combinations of conclusions and premises are feasible. Several different premises can support a legal claim. Equally, a legal claim can be supported by only one premise
(see Figure 4 in the Appendix A). A premise might be formed from the facts of the case, past decisions, or the so-called majority view (the majority of legal scholars support a certain interpretation of a fact).
![4_image_0.png](4_image_0.png)
Since it is necessary, especially in jurisprudence, to support the findings obtained in subsumption in a comprehensible way, arguments are used in case solutions in the subsumption to convincingly support the legal conclusion drawn.
## 3.3 Annotation Process
Two native German speakers independently annotated the legal case solutions according to the components - *major claim, definition, subsumption* (premise and legal claim) and *conclusion* - ,
as well as for the argumentative relations according to the annotation guidelines we provided. The annotators were trained and educated in the legal domain. Our guidelines consist of thirteen pages.
In the guideline2, we precisely explain and define the components of legal argumentation, which scheme to use for annotation, and how to annotate the subsumption and its argumentative structures
(Tettinger, 1982; Backer, 2009; Sieckmann, 2020).
In addition, the guideline specifies that sentences failing to meet the criteria of expert opinion style should not be annotated. This decision is based on quality assurance considerations, considering that there may be variations in the texts within the dataset, and not all sentences are expected to align with the requirements of legal writing. Six team workshops were conducted with the annotators as well as a senior researcher to develop a common understanding of the annotation guidelines and to resolve potential disagreements. The senior researcher also has a background in law education, so he was able to assist with legal problems and issues as well. For annotation, we used the tool tagtog3. The tool offers the advantages of a graphical interface for marking up units of text and allows monitoring of Inter-Annotator Agreements (IAA)
through a dashboard of metrics. Furthermore, the tool has already been used successfully in similar projects (Wambsganss and Niklaus, 2022; Wambsganss et al., 2021). In three workshops, we analyzed the metrics inform of IAAs (e. g., percentage agreement, Kripp. α, Fleiss' Kappa see in Section 4) at 10, 30, and 70 case milestones and highlighted potential difficulties and errors in the guidelines.
After annotating the first 100 texts (these texts were each individually edited by both annotators), the two annotators individually annotated the remaining 313 texts. Accordingly, each annotator still annotated 157 texts (or 156 texts) individually. All conflicts in the annotation process were discussed and resolved with three senior researcher. The annotation process was continued on the basis of the agreement; if the agreement was too weak, it was discussed how annotation could be improved. In order to achieve consistency in the annotation process, certain annotation steps needed to be repeated based on this foundation. This was necessary to ensure that the dataset received uniform and accurate annotations throughout. By revisiting these annotation steps, any inconsistencies or discrepancies in the data set could be identified and addressed, allowing for a more reliable and cohesive annotation process. This iterative approach aimed to enhance the overall quality and reliability of the annotations, ultimately leading to a more consistent dataset. Figure 2 shows an example of an annotated part of a case solution with the corresponding components.
## 4 Corpus Analysis 4.1 Inter-Annotator Agreement
To evaluate the reliability of our annotated components and their relationships to each other, we followed the approaches of Stab and Gurevych (2014)
as well as Wambsganss and Niklaus (2022) to calculate three different Inter-Annotator Agreements
(IAA).
Components Percentage Kripp.
α
Fleiss' Kappa
Major claim 0.9845 0.9292 0.9292
Definition 0.9720 0.7878 0.7878
Subsumption 0.9622 0.6260 0.6259
Premise 0.9341 0.5590 0.5589
Legal claim 0.9560 0.4502 0.4502
Conclusion 0.9752 0.8836 0.8836
None 0.9026 0.8052 0.8432
Table 2: Inter-annotator agreement of legal component annotations.
Structural Components in Legal Case Solutions To annotate the components of a legal case solution, the annotators determined the individual components in sentences. If a sentence contains a component, it receives one of the labels. Otherwise, it receives the label *none*. Basically, a label can only be assigned to one sentence at a time. An exception is the label of the subsumption. The subsumption is the superior component of the legal claim and premise components. Accordingly, claims and premises must always be subsumptions, but not every subsumption must be a claim or a premise.
To represent this circumstance, we have decided to use three models (see Table 5). To evaluate the agreement between the annotators, we compute the percentage agreement p as well as the measures Krippendorff's α (Krippendorff, 1980) and Fleiss' Kappa (Fleiss, 1971). Table 2 illustrates the final resulting IAA values after 100 annotated case solutions. The percentage agreement divides the number of agreements by the label count (Meyer et al.,
2014). In order to evaluate the accuracy and reliability of the annotation, a thorough analysis was conducted on the individual values. The results revealed a high level of agreement for the major claim component, with a score of 0.9292, indicating a perfect agreement among the annotators. Similarly, the conclusion component demonstrated a perfect agreement level of 0.8836. However, when examining the premise, legal claim, and subsumption components, the agreement levels were relatively lower, falling below 0.67. Although these components exhibited moderate agreement, they still provided valuable insights for further refinement and clarification. On the other hand, the component related to definition displayed a substantial level of agreement, with a score of 0.7878 according to the
(Landis and Koch, 1977). This signifies a significant level of consistency and concurrence among the annotators regarding the definitions within the annotation process. By thoroughly analyzing these individual values, the assessment provided a comprehensive understanding of the effectiveness and reliability of the annotation, highlighting areas of strong agreement as well as identifying aspects that may require further attention and improvement.
The evaluation of Fleiss' Kappa comes to similar results. With a total agreement of 0.7751 (Krippendorff's α) and 0.7751 (Fleiss' Kappa), we draw the conclusion that it is consistently possible to annotate argumentative elements in student case solutions. The total agreements show, according to Landis and Koch (1977), a substantial agreement for Fleiss' Kappa and an acceptable agreement for Krippendorff's α (Batanovic et al. ´ , 2020). Relations in Legal Case Solutions To assess the reliability of relations, we examined all relations that were annotated in the dataset, i.e., all pairs of a major claim and a conclusion, as well as all pairs of a legal claim and a premise. In total, the markable elements include 3276 pairs, of which 1430 are annotated as legal claims and premises relations, while 2890 of the pairs are annotated as major claims and conclusion relations. We obtained an percentage IAA of 79.7% for the relations between the major claims and the conclusions. The percentage IAA between the claims and premises is 56% (Meyer et al., 2014). Due to this, we calculate the values for Krippendorff's α and Fleiss' Kappa (see Table 6 in the Appendix A). For the relation of major claims and conclusion, we obtained a substantial agreement (0.7750) for Fleiss' kappa (Landis and Koch, 1977) and an acceptable agreement (0.7813) for Krippendorff's α (Krippendorff, 2011; Batanovic et al. ´ , 2020; Krippendorff, 1980). The relationship between the legal claims and premises shows a fair agreement (0.3979) for Fleiss' kappa (Landis and Koch, 1977). We conclude that component relations and argumentative relations can be reliably annotated in legal case solutions. Nevertheless, it should be noted that the relational agreement between legal claims and premises according to Krippendorff (2011, 1980) is not acceptable. For the legal claim premise agreement, Fleis' kappa and the percentage agreement, however, indicate acceptable values.
## 4.1.1 Corpus Statistics
The final corpus comprises 413 case solutions written by students, covering four distinct case studies from the field of civil law. The case solutions are made up of 22,743 sentences totaling and 328,543 tokens (see Table 3). On average, each document has 55.07 sentences and 331.35 tokens. The distribution of the components can be taken from Table 4.
Other text fragments were detected as a component with no parameters ("None").
\begin{tabular}{c|c c c c} & total & mean & SD & min & max \\ \hline Sentences & 22,743 & 56.96 & 27.90 & 3 & 133 \\ Tokens & 328,543 & 676.16 & 331.35 & 32 & 1790 \\ \end{tabular}
Table 3: Overview of the distribution of sentences and tokens in the final corpus. Mean, standard deviation
(SD), min and max of sentences and tokens are indicated per document.
total mean SD min max
Major claim 3514 8.51 4.76 1 24
Definition 2288 5.54 2.96 1 17
Subsumption 2837 6.87 3.55 1 17
Premise 3304 8.00 4.77 1 27
Legal claim 1949 4.72 2.79 1 17
Conclusion 3531 8.55 4.37 1 23
## 5 Application Of The Corpus Modelling Components And Relations Of Legal
Case Solutions After constructing and analyzing our corpus, we leveraged the novel data to train different ML-models. The detection of the components and relations of legal case solutions is a multiclass classification task. The first task is to classify the single components of the appraisal style. Each sentence can be either a *major claim*, a *definition*, a subsumption, a conclusion or *non-component*. The second task is the classification of sentences that refer to the component *subsumption*. Each sentence that is a *subsumption* can be a *legal claim*, a premise, or a *none* within the subsumption. The third task is to classify the relations between the legal claims and the *premises* in a *subsumption*4.
To perform the classifications, we trained three different text models, we used BERT, RoBERTa, DistilBERT and DistilRoBERTa (see Table 7 in the Appendix A). 20% of the original dataset was used for evaluation, and the remaining 80% for training all the models. The BERT models performed with the highest accuracy, according to our analysis of the models (see Table 7). Therefore, we decided to use three BERT models for the classification. More information about the three models classifier can be found in Table 5 and more information about the performance per class (precision, recall and F1 score) can be found in Table 7. The pre-trained BERT model was acquired from HuggingFace (Wolf et al., 2020) and was subsequently trained using the training dataset. BERT can apply the knowledge it has gained from the initial dataset to the field of legal texts. To make the corpus suitable for sentence-based inputs, it was preprocessed using Spacy5. To train the model, we employed 8-piece batches with a maximum sequence length of 128. The BERT models used a warm-up ratio of 0.06, a learning rate of 4e5, and an Adam epsilon of 1e-8. For consistency and effectiveness, we adopted the hyperparameters from the pre-trained bert-base-german-cased model and the default parameters of the widely used SimpleTransformers Python library, which have proven successful in similar NLP tasks (Reimers and Gurevych, 2019).
Through extensive experiments, we determined that our models performed adequately with the default parameters. We acknowledge the significance of hyperparameter selection and firmly believe that our approach was effective for our specific task and dataset, as demonstrated by competitive results when compared to state-of-the-art models (Wambsganss and Niklaus, 2022).
Extension of the Models by Syntactic Rules To better meet the prediction requirements of a legal case solution, we have extended the model with syntactic rules. In the first step, we have added the identification of headings to the model. Thus, all sentences that begin with a Roman or Arabic numeral, e.g., 1, II, or with a letter (a., a), A), etc.) are marked as headings. This is important because the 4The model does not incorporate the relations between the major claim and the conclusion, as we determined that simple heuristics offered a superior approach to providing feedback to students regarding the connection between these two components.
5https://spacy.io
![7_image_0.png](7_image_0.png)
Model Description
![7_image_1.png](7_image_1.png)
![7_image_2.png](7_image_2.png)
headings have nothing to do with the specified components but are often used by students to structure their case solutions (see Figure 3). In the second step, we have defined a collection of abbreviations typically used in legal case solutions (see Table 8 in the Appendix A). These abbreviations played a vital role in our sentence-based models by ensuring that sentences would not be inappropriately segmented due to the presence of punctuation marks immediately following the abbreviations. By incorporating these abbreviations into the models, we maintained the integrity and coherence of the text, enabling more accurate and effective analysis of the legal case solutions.
Writing Support System for Persuasive Legal Case Solutions We have designed a writing support system in which we have implemented the three BERT models as feedback algorithms. The system is based on a user-centered design and fundamentally follows the theory of learning from errors (Metcalfe, 2017) and supports learners through scaffolding (Wong and Lim, 2019; Cagiltay, 2006).
Based on our three models, the system can provide individual feedback to students in German law courses during their writing process (Hattie and Timperley, 2007). Our writing support system is presented in Figure 3.
Evaluation in a Writing Task We evaluated our writing support system in an online experiment with 34 students who were enrolled in a law or business law program. We selected Prolific6as the experimental platform due to its consistent track record of delivering high response quality and a diverse range of samples, making it one of the most reliable platforms for conducting behavioral research (Peer et al., 2017). In the online exper6https://www.prolific.co iment, students were asked to solve a legal problem with the assistance of our writing support system. The students were randomly divided into two groups. The control group solved the problem with a reduced version of our system, receiving only static feedback on persuasive writing of case solutions. The treatment group solved the task with feedback based on the three BERT models. Apart from the feedback, the two versions of the system were identical in design to maintain consistency.
Both systems use a case study, useful paragraphs, and a checklist (see Figure 3). Before the students started the writing task, we conducted a pre-test to make sure that they had the same knowledge about legal writing. In the pre-test, students received two predefined case solutions and were required to rate them on a scale of 1–5. The scale indicates how qualitatively well the case solutions were written
(see Table 9 in the Appendix A).
After the interaction with the system, the participants were asked to complete a post-survey to measure the learning outcome between the two versions of the systems. In the post-test, students were again asked to rate two predefined cases (scale of 1–5) and explain why they rated the cases accordingly (assessment task). After the post-test, students were asked questions regarding their perception of the system (post-survey) (see Table 10 in the Appendix A). To evaluate the perception of the system, the students were asked questions about the technology acceptances of the system (Venkatesh and Bala, 2008) and the feedback accuracy of the system (Podsakoff and Farh, 1989). To test whether participants conscientiously completed the surveys, we included two control questions.
Results After analyzing that all participants were either law students or had already attended a law lecture in the course of their study program, had sufficient knowledge of German, and could answer both control questions, we obtained 29 valid responses (14 treatment group, 15 control group).
The participants had an average age of 25.83 (SD
= 5.66). Among them, there were 9 females, 16 males, and four individuals who identified as nonbinary. The participants spent between 30 and 55 minutes writing the case solution. The post-test required approximately 20 minutes to complete.
To assess participant responses, we employed a standardized 7-point Likert Scale commonly used in psychology. In this scale, the value of 4 represents neutrality. Values higher than 4 indicate positive outcomes and provide evidence of the system's effective design. The perceived usefulness has a value of 5.07 (SD = 1.14). Perceived usefulness (PU) shows if the users believe in an increased value by using the system (Davis, 1989). Perceived ease of use (PEOU) was also rated by participants above the neutral value of 4 (mean = 5.5, SD =
1.08). PEOU promotes intrinsic learner motivation and can lead to increased learning success (Barto et al., 2004). Finally, the participants also rated the intention to use (ITU) with a mean value of 5.43
(SD = 0.99), which is above the neutral value. The ITU indicates that the participants would use the corresponding system in a law course (Agarwal and Karahanna, 2000). We also analyzed feedback accuracy to determine whether participants perceive the feedback algorithm to be accurate (Podsakoff and Farh, 1989). The results show that participants rate the feedback with a mean value of 4.95 (SD =
0.74) hence 0.95 higher than the neutral value 4.
In addition to participants' perceptions of the system, we also measured the learning success of the system in a post-test. In the post-survey, we indicated that the participants performed significantly better than the control group at the assessment task
(p-value = 0.0404, W = 146.5)7. At the same time, we could show that there were no significant differences between the two groups in the pre-test, which excludes a bias by a control group with possibly more knowledge (see Table 11 in the Appendix A).
## 6 Conclusion
In this research, we offer a novel scheme for annotating structured elements and arguments in student-written case solutions. We used the scheme to create a corpus of 413 students' written legal case solutions, which consists of 25,103 sentences and 310,363 words. Furthermore, we present an annotation study based on 100 case solutions and show that the annotation of student-written case solutions is possible. Finally, we integrated and evaluated three trained BERT-models based on our corpus in a writing support system. In order to improve teaching in large-scale learning settings, we expect that integrating the provided annotation scheme and our argumentation corpus would encourage the creation of writing support systems.
## Limitations
Regarding our work, a few limitations should be mentioned. During the annotation process, conflicts between annotations occurred. All conflicts were discussed with a senior researcher and resolved in this way. Thus, we reached the best possible agreements, but still some agreements are lower than others (see Table 2). For example, legal claims and premises have a relatively large room for interpretation. Perfect results can only be expected by over-anchoring the annotators and weakening the guideline, which we have consciously avoided in our research. The comparison of the IAA with other works in the field of NLP from legal science is not possible, because the works either do not examine the components of the appraisal style or identify the components of the judgment style without the indication of the IAAs (Urchs et al., 2020). Compared to works that also annotated premises (Kripp.
α = 51.08%) and claims (Kripp. α = 55.49%) in business pitches (Wambsganss and Niklaus, 2022), our work provides comparable results with respect to the agreement of the Krippendorff α (premise
= 55.89%, legal claim = 45.02%). Further work shows similar results α = 44.1% (Park and Cardie, 2018). All in all, we can assume that both our components and our mounted relationships, achieve comparable or better results than comparable works
(e.g., Park and Cardie (2018)).
Although our model shows accurate values between 78% and 92% for predicting the components of the appraisal style, the values for determining legal claims and premises are lower (62% and 78%)
compared to the other values. However, they display reasonable values when compared to previous NLP studies. We can only compare our work to other related work in another domain because values for detecting legal claims and premises are not available in the NLP literature. For instance Wambsganss and Niklaus (2022), present an accuracy of 54.12% for their Long Short-Term Memory
(LSTM) model which detects claims and premises.
With the mentioned model the authors shows positive outcomes in supporting students' argumentative skills. Our models show similar or higher precision in comparison to the works of Poudyal et al. (2020) or Wambsganss and Niklaus (2022)
(see Table 7 in the Appendix A) and our post-test results also show significant learning outcomes (see Table 11 in the Appendix A). Although we can show a significant learning output, it must be noted that this is only short term. As a result, we intend to carry out additional field experiments in the future to establish the system's effectiveness over a more extended period and demonstrate long-term success.
As a third possible limitation, our models are limited to applying the appraisal style in German only. In the future, further efforts have to be made to investigate the transfer-ability or adaptation of our models to other countries with other legal systems and other languages. However, we assume that this is possible in principle, since some countries such as China now use the appraisal style in law teaching (Man, 2022) and countries such as the U.S. use at least similar approaches such as learning with case studies using the IRAC formula
(Metzler, 2002). Nevertheless, some adaptation of the models is needed, since the language and the legal form in each country have their own specificities.
## Ethics Consideration
It is important to acknowledge that this research was conducted by a diverse team of authors and annotators with backgrounds encompassing Western European, Asian, female, and male perspectives.
All data collection procedures strictly adhered to the ethical and privacy policies outlined by our university and the respective platforms involved. Prior to participation in surveys or interviews, all participants were duly informed about the data processing procedures and provided their explicit consent. To ensure privacy, all data were anonymized during analysis and could be deleted upon the request of participants.
In collaboration with our university, we conducted a comprehensive risk assessment and ethics review for this project. The findings from both investigations affirm that the project does not pose any risks to the students. Our models and the system utilizing them do not present any potential dependencies or hazards that could negatively impact students. It is worth noting that similar models have been trained in the past, aimed at enhancing students' argumentation skills among other objectives. Based on our current knowledge, no risks have been identified associated with the utilization of these models.
We are committed to upholding the highest ethical standards throughout our research, prioritizing the well-being of all participants involved.
## References
Ritu Agarwal and Elena Karahanna. 2000. Time flies when you're having fun: Cognitive absorption and beliefs about information technology usage. MIS
quarterly, 24(4):665–694.
Carsten Backer. 2009. Der Syllogismus als Grundstruktur des Juristischen Begrundens. *Rechtstheorie*,
40(3):404–424.
Andrew G Barto, Satinder Singh, Nuttapong Chentanez, et al. 2004. Intrinsically motivated learning of hierarchical collections of skills. In Proceedings of international conference on developmental learning, pages 112–119.
Vuk Batanovic, Miloš Cvetanovi ´ c, and Boško Nikoli ´ c.´
2020. A versatile framework for resource-limited sentiment articulation, annotation, and analysis of short texts. *PLoS One*, 15(11):1–30.
Michael Beurskens. 2016. Neue Spielräume durch Digitalisierung? E-Learning in der deutschen Rechtslehre. *ZDRW Zeitschrift für Didaktik der Rechtswissenschaft*, 3(1):1–17.
Paul Black and Dylan Wiliam. 2009. Developing the theory of formative assessment. *Educational Assessment, Evaluation and Accountability (formerly: Journal of Personnel Evaluation in Education)*, 21(1):5–
31.
Kursat Cagiltay. 2006. Scaffolding strategies in electronic performance support systems: Types and challenges. *Innovations in education and Teaching International*, 43(1):93–103.
Winston Carlile, Nishant Gurrapadi, Zixuan Ke, and Vincent Ng. 2018. Give me more feedback: Annotating argument persuasiveness and related attributes in student essays. In *Proceedings of the 56th Annual Meeting of the Association for Computational* Linguistics (Volume 1: Long Papers), pages 621–631.
Fred D Davis. 1989. Perceived usefulness, perceived ease of use, and user acceptance of information technology. *MIS quarterly*, 13(3):319–340.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Cecilie Enqvist-Jensen, Monika Nerland, and Ingvill Rasmussen. 2017. Maintaining doubt to keep problems open for exploration: An analysis of law students' collaborative work with case assignments.
Learning, culture and social interaction, 13:38–49.
Lisa K Fazio and Elizabeth J Marsh. 2009. Surprising feedback improves later memory. Psychonomic Bulletin & Review, 16(1):88–92.
Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. *Psychological bulletin*,
76(5):378–382.
James B Freeman. 2001. Argument structure and disciplinary perspective. *Argumentation*, 15(4):397–423.
Isabelle Gauer, Hanjo Hamann, and Friedemann Vogel. 2016. Das juristische Referenzkorpus (JuReko)-
Computergestützte Rechtslinguistik als empirischer Beitrag zu Gesetzgebung und Justiz. In *DHd 2016:*
Modellierung - Vernetzung - Visualisierung, page 129–131.
Ben Hachey and Claire Grover. 2005. Automatic legal text summarisation: experiments with summary structuring. In Proceedings of the 10th International Conference on Artificial intelligence and Law, pages 75–84.
John Hattie and Helen Timperley. 2007. The power of feedback. *Review of educational research*, 77(1):81–
112.
Bethany Rubin Henderson. 2003. Asking the lost question: what is the purpose of law school. Journal of Legal Education, 53(1):48–79.
Constantin Houy, Tim Niesen, Peter Fettke, and Peter Loos. 2013. Towards automated identification and analysis of argumentation structures in the decision corpus of the german federal constitutional court. In 2013 7th IEEE International Conference on Digital Ecosystems and Technologies (DEST), pages 72–77.
IEEE.
Zixuan Ke, Winston Carlile, Nishant Gurrapadi, and Vincent Ng. 2018. Learning to Give Feedback: Modeling Attributes Affecting Argument Persuasiveness in Student Essays. In *IJCAI*, pages 4130–4136.
Susan Hanley Kosse and David T Butle Ritchie. 2003.
How judges, practitioners, and legal writing teachers assess the writing skills of new law graduates:
A comparative study. *Journal of Legal Education*,
53(1):80–102.
Klaus Krippendorff. 1980. *Validity in content analysis*.
Campus, New York.
Klaus Krippendorff. 2011. Computing krippendorff's alpha-reliability. *Departmental Papers (ASC)*, pages 1–10.
J Richard Landis and Gary G Koch. 1977. The measurement of observer agreement for categorical data.
International Biometric Society, pages 159–174.
John Lawrence and Chris Reed. 2019. Argument mining: A survey. *Computational Linguistics*, 45(4):765–
818.
Anastasios Lytos, Thomas Lagkas, Panagiotis Sarigiannidis, and Kalina Bontcheva. 2019. The evolution of argumentation mining: From models to social media and emerging tools. Information Processing &
Management, 56(6):102–157.
JIN Man. 2022. The Appraisal-Based Case Teaching Method in China's Legal education. *Canadian Social* Science, 18(2):1–4.
Janet Metcalfe. 2017. Learning from errors. *Annual* Review of Psychology, 68:465–489.
Jeffrey Metzler. 2002. The importance of IRAC and legal writing. *University of Detroit Mercy Law Review*,
80:501–514.
Christian M Meyer, Margot Mieskes, Christian Stab, and Iryna Gurevych. 2014. DKPro agreement: An open-source Java library for measuring inter-rater agreement. In Proceedings of COLING 2014, the 25th international conference on computational linguistics: System demonstrations, pages 105–109.
Raquel Mochales and Aagje Ieven. 2009. Creating an argumentation corpus: do theories apply to real arguments? A case study on the legal argumentation of the ECHR. In Proceedings of the 12th international conference on artificial intelligence and law, pages 21–30.
Raquel Mochales and Marie-Francine Moens. 2008.
Study on the Structure of Argumentation in Case Law. In *Proceedings of the 2008 Conference on* Legal Knowledge and Information Systems: JURIX
2008: The Twenty-First Annual Conference, pages 11–20, Amsterdam, The Netherlands, The Netherlands. IOS Press.
Raquel Mochales and Marie-Francine Moens. 2011. Argumentation mining. *Artificial Intelligence and Law*,
19(1):1–22.
Raquel Mochales-Palau and M Moens. 2007. Study on sentence relations in the automatic detection of argumentation in legal cases. *Frontiers in Artificial* Intelligence and Applications, 165:89–99.
Marie-Francine Moens, Erik Boiy, Raquel Mochales Palau, and Chris Reed. 2007. Automatic detection of arguments in legal texts. In *Proceedings of the 11th* international conference on Artificial intelligence and law, pages 225–230.
Hans-Joachim Musielak and Wolfgang Hau. 2005.
Grundkurs BGB, 17 edition. CH Beck, München.
Joonsuk Park and Claire Cardie. 2018. A corpus of erulemaking user comments for measuring evaluability of arguments. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC).
Eyal Peer, Laura Brandimarte, Sonam Samat, and Alessandro Acquisti. 2017. Beyond the Turk: Alternative platforms for crowdsourcing behavioral research. *Journal of Experimental Social Psychology*,
70:153–163.
Philip M Podsakoff and Jiing-Lih Farh. 1989. Effects of feedback sign and credibility on goal setting and task performance. *Organizational behavior and human* decision processes, 44(1):45–67.
Prakash Poudyal, Teresa Gonalves, and Paulo Quaresma.
2019. Using Clustering Techniques to Identify Arguments in Legal Documents. In *Third Workshop* on Automated Semantic Analysis of Information in Legal, pages 1–8.
Prakash Poudyal, Jaromír Šavelka, Aagje Ieven, Marie Francine Moens, Teresa Goncalves, and Paulo Quaresma. 2020. Echr: legal corpus for argument mining. In *Proceedings of the 7th Workshop on Argument Mining*, pages 67–75.
Chris Reed. 2006. Preliminary results from an argument corpus. *Linguistics in the twenty-first century*, pages 185–196.
Nils Reimers and Iryna Gurevych. 2019. Sentence-bert:
Sentence embeddings using siamese bert-networks.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 3982–3992, Hong Kong, China.
Jan-R Sieckmann. 2020. *Logik juristischer Argumentation*, 76 edition. Nomos Verlag, Baden-Baden.
Christian Stab and Iryna Gurevych. 2014. Annotating argument components and relations in persuasive essays. In *Proceedings of COLING 2014, the 25th international conference on computational linguistics:*
Technical papers, pages 1501–1510.
Christian Stab and Iryna Gurevych. 2017a. Parsing argumentation structures in persuasive essays. *Computational Linguistics*, 43(3):619–659.
Christian Stab and Iryna Gurevych. 2017b. Recognizing insufficiently supported arguments in argumentative essays. In *Proceedings of the 15th Conference of* the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 980–990.
Carl-Friedrich Stuckenberg. 2020. Der juristische Gutachtenstil als cartesische Methode. *ZDRW* Zeitschrift für Didaktik der Rechtswissenschaft, 6(4):323–341.
Peter Josef Tettinger. 1982. *Einführung in die juristische Arbeitstechnik*, 81 edition. Beck, München.
Stephen Toulmin, Richard D. Rieke, and Allan Janik.
1984. *An Introduction to Reasoning*, 2 edition.
Macmillan.
Stefanie Urchs, Jelena Mitrovic, and Michael Granitzer. ´
2020. Towards Classifying Parts of German Legal Writing Styles in German Legal Judgments. In *2020* 10th International Conference on Advanced Computer Information Technologies (ACIT), pages 451–
454. IEEE.
Viswanath Venkatesh and Hillol Bala. 2008. Technology acceptance model 3 and a research agenda on interventions. *Decision sciences*, 39(2):273–315.
Vern Walker, Karina Vazirova, and Cass Sanford. 2014.
Annotating patterns of reasoning about medical theories of causation in vaccine cases: toward a type system for arguments. In Proceedings of the first workshop on argumentation mining, pages 1–10.
Thiemo Wambsganss and Christina Niklaus. 2022.
Modeling Persuasive Discourse to Adaptively Support Students' Argumentative Writing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 8748–8760.
Thiemo Wambsganss, Christina Niklaus, Matthias Cetto, Matthias Söllner, Siegfried Handschuh, and Jan Marco Leimeister. 2020a. Al: an adaptive learning support system for argumentation skills. In *Proceedings of the 2020 CHI Conference on Human* Factors in Computing Systems, pages 1–14.
Thiemo Wambsganss, Christina Niklaus, Matthias Söllner, Siegfried Handschuh, and Jan Marco Leimeister. 2020b. A corpus for argumentative writing support in german. In *Proceedings of the 28th International Conference on Computational Linguistics*,
page 856–869.
Thiemo Wambsganss, Christina Niklaus, Matthias Söllner, Siegfried Handschuh, and Jan Marco Leimeister.
2021. Supporting cognitive and emotional empathic writing of students. *In 59th Annual Meeting of the Association for Computational Linguistics and the 11th* International Joint Conference on Natural Language, pages 4063–4077.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 conference on empirical methods in natural language* processing: system demonstrations, pages 38–45.
Sarah Shi Hui Wong and Stephen Wee Hun Lim. 2019.
Prevention-permission-promotion: A review of approaches to errors in learning. *Educational Psychologist*, 54(1):1–19.
## A Appendix
![12_image_0.png](12_image_0.png)
α Fleiss' Kappa 0.5600 0.4147 0.3979
0.7970 0.7813 0.7750
![12_image_1.png](12_image_1.png)
![12_image_2.png](12_image_2.png)
Table 6: Inter-annotator agreement of relations annotations.
| Classifier Model Name | Model Type | Precision | Recall | F1 Score | Class |
|-------------------------|--------------|-------------|----------|------------|---------|
| 0.92 | 0.95 | 0.93 | MC | | |
| 0.87 | 0.92 | 0.89 | C | | |
| 0.78 | 0.86 | 0.82 | D | | |
| 0.69 | 0.73 | 0.71 | S | | |
| 0.91 | 0.86 | 0.88 | N | | |
| BERT | 0.93 | 0.96 | 0.94 | MC | |
| 0.90 | 0.90 | 0.90 | C | | |
| 0.79 | 0.89 | 0.84 | D | | |
| 0.60 | 0.80 | 0.69 | S | | |
| 0.93 | 0.84 | 0.88 | N | | |
| RoBERTa | | | | | |
| Lawful Components | 0.95 | 0.95 | 0.95 | MC | |
| 0.87 | 0.91 | 0.89 | C | | |
| 0.82 | 0.86 | 0.84 | D | | |
| 0.62 | 0.71 | 0.66 | S | | |
| 0.92 | 0.87 | 0.89 | N | | |
| DistilBERT | 0.91 | 0.93 | 0.92 | MC | |
| 0.81 | 0.88 | 0.84 | C | | |
| 0.80 | 0.77 | 0.78 | D | | |
| 0.53 | 0.67 | 0.59 | S | | |
| 0.89 | 0.82 | 0.86 | N | | |
| DistilRoBERTa | 0.78 | 0.58 | 0.66 | LC | |
| BERT | 0.62 | 0.79 | 0.69 | P | |
| 0.83 | 0.74 | 0.78 | N | | |
| 0.76 | 0.51 | 0.61 | LC | | |
| RoBERTa | 0.58 | 0.68 | 0.63 | P | |
| 0.77 | 0.76 | 0.76 | N | | |
| Subsumption Types | 0.67 | 0.57 | 0.62 | LC | |
| DistilBERT | 0.61 | 0.72 | 0.66 | P | |
| 0.79 | 0.73 | 0.76 | N | | |
| 0.56 | 0.51 | 0.53 | LC | | |
| DistilRoBERTa | 0.52 | 0.66 | 0.58 | P | |
| 0.77 | 0.63 | 0.69 | N | | |
| BERT | 0.90 | 0.90 | 0.90 | - | |
| RoBERTa | 0.89 | 0.60 | 0.72 | - | |
| DistilBERT | 0.88 | 0.87 | 0.88 | - | |
| DistilRoBERTa | 0.74 | 0.98 | 0.85 | - | |
| Claim-Premise Relation | | | | | |
| Abbreviation (German) | ff. | abs. | art. | gem. | nr. | ggf. |
|-------------------------|-------------|-----------------|--------------|--------------|---------------|--------------|
| Translation | et seqq. | para. | art. | acc. to | no. | - |
| Meaning | Refers | to | fur | | | |
| ther paragraphs. | paragraph | article | according to | number | if applicable | |
| Abbreviation (German) | abl. | abschn. | abschl. | allg. | anm. | ausf. |
| Translation | - | sec. | - | - | - | - |
| Meaning | deprecating | section | markdown | in general | comment | in detail |
| Abbreviation (German) | vgl. | i.S.d. | insbes. | grds. | ggü. | bzw. |
| Translation | - | - | - | - | - | resp. |
| Meaning | see | in the sense of | notably | in principle | vis-à-vis | respectively |
| Abbreviation (German) | bzgl. | bspw. | bsp. | betr. | begr. | Beschl. |
| Translation | - | e.g. | - | - | - | - |
| Meaning | regarding | for example | example | concerning | justifying | resolution |
Table 8: Overview of abbreviations with which the models were extended. The models understand the appropriate abbreviations as such and do not break up sentences. The list will be extended in the future.
| Section | Variables | Items | Scale | | |
|---------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------|---------------|-----|--------|
| PreSurvey | Previous | experi | | | |
| ence | with | legal | | | |
| writing | Have you already taken or completed a law class? (This also includes courses such as introduction to law or similar courses that are offered, for example, as part of a business administration degree program.) | Yes / No | | | |
| PreSurvey | Previous | experi | | | |
| ence | with | legal | | | |
| writing | In which field are you studying or have you studied ? | Law, | Business | | |
| Law, | Business | | | | |
| Administration, Business Sciences | | | | | |
| PreSurvey | Demographics | 1. Age 2. Gender 3. Language | Open Scale | 1-5 | (good, |
| rather | good, | aver | | | |
| age, | rather | weak, | | | |
| weak) | + | Open | | | |
| (Open question for the explanation of the evaluation) | | | | | |
| Writing | Online assignment | "In the following, you can solve the civil law | | | |
| Task | case. Use the appraisal style. The writing support system will help you write your case solution and will also provide you with the exact facts of the case in the form of a case study. Your case solution should be about 350-450 words (the system will show you your word count)." | | | | |
| PreTest | Checking the level of knowledge before interacting with the system | Evaluation case solution 1.1 (civil law - rather good solution) Evaluation case solution 1.2 (civil law - weak solution) | Open question | | |
| Table 9: Overview of the pre-survey, pre-test and the writing task. | | | | | |
| Section | Variables | Items | Scale | |
|----------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------|--------|
| PostTest | Checking the level of knowledge after interacting with the system (assessment task) | Scale | 1-5 | (good, |
| rather | good, | aver | | |
| age, | rather | weak, | | |
| weak) | + | Open | | |
| (Open question for the explanation of the evaluation) | | | | |
| PostSurvey | Intention | to | use | |
| (Agarwal | and | | | |
| Karahanna, 2000) | Evaluation case solution 2.1 (civil law - weak solution) Evaluation case solution 2.2 (civil law - good solution) "Assuming the system would be available for a law course, I would use it again." "Assuming the system would be available at a law course, I would plan to use it." | 1- 7 Likert scale (7: highest) | | |
| PostSurvey | Perceived | useful | | |
| ness (Agarwal and Karahanna, 2000) | "Using the writing support system helps me more effectively write persuasive case solutions using the appraisal style." "I find the interaction with the system useful in writing persuasive case solutions using the appraisal style." | 1- 7 Likert scale (7: highest) | | |
| PostSurvey | Perceived | ease | of | |
| use (Venkatesh and Bala, 2008) | "Learning how to use the system would be easy for me." "I perceived the interaction with the system as easy." "I think it would be easy for me to become skillful in using the system." | 1- 7 Likert scale (7: highest) | | |
| PostSurvey | Feedback accuracy (Podsakoff and Farh, 1989) | "The systems evaluation of my case solution reflects my actual performance." "The systems has accurately evaluated my performance." "The recommendations I received from the system was an accurate assessment of my performance." "I assume that the system will help me improve my ability to write persuasive case solutions in the appraisal style." | 1- 7 Likert scale (7: highest) | |
| Postsurvey | Control questions | "Please check "Strongly agree." "A certain word was mentioned in the system tutorial video. Please write this word in the text box below." | Open question | |
| Table 10: Overview of the post-survey and the post-test. | | | | |
Group p-value W Mean (TG) Mean (CG) SD (TG) SD (CG)
Pre-Test 0.638 115 0.286 0.233 0.323 0.319 Post-Test 0.0404 146.5 0.357* 0.133 0.305 0.229 Table 11: Results of the analysis of the *learning outcome*. We show the mean, the standard derivation, Wilcoxon statistic (W), middle rank of the control group and the treatment group, as well as the results of a Wilcoxon rank-sum test. We set the significance level at alpha 0.05: p<=0.05*.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 7
✓ A2. Did you discuss any potential risks of your work?
Section 8
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1 (Introduction).
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 5 (Application Of The Corpus).
B1. Did you cite the creators of artifacts you used?
Not applicable. We have developed the system ourselves and there is still no cited source in which the system appears.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. We have developed the system ourselves.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. We have developed the system ourselves.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
All the data we have collected has been only processed anoymized. In the course of our publication, no connections to personal data can be made.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 5 (501-513).
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Training of the models for the system. See Section 5 (442-483).
## C ✓ **Did You Run Computational Experiments?**
Section 5 (442-483).
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
No, because we can't report anything in the paper. However, we can show the data here. Infrastructure for training: M1 Pro (Apple Silicon) Time for fine-tuning: The majorclaim/subsumption/conclusion/definition/none classifier: 50 minutes The premise/claim/none classifier: 15 minutes The relation classifier: 10 minutes
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
For the number of parameters we refer to this work: https://arxiv.org/pdf/1810.04805.pdf. Additionally we 110 million (since we used BERT-base).
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 5 (442-483) and the Appendix.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 5
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Section 3.3
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
See Guideline in the supplementary material.
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Section 5 (514-556).
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Section 8
✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
We made a request to the Ethics Committee of our university. This was accepted.
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Section 3.3 Section 8 |
zheng-etal-2023-characterizing | Characterizing the Impacts of Instances on Robustness | https://aclanthology.org/2023.findings-acl.146 | Building robust deep neural networks (DNNs) against adversarial attacks is an important but challenging task. Previous defense approaches mainly focus on developing new model structures or training algorithms, but they do little to tap the potential of training instances, especially instances with robust patterns carring innate robustness. In this paper, we show that robust and non-robust instances in the training dataset, though are both important for test performance, have contrary impacts on robustness, which makes it possible to build a highly robust model by leveraging the training dataset in a more effective way. We propose a new method that can distinguish between robust instances from non-robust ones according to the model{'}s sensitivity to perturbations on individual instances during training. Surprisingly, we find that the model under standard training easily overfits the robust instances by relying on their simple patterns before the model completely learns their robust features. Finally, we propose a new mitigation algorithm to further release the potential of robust instances. Experimental results show that proper use of robust instances in the original dataset is a new line to achieve highly robust models. | # Characterizing The Impacts Of Instances On Robustness
Rui Zheng1∗, Zhiheng Xi1∗, Qin Liu2, Wenbin Lai1**, Tao Gui**3†,
Qi Zhang1, Xuanjing Huang1, Jin Ma4, Ying Shan4**, Weifeng Ge**1†
1 School of Computer Science, Fudan University 2 Viterbi School of Engineering, University of Southern California 3Institute of Modern Languages and Linguistics, Fudan University 4 Tencent PCG
{rzheng20,tgui,qz,xjhuang}@fudan.edu.cn , [email protected]
{zhxi22,wblai21}@m.fudan.edu.cn
## Abstract
Building robust deep neural networks (DNNs)
against adversarial attacks is an important but challenging task. Previous defense approaches mainly focus on developing new model structures or training algorithms, but they do little to tap the potential of training instances, especially instances with robust patterns carring innate robustness. In this paper, we show that robust and non-robust instances in the training dataset, though are both important for test performance, have contrary impacts on robustness, which makes it possible to build a highly robust model by leveraging the training dataset in a more effective way. We propose a new method that can distinguish robust instances from nonrobust ones according to the model's sensitivity to perturbations on individual instances during training. Surprisingly, we find that the model under standard training easily overfits the robust instances by relying on their simple patterns before the model completely learns their robust features. Finally, we propose a new mitigation algorithm to further release the potential of robust instances. Experimental results show that proper use of robust instances in the original dataset is a new line to achieve highly robust models. Our codes are publicly available at https://github.com/ruizheng20/robust_data.
## 1 Introduction
Deep neural networks (DNNs) have made significant progress in a number of fields, such as computer vision (He et al., 2016) and natural language processing (Devlin et al., 2019), but they are susceptible to adversarial examples, which are crafted by adding small, human-imperceptible adversarial perturbations to normal examples (Goodfellow et al., 2015; Alzantot et al., 2018). To improve the robustness of models, many techniques have been developed, such as robust architecture search (Guo et al., 2020; Huang et al., 2021), model pruning (Sehwag et al., 2020; Zheng et al., 2022), adversarial training (Madry et al., 2018; Zhu et al., 2020) and regularizations (Lyu et al., 2015; Wang et al., 2021).
However, most of these defensive approaches focus on developing new model structures or training algorithms, ignoring the fact that training data has a decisive impact on the trained model.
It is widely believed that the more abundant the labeled data, the higher the likelihood of learning diverse features, which in turn leads to well generalized models (Swayamdipta et al., 2020). However, in practice, adversarial robustness remains a challenge that cannot be solved simply by scaling up the dataset (Xie and Yuille, 2020). On the one hand, recent theoretical work argues that training a model invariant to adversarial perturbations requires a much larger dataset than that is required for standard generalization (Schmidt et al., 2018; Alayrac et al., 2019). On the other hand, the model tends to use any available signal to maximize accuracy, and thus, adversarial examples can arise as a result of manipulating highly predictive but fragile features in the data (Ilyas et al., 2019). The above evidences indicate that, adversarial vulnerability is not only associated with the training data size, but is also an inherent property of the data.
Most of existing defense methods treat all data equally, which requires us to have a closer look at the dataset whether all instances contribute equally to improving the robustness of the model.
In this paper, we focus on exploring the relationship between training data and adversarial robustness, with the aim of figuring out the following questions:
Q1: Which instances are important for adversarial robustness, and how do we find them? We delve into the training dynamics of each instance and find that instances have different robustness.
Even without the help of adversarial training, a portion of the data progressively becomes more robust to perturbations, and these instances are called *robust instances*.
1 When we train models on these data in isolation, they are more helpful in improving robustness than other subsets of data with the same size. Motivated by this phenomenon, we propose a metric based on the adversarial loss of each instance across the training epochs to indicate the impact of training instances on robustness. As shown in Figure 2, this metric reveals three distinct regions in the dataset: a region with inherently robust instances, a region with non-robust instances, and a region with instances that fluctuate between robust and non-robust. Based on the proposed metrics, a significant portion of robust instances can be selected from the training dataset to significantly improve the robustness of the model.
Q2: Why is the benefit of robust instances held back when mixed with other training instances? How to make the best out of them to improve robustness? DNNs exhibit memorization effects in that they first memorize easy and clean patterns, and then hard and noisy ones
(Zhang et al., 2017; Wang et al., 2019). The robust instances have simple and straightforward task patterns that better align with human perception. We find that the model under standard training easily overfits the robust instances by relying on their simple patterns before the model starts to learn their robust features, which limits the power of robust instances. To address this problem, we propose a new mitigation algorithm that impedes overconfident predictions by regularizations for robust instances to avoid overfitting. The proposed method effectively releases the potential of robust instances, while other instances contribute little to robustness improvement. In particular, our contributions are:
- We show that the instances are not equally important to improve the robustness of the model. The robust instances are more critical to robustness than other instances.
- We propose a new approach to distinguish the robust instances from non-robust ones based on their sensitivity to perturbations during training.
- We find that the standard training easily overfits the robust instances relying on their simple patterns rather than learning robust features.
1In this paper, a robust instance means that the model is insensitive to perturbations of this instance. In the later sections, we will show that robust instances also have a positive impact on the robustness of the model.
- We propose a new mitigation algorithm to further release the potential of robust instances.
Our analysis and results are verified by extensive experiments.
## 2 Characterizing Robust Instances
We find that the model have different sensitivities to perturbations of the instances during the training phase, and this property is strongly correlated with the robustness of the trained models. Based on this, we propose an approach to identify these innately robust instances and demonstrate that they contribute more to robustness when trained in isolation.
## 2.1 Adversarial Loss During Training
Given a C-class dataset D = {(x 0 i
, yi)}
N
i=1 of size N, x 0 i denotes the natural input embeddings and yi is the label vector. Our method assumes a model fθ whose parameters θ are optimized to minimize the empirical risk, as in standard training, without any extra regularization. The loss function on the natural input x 0 i is ℓ(x 0 i
, yi, θ). We use a stochastic gradient-based optimization procedure to optimize the model parameters, with training instances randomly ordered at each epoch, across T epochs.
To measure robustness of the instances during training, we perturb the input word embeddings.2 The goal of an attack method is to find an adversarial example xithat remains in the ϵ-ball centered at x 0 i
(∥xi − x 0 i∥F ≤ ϵ) but can fool the model to make an incorrect predication (fθ(xi) ̸= yi).
The loss function on adversarial example xi can reflect to what extent the robust and useful features are preserved under adversarial perturbation (Ilyas et al., 2019):
$$\ell_{\rm adv}({\bf x}_{i},{\bf y}_{i},\mathbf{\theta})=\max_{\|{\bf x}_{i}-{\bf x}_{i}^{0}\|_{F}\leq\epsilon}\ell({\bf x}_{i},{\bf y}_{i},\mathbf{\theta}).\tag{1}$$
A wide range of attack methods have been proposed to craft adversarial examples. Projected Gradient Descent (PGD) iteratively perturbs normal input x 0for a number of steps K with fixed step size η. If the perturbation goes beyond the ϵ-ball, it is projected back to the ϵ-ball (Madry et al., 2018):
$$\mathbf{x}_{i}^{k}=\prod\left(\mathbf{x}_{i}^{k-1}+\eta\cdot\operatorname{sign}(\nabla_{\mathbf{x}}\ell(\mathbf{x}_{i}^{k-1},\mathbf{y}_{i},\boldsymbol{\theta}))\right),$$
where x k i is the adversarial example at the k-th step, sign(·) denotes the sign function and Q(·) is the projection function.
2In our work, the robustness of an instance refers to the robustness of the model on a specific instance.
![2_image_0.png](2_image_0.png)
We characterize the evolution of robustness using statistics of adversarial losses throughout training. The first statistic aims to measure the sensitivity of the model predictions in the face of perturbations. We define **sensitivity** of individual instance x 0 i as mean adversarial loss across epochs:
$${\hat{\mu}}_{i}={\frac{1}{T}}\sum_{t=1}^{T}\ell_{\mathrm{adv}}(\mathbf{x}_{i},\mathbf{y}_{i},{\boldsymbol{\theta}}_{t}),\qquad\quad(2)$$
where θt denotes the model parameters at the end of t-th epoch. We also consider a more coarse, discrete, and perhaps more intuitive statistic, the rate of times the model provides incorrect predictions when the input is perturbed, referred to as flip rate; this score has only T + 1 possible values.
Finally, we consider **variability**, i.e., the spread of adversarial loss across epochs as measured by the standard deviation:
$${\hat{\sigma}}_{i}={\sqrt{\frac{\sum_{t=1}^{T}\left(\ell_{\mathrm{adv}}(\mathbf{x}_{i},\mathbf{y}_{i},\theta_{t})-{\hat{\mu}}_{i}\right)^{2}}{T}}}.\quad(3)$$
If the model consistently assigns the same prediction to a perturbed instance (whether correct or not), this instance will have low variability. On the contrary, if the model is indecisive, this instance will have high variability.
## 2.2 Data Maps
In order to better illustrate the differences in instances, we use the above statistics as coordinates to construct a data map (Swayamdipta et al., 2020).
We construct data maps for three widely used benchmark datasets: SST-2 (Socher et al., 2013)
- a binary classification task that needs to classify movie reviews as positive or negative; QQP (Wang et al., 2017) - a paraphrase identification task to determine if two questions are paraphrases of each other; AGNews (Zhang et al., 2015) is a text classification task that classifies news articles into one of four topics. All data maps are built using results from the models based on the BERT-base (Devlin et al., 2019) architecture.
Figure 1 shows the data map for the *SST-2* dataset. It is obvious that the data follow a bellshaped curve with respect to sensitivity and variability. The majority of instances fall within the high sensitivity and moderate variability regions on the map (Figure 1, bottom-right). These instances are always non-robust to perturbations (for the model); therefore, we refer to them as *nonrobust instances*. The second group is smaller and consists of instances with low sensitivity and low variability (Figure 1, bottom-left). As such instances are robust to perturbations during training,
![3_image_0.png](3_image_0.png)
we refer to them as *robust instances*. The third group consists of instances with high variability
(Figure 1, top); these instances swing between being sensitive or robust to perturbations. Therefore, we refer to them as *swing instances*.
Robust dynamic. We consider three data subsets, i.e., 10% of the most *robust*, 10% of the most *nonrobust*, and 10% of the most *swing*. Figure 2 shows the adversarial losses of instances from these three regions of the *SST-2* dataset during the training procedure. The most significant difference among the three regions is the decline rate of their adversarial losses, which is much faster in the region of *robust* instances than in the other two regions. This means that robust instances have more robust features and they become more robust to perturbations as training proceeds, without the help of robust learning methods such as adversarial training.
Case study. Table 5 shows instances of *SST2* that belong to the different regions mentioned above. *Robust* instances have more straightforward task patterns, are better aligned with human perception, and are easy to understand. In contrast, most *non-robust* and *swing* instances are ambiguous, have no obvious task patterns, and are challenging for humans, which may explain why these instances are vulnerable to perturbations (Tsipras et al., 2019).
## 3 Data Selection Using Data Maps
The data map shows the different regions in the dataset. It is natural to wonder what role instances from different regions play in learning and adversarial robustness. We answer this question empirically by training the model solely on instances selected from each region, and then performing standard
| Dataset | Baseline | Accuracy | Robustness |
|----------------|------------|------------|--------------|
| 100% train | 92.1 | 6.1 | |
| 100% FreeLB | 91.7 | 29.4 | |
| 50% non-robust | 93.1 | 4.7 | |
| 50% swing | 91.6 | 17.2 | |
| 50% robust | 91.1 | 23.9 | |
| SST-2 | 100% train | 90.1 | 20.8 |
| 100% FreeLB | 90.2 | 27.4 | |
| 50% non-robust | 86.2 | 18.3 | |
| 50% swing | 88.9 | 20.6 | |
| 50% robust | 75.5 | 28.7 | |
| QQP | | | |
generalization (Accuracy) as well as robustness
(accuracy under attack) evaluations.
The training strategy is simple and straightforward - we train the model from scratch on the subsets of the training data selected by ranking instances based on the statistics described above.
We hypothesize that *robust* and *swing* instances are more important for improving the robustness of the model because they have more robust features and more stable to perturbations as training proceeds. We compare the performance of the models trained on different data regions with other baselines. All considered subsets contain 50% of the training data (to control the effect of training data size on performance).
Baselines. The most natural baseline is using all of the data (100% **train**). Our data selection baselines consider the subsets of 50% of the most *robust*
(50% **robust**), 50% of the most *non-robust* (50%
non-robust) and 50% of the most indecisive (50%
swing), which is a trade-off between robust and non-robust instances. Finally, we also compare our models trained on data subsets with a textual adversarial training method, Freelb (Zhu et al., 2020),
which is a strong defense baseline in NLP (100%
FreeLB).
Results. We report accuracy on the test set to evaluate generalization performance, and accuracy under attack using TextFooler (Jin et al., 2020) as the attacker to measure adversarial robustness.
Table 1 shows our results on the SST-2 and QQP
datasets. We can observe that: 1) Training on 50%
most *robust* instances results in the best robustness performance among all data selections, ex-
![4_image_0.png](4_image_0.png)
![4_image_1.png](4_image_1.png)
![4_image_2.png](4_image_2.png)
Loss
ceeding that of 100% train and even better than 100% FreeLB on QQP. 2) In both datasets, the robustness performance of *non-robust* instances is lower than all other baselines, which is expected because the features of these instances are non-robust.
3) The generalization performance of *non-robust* instances is better than that of *robust*, because well generalized features are more sensitive to adversarial perturbations. 4) The model trained on most swing instances sacrifices a bit of robustness to improve generalization compared with the performance of the model trained on *robust* instances.
In Figure 3, we show the evolution of generalization performance and adversarial robustness when we change the size of the selected subsets of data.
Each point in the figure corresponds to retraining the model from scratch (with the same hyperparameters as the base model) on an increasingly larger subset of the training data. We observe that the generalization performance improves rapidly when we increase the size of *non-robust* and *swing* subsets. This means that the original training dataset is redundant to the model, and training the model on a small portion of the data also gives excellent generalization performance. Comparatively, by increasing the number of *robust* and *swing*, 50% of the data is sufficient to achieve excellent robustness performance, while more data actually hurts robustness. On the one hand, we require sufficient data to improve the generalization, and on the other hand, excessive well-generalized data will harm the robustness. We cannot rely on data selection alone to get the best generalization and robustness at the same time, and in the next sections we will show how to leverage instances from different regions to achieve win-win results in performance.
## 4 Why Do Robust Instances Fail?
In the previous section, we observed that *robust* instances are import for adversarial robustness. This leads us to wonder why *robust* instances fail in competition with *non-robust* instances, and is it inevitable when we train these data together? In this section, we provide further insight into the training procedure by investigating the interactions between robust and *non-robust* instances.
## 4.1 Overfitting To Robust Instances
As shown in Figure 4, we find that *robust* instances are easy to learn and converge faster than non-
![5_image_1.png](5_image_1.png)
![5_image_0.png](5_image_0.png)
robust instances. As the training loss of robust instances approaches zero, the test loss is increasing, which means that the model overfits the robust instances. However, the robustness of the model continues to improve, even as the training loss becomes (close to) zero. This suggests that the model under standard training easily overfits the *robust* instances before the model starts to learn their robust features. We further show the results in Figure 4(c) when we use the Flooding (Ishida et al., 2020)
algorithm to mitigate overfitting to *robust* instances, where Flooding intentionally prevents further reduction of the training loss when it reaches a reasonably small value. Flooding prevents the model from memorizing and being overconfident in these instances. By mitigating the overfitting to robust instances, the benefits of robust data are further demonstrated.
The above analysis leads us to believe that, to some extent, the overfitting to *robust* instances reduces their ability to improve robustness, especially when competing with *non-robust* instances. To test this hypothesis, we conduct an experiment inspired by the standard continuous learning setup (Toneva et al., 2019). We created two equally sized datasets by extracting 50% of the most *robust* and 50% of the most *non-robust* instances, respectively. Then, we train a model for 2 epochs on each partition in an alternating fashion, while tracking generalization and robustness on the test set. The background color represents which of the two datasets is currently being used for training.
It can be concluded that: 1) Figures 5(a) and 5(d) show that even if we train the *non-robust* instances first, the *robust* instances can still improve robustness in the next 2 epochs. However, as the training loss of the *robust* instances converges to zero, the model learns less and less from the robust instances. 2) As shown in Figure 5(a), there is a significant conflict in learning between the robust and non-robust instances, which means that the model learns distinct features from them. 3) Figure 5(b)
shows the conflict effect between robust and nonrobust instances is reduced by mitigating the overfitting of the model to robust instances. Therefore, mitigating overfitting to robust instances allows the model to better generalize their robust features.
## 4.2 Mitigating Overfitting
Based on the above analysis, our aim is to mitigate the overfitting to *robust* instances in the standard training process. While overfitting has been extensively studied in the machine learning community to reduce the generalization gap, few approaches consider the impact of overfitting on adversarial robustness.
To address this problem, we propose to regularize the predictions of *robust* instances from being over-confident by integrating loss-restricted (LR)
methods (Szegedy et al., 2016; Ishida et al., 2020)
into the standard training framework. We believe that LR is suitable for standard training because it can be easily implemented by adding a term to the objective function. Specifically, we construct a new dataset D
p%
r using the p% most *robust* instances in the original dataset, and other instances construct D\Dp%
r . The training loss on the instance xi with LR can be expressed as:
$$\ell_{\text{LR}}(\mathbf{x}_{i},\mathbf{y}_{i},\boldsymbol{\theta})=\begin{cases}\ell(\mathbf{x}_{i},\mathbf{y}_{i},\boldsymbol{\theta}),\ \mathbf{x}_{i}\in\mathcal{D}\backslash\mathcal{D}_{r}^{p\%},\\ \mathcal{R}(\ell\left(\mathbf{x}_{i},\mathbf{y}_{i},\boldsymbol{\theta}\right))\,,\ \mathbf{x}_{i}\in\mathcal{D}_{r}^{p\%},\end{cases}$$ where $\mathcal{R}(\cdot)$ denotes regularization term. In our
method, we consider two regularizations, Flooding
(Ishida et al., 2020) and Label Smoothing (Szegedy et al., 2016), to control the training loss for alleviating overfitting.
Flooding is a direct solution to the issue that the training loss becomes (near-)zero. When the training loss reaches a reasonably small value, Flooding intentionally prevents further reduction of the training loss, and the flood level corresponds to the level of training loss that the user wants to maintain. The algorithm of Flooding is simple, modifying the training loss as (Liu et al., 2022):
$$y_{i},\theta))=|\ell(\mathbf{x}$$
RFL(ℓ(xi, yi, θ)) = |ℓ(xi, yi, θ) − b| + b, (4)
where b > 0 is the user-specified flood level. By using Flooding, the training loss will oscillate around the flood level. The model will continue to "random walk" with the same non-zero training loss, thus the model will move into a region with a flat loss landscape, leading to better generalization.
Label smoothing is another widely known technique to mitigate the overfitting problem by penalizing overconfident model outputs (Müller et al.,
2019). For a model trained with hard labels, we minimize the expected value of the cross-entropy between the true label yi and the model's output for xi, where yiis a one-hot vector with "1" for correct class and "0" for others. For a model trained with the label smoothing, we minimize the crossentropy between the modified label y LS
iand the model's output:
$${\mathcal{R}}_{\mathrm{LS}}(\ell(\mathbf{x}_{i},\mathbf{y}_{i},\boldsymbol{\theta}))=\ell(\mathbf{x}_{i},\mathbf{y}_{i}^{\mathrm{LS}},\boldsymbol{\theta}),$$
i, θ), (5)
where y LS
i = yi(1−α)+α/C, α is the smoothing parameter and C is the number of classes .
From Table 2, we can observe that the above regularizations are valid. The robustness of the model is significantly improved by mitigating the overfitting of *robust* instances, while also achieving high accuracy. We use two regularizations to prove that the above conclusion does not depend on any specific regularization. In the experimental section, we perform more experiments to verify the effectiveness of the proposed method.
## 5 Experiments
In this section, we provide experimental results using BERT-base (Devlin et al., 2019) as a backbone model on the SST-2 (Socher et al., 2013), QQP
(Wang et al., 2017) and AGNews (Zhang et al.,
| Dataset | Baseline | Accuracy | Robustness |
|-------------------|------------|------------|--------------|
| Standard Training | 92.1 | 6.1 | |
| SST-2 | + Flooding | 91.9 | 46.0 |
| + Label Smoothing | 92.5 | 41.4 | |
| Standard Training | 90.1 | 20.8 | |
| QQP | + Flooding | 90.9 | 39.4 |
| + Label Smoothing | 91.0 | 45.1 | |
$$,y_{i},\theta)-b|+$$
2015) datasets to validate and analyze the effectiveness of our proposed approach. Experimental implementation details and hyperparameters are provided in Appendix A.
## 5.1 Robust Evaluation
The evaluation metrics used in our experimental analyses include: 1) **Clean**%: the accuracy on the clean test dataset; 2) Aua%: the model's prediction accuracy under attack; 3) **\#Query**: the average number of times the attacker queries the victim model. For a robust model, higher accuracy under attack and higher query times are expected.
The baselines we used and adversarial settings are shown in Appendix A. More experimental results and analysis are presented in Appendix B.
Results. Table 3 shows the results of the proposed method and other baselines under adversarial attack. We can observe that the proposed method achieves a significant improvement in robustness compared to other defense methods. Both Flooding and Label Smoothing work well in our approach.
The proposed method improves the robustness without sacrificing accuracy, while robust tickets lose much accuracy on SST-2 despite also having a high robustness. We consistently demonstrate the effectiveness of our approach on different datasets.
## 6 Related Work
Text attacks typically generate adversarial examples by manipulating characters (Ebrahimi et al.,
2018; Gao et al., 2018), words (Ren et al., 2019; Jin et al., 2020; Li et al., 2020; Alzantot et al.,
2018; Zang et al., 2020; Maheshwary et al., 2021),
phrases (Iyyer et al., 2018) of the original input, or even entire sentences (Wang et al., 2020), to deceive the model. The most widely used attacks are word-level attacks, which replace words in a sentence with synonyms and maintain a high-level similarity and validity in the semantic (Li et al.,
2020) or embedding space (Jin et al., 2020).
To counter adversarial attacks, a number of defense methods have been developed, such as adversarial training (Madry et al., 2018; Zhu et al.,
2020; Li and Qiu, 2021), information compression
(Wang et al., 2021; Zhang et al., 2022), and model pruning (Zheng et al., 2022; Xi et al., 2022). However, most of these defensive approaches focus on developing new model structures and training algorithms, ignoring the fact that training data has a decisive impact on the robustness of the model. In this paper, we propose a new defense method from a data perspective to improve the robustness of the model by better utilizing the robust instances in the original dataset.
| Dataset | Method | Clean% | BERT-Attack | TextFooler | TextBugger | | |
|----------------------|----------|----------|---------------|--------------|--------------|------|-------|
| Aua% | #Query | Aua% | #Query | Aua% | #Query | | |
| Fine-tune | 92.1 | 3.8 | 106.4 | 6.1 | 90.5 | 28.7 | 46.0 |
| PGD | 92.2 | 13.4 | 151.3 | 18.1 | 118.5 | 44.2 | 53.6 |
| FreeLB | 91.7 | 23.9 | 174.7 | 29.4 | 132.6 | 49.7 | 53.8 |
| InfoBERT | 92.1 | 14.4 | 162.3 | 18.3 | 121.1 | 40.3 | 51.2 |
| RobustT | 90.9 | 20.8 | 169.2 | 28.6 | 149.8 | 43.1 | 53.9 |
| Ours+Flooding | 92.3 | 42.4 | 224.6 | 46.8 | 163.3 | 55.9 | 63.2 |
| Ours+Label Smoothing | 91.9 | 41.3 | 235.7 | 47.3 | 170.5 | 58.8 | 63.4 |
| Fine-tune | 90.1 | 18.1 | 187.8 | 20.8 | 131.3 | 24.3 | 58.8 |
| PGD | 91.2 | 30.5 | 254.1 | 33.6 | 174.2 | 35.9 | 89.2 |
| FreeLB | 91.3 | 32.8 | 262.8 | 36.4 | 180.2 | 37.7 | 96.8 |
| InfoBERT | 91.5 | 33.0 | 263.9 | 36.3 | 180.1 | 38.2 | 94.6 |
| RobustT | 91.2 | 35.2 | 271.2 | 37.3 | 183.9 | 39.5 | 97.0 |
| Ours+Flooding | 90.9 | 37.0 | 289.8 | 39.4 | 195.8 | 40.8 | 98.4 |
| Ours+Label Smoothing | 91.1 | 42.4 | 316.1 | 44.5 | 208.9 | 47.3 | 102.1 |
| Fine-tune | 94.7 | 4.1 | 412.9 | 14.7 | 306.4 | 40.0 | 166.2 |
| PGD | 95.0 | 20.9 | 593.2 | 36.0 | 399.2 | 56.4 | 193.9 |
| FreeLB | 95.0 | 19.9 | 581.8 | 33.2 | 396.0 | 52.9 | 201.1 |
| InfoBERT | 94.4 | 11.1 | 517.0 | 25.1 | 374.7 | 47.9 | 193.1 |
| RobustT | 94.9 | 21.8 | 617.5 | 35.2 | 415.6 | 49.0 | 206.9 |
| Ours+Flooding | 94.5 | 73.1 | 874.2 | 76.6 | 527.5 | 78.7 | 252.9 |
| Ours+Label Smoothing | 94.7 | 75.4 | 904.0 | 79.4 | 947.3 | 82.3 | 262.6 |
A body of work tends to view the existence of adversarial examples as an inevitable consequence of using high-dimensional inputs and the statistical fluctuations due to data size and data noise (Goodfellow et al., 2015; Gilmer et al., 2018). However, Ilyas et al. (2019) claim that adversarial vulnerability is a direct result of sensitivity to wellgeneralizing features in the data. Data-related studies in the field of robustness focus on improving the robustness of models using more unlabeled data
(Carmon et al., 2019; Alayrac et al., 2019) and data augmentation (Lee et al., 2020; Rebuffi et al., 2021).
Dong et al. (2021) find that low-quality data may not be useful or even detrimental to adversarial robustness. To the best of our knowledge, no work has attempted to characterize the impact of each instance in the training dataset on robustness.
| SST-2 QQP AGNews |
|--------------------|
## 7 Conclusion
In this paper, we address the challenge of understanding the impact of training instances on robustness, particularly to improve the robustness of the model. We study the adversarial losses of each instance during training and show how these losses can be used as a metric to identify robust instances.
Our empirical results suggest that the proposed metric is a very promising measure for characterizing the contribution of training instances to robustness, and can be used to prune out non-robust instances to construct a dataset that is inherently robust. Furthermore, we show that standard training can easily overfit robust instances by relying on their simple patterns before the model learns the robust features.
The robustness of the model can be significantly improved by mitigating the overfitting of the model to robust instances during the standard training.
Further investigations in this direction may lead to
## Limitations
In this work, we find that robust instances are helpful for model robustness and propose a metric to select them. However, we only applied one single criterion, i.e. the training dynamic of adversarial loss, as selection metric. More instance features can be inspected in terms of the relation with model robustness and further serve as metrics for robust data selection. Moreover, in this work, we use the selected data for standard fine-tuning with simple regularization, while the impact of data robustness on adversarial training is not studied. These two problems will be explored in future work.
## Acknowledgements
The authors wish to thank the anonymous reviewers for their helpful comments. This work was partially funded by National Natural Science Foundation of China (No.62206057,62076069,61976056), Shanghai Rising-Star Program (23QA1400200), Natural Science Foundation of Shanghai (23ZR1403500),
and CCF-Tencent Open Fund, except the third author Qin Liu, who is funded by Graduate Fellowship from University of Southern California.
## References
Jean-Baptiste Alayrac, Jonathan Uesato, Po-Sen Huang, Alhussein Fawzi, Robert Stanforth, and Pushmeet Kohli. 2019. Are labels required for improving adversarial robustness? In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 12192–12202.
Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang.
2018. Generating natural language adversarial examples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2890–2896, Brussels, Belgium. Association for Computational Linguistics.
Yair Carmon, Aditi Raghunathan, Ludwig Schmidt, John C. Duchi, and Percy Liang. 2019. Unlabeled data improves adversarial robustness. In *Advances* in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 11190–11201.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of
deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Chengyu Dong, Liyuan Liu, and Jingbo Shang. 2021.
Data quality matters for adversarial training: An empirical study. *arXiv preprint arXiv:2102.07437*.
Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box adversarial examples for text classification. In *Proceedings of the 56th* Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 31–36, Melbourne, Australia. Association for Computational Linguistics.
Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yanjun Qi. 2018. Black-box generation of adversarial text sequences to evade deep learning classifiers. In *2018* IEEE Security and Privacy Workshops, SP Workshops 2018, San Francisco, CA, USA, May 24, 2018, pages 50–56. IEEE Computer Society.
Justin Gilmer, Luke Metz, Fartash Faghri, Samuel S.
Schoenholz, Maithra Raghu, Martin Wattenberg, and Ian J. Goodfellow. 2018. Adversarial spheres. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 -
May 3, 2018, Workshop Track Proceedings. OpenReview.net.
Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Minghao Guo, Yuzhe Yang, Rui Xu, Ziwei Liu, and Dahua Lin. 2020. When NAS meets robustness: In search of robust architectures against adversarial attacks. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 628–637. Computer Vision Foundation / IEEE.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 770–778. IEEE
Computer Society.
Hanxun Huang, Yisen Wang, Sarah M. Erfani, Quanquan Gu, James Bailey, and Xingjun Ma. 2021. Exploring architectural ingredients of adversarially robust deep neural networks. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 5545–5559.
Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry.
2019. Adversarial examples are not bugs, they are features. In *Advances in Neural Information Processing Systems 32: Annual Conference on Neural* Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 125–136.
Takashi Ishida, Ikko Yamane, Tomoya Sakai, Gang Niu, and Masashi Sugiyama. 2020. Do we need zero training loss after achieving zero training error? In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of *Proceedings of Machine* Learning Research, pages 4604–4614. PMLR.
Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1875–1885, New Orleans, Louisiana. Association for Computational Linguistics.
Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is BERT really robust? A strong baseline for natural language attack on text classification and entailment. In The Thirty-Fourth AAAI
Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI
Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8018–8025. AAAI Press.
Jin-Ha Lee, Muhammad Zaigham Zaheer, Marcella Astrid, and Seung-Ik Lee. 2020. Smoothmix: a simple yet effective data augmentation to train robust classifiers. In *2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR Workshops 2020, Seattle, WA, USA, June 14-19, 2020*,
pages 3264–3274. Computer Vision Foundation /
IEEE.
Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. 2019. Textbugger: Generating adversarial text against real-world applications. In 26th Annual Network and Distributed System Security Symposium, NDSS 2019, San Diego, California, USA, February 24-27, 2019. The Internet Society.
Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, and Xipeng Qiu. 2020. BERT-ATTACK: Adversarial attack against BERT using BERT. In *Proceedings of the 2020 Conference on Empirical Methods* in Natural Language Processing (EMNLP), pages 6193–6202, Online. Association for Computational Linguistics.
Linyang Li and Xipeng Qiu. 2021. Token-aware virtual adversarial training in natural language understanding. In *Thirty-Fifth AAAI Conference on Artificial*
Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 8410–8418.
AAAI Press.
Qin Liu, Rui Zheng, Bao Rong, Jingyi Liu, ZhiHua Liu, Zhanzhan Cheng, Liang Qiao, Tao Gui, Qi Zhang, and Xuanjing Huang. 2022. Flooding-X: Improving BERT's resistance to adversarial attacks via lossrestricted fine-tuning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5634–
5644, Dublin, Ireland. Association for Computational Linguistics.
Chunchuan Lyu, Kaizhu Huang, and Hai-Ning Liang.
2015. A unified gradient regularization family for adversarial examples. In *2015 IEEE International* Conference on Data Mining, ICDM 2015, Atlantic City, NJ, USA, November 14-17, 2015, pages 301–
309. IEEE Computer Society.
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018.
Towards deep learning models resistant to adversarial attacks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.
Rishabh Maheshwary, Saket Maheshwary, and Vikram Pudi. 2021. Generating natural language attacks in a hard label black box setting. In Thirty-Fifth AAAI
Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 13525–13533. AAAI Press.
Rafael Müller, Simon Kornblith, and Geoffrey E. Hinton. 2019. When does label smoothing help? In Advances in Neural Information Processing Systems 32:
Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 4696–4705.
Sylvestre-Alvise Rebuffi, Sven Gowal, Dan Andrei Calian, Florian Stimberg, Olivia Wiles, and Timothy A. Mann. 2021. Data augmentation can improve robustness. In *Advances in Neural Information Processing Systems 34: Annual Conference on Neural* Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 29935–29948.
Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che.
2019. Generating natural language adversarial examples through probability weighted word saliency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1085–
1097, Florence, Italy. Association for Computational Linguistics.
Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, and Aleksander Madry. 2018. Adversarially robust generalization requires more data. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 5019–5031.
Vikash Sehwag, Shiqi Wang, Prateek Mittal, and Suman Jana. 2020. HYDRA: pruning adversarially robust neural networks. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS
2020, December 6-12, 2020, virtual.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank.
In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics.
Swabha Swayamdipta, Roy Schwartz, Nicholas Lourie, Yizhong Wang, Hannaneh Hajishirzi, Noah A. Smith, and Yejin Choi. 2020. Dataset cartography: Mapping and diagnosing datasets with training dynamics. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 9275–9293, Online. Association for Computational Linguistics.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In *2016 IEEE Conference on Computer Vision* and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 2818–2826. IEEE
Computer Society.
Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, and Geoffrey J. Gordon. 2019. An empirical study of example forgetting during deep neural network learning. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. 2019. Robustness may be at odds with accuracy. In *7th International Conference on Learning Representations,*
ICLR 2019, New Orleans, LA, USA, May 6-9, 2019.
OpenReview.net.
Boxin Wang, Shuohang Wang, Yu Cheng, Zhe Gan, Ruoxi Jia, Bo Li, and Jingjing Liu. 2021. Infobert:
Improving robustness of language models from an information theoretic perspective. In *9th International* Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
Tianlu Wang, Xuezhi Wang, Yao Qin, Ben Packer, Kang Li, Jilin Chen, Alex Beutel, and Ed Chi. 2020. CATgen: Improving robustness in NLP models via controlled adversarial text generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5141–
5146, Online. Association for Computational Linguistics.
Yisen Wang, Xingjun Ma, Zaiyi Chen, Yuan Luo, Jinfeng Yi, and James Bailey. 2019. Symmetric cross entropy for robust learning with noisy labels. In *2019* IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27
- November 2, 2019, pages 322–330. IEEE.
Zhiguo Wang, Wael Hamza, and Radu Florian. 2017.
Bilateral multi-perspective matching for natural language sentences. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, pages 4144–4150. ijcai.org.
Zhiheng Xi, Rui Zheng, Tao Gui, Qi Zhang, and Xuanjing Huang. 2022. Efficient adversarial training with robust early-bird tickets. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 8318–8331, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cihang Xie and Alan L. Yuille. 2020. Intriguing properties of adversarial training at scale. In *8th International Conference on Learning Representations,*
ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2020.
Word-level textual adversarial attacking as combinatorial optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6066–6080, Online. Association for Computational Linguistics.
Cenyuan Zhang, Xiang Zhou, Yixin Wan, Xiaoqing Zheng, Kai-Wei Chang, and Cho-Jui Hsieh. 2022.
Improving the adversarial robustness of NLP models by information bottleneck. In *Findings of the Association for Computational Linguistics: ACL 2022*,
pages 3588–3598, Dublin, Ireland. Association for Computational Linguistics.
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. 2017. Understanding deep learning requires rethinking generalization. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015.
Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 649–657.
Rui Zheng, Bao Rong, Yuhao Zhou, Di Liang, Sirui Wang, Wei Wu, Tao Gui, Qi Zhang, and Xuanjing Huang. 2022. Robust lottery tickets for pre-trained language models. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2211–2224, Dublin, Ireland. Association for Computational Linguistics.
Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, and Jingjing Liu. 2020. Freelb: Enhanced adversarial training for natural language understanding. In *8th International Conference on Learning* Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
## A Experimental Details A.1 Implementation Details
Our implementation of the proposed method is mainly based on BERT, so most of the hyperparameter settings are based on them.3 We use AdamW
as our optimizer with the learning rate 2e−5, a batch size 32 and a linear learning rate decay schedule with a warm-up of 0.1. The dropout rate is set to 0.1 for all task-specific layers. We implement three adversarial attack methods using TextAttack framework and follow the default parameter settings.4 The accuracy (Clean%) is tested on the whole test set. Other adversarial robustness evaluation metrics (e.g., Aua% and \#Query) are evaluated on the 1000 randomly selected test instances for all datasets. All experiments are conducted using NVIDIA RTX3090 GPUs.
## A.2 Hyperparameters
Our proposed method consists of two stages; the first stage finds robust instance from the training dataset based on the statistics of adversarial loss, and the second stage, uses regularizations to mitigate the overfitting of the model to robust instances during standard training. Adversarial loss objective introduces four hyperparameters: the perturbation step size η, the initial magnitude of perturbations ϵ0, the number of adversarial steps K, and we do not constrain the bound of perturbations. In addition, we report the flood level b, smoothing parameter α and the p% most *robust* instances in the proposed overfitting mitigation method.
| Hyperparameters | SST-2 | QQP | AGNEWS | |
|-------------------|---------|-------|----------|----|
| η | 0.08 | 0.08 | 0.08 | |
| ϵ0 | 0.05 | 0.05 | 0.05 | |
| Stage1 | K | 8 | 8 | 8 |
| Epoch | 10 | 10 | 10 | |
| b | 0.2 | 0.2 | 0.2 | |
| α | 0.8 | 0.8 | 0.8 | |
| Stage2 | p% | 30 | 50 | 50 |
| Epoch | 5 | 5 | 5 | |
Table 4: Hyperparameters used in the proposed method.
## A.3 Baselines
The baseline methods we use include: 1) **Fine-tune**
(Zhang et al., 2015): the official BERT implementation on downstream tasks; 2) PGD (Madry et al.,
2018): standard adversarial training with PGD attacks; 3) **FreeLB** (Zhu et al., 2020): an enhanced adversarial training to generate adversarial examples at low cost; 4) **infoBERT** (Wang et al., 2021):
the information bottleneck-based approach filtering out redundant and noisy information to improve the robustness of the features; 5) **RobustT** (Zheng et al., 2022): the robust sub-network extracted from the original model with innately better robustness.
## A.4 Attack Settings.
Three widely accepted attack methods are used to evaluate the robustness of the proposed approach and other baselines. **BERT-Attack** (Li et al., 2020) and **TextFooler** (Jin et al., 2020) are two word-level attackers that first identify the important words in a sentence, and then replace them with semantically similar and grammatically correct synonyms.
TextBugger (Li et al., 2019) generates adversarial typos by using both character-level and word-level perturbations.
## B Additional Results B.1 Case Study
Table 5 shows the case study for the instances selected by the proposed metric. *Robust* instances have more straightforward task patterns, are better aligned with human perception, and are easy to understand. In contrast, most *non-robust* and swing instances are ambiguous, have no obvious task patterns, and are challenging for humans.
## B.2 Importance Of Robust Dynamic
In this paper, we propose a new metric that identifies important instances contributing to adversarial robustness based on the adversarial loss during training. To further understand the role of the adversarial loss in our approach, we compared our method with a metric based on the original training loss. From the results in Table 6, data selection based on training loss statistics can identify instances that play an important role in generalization, rather than robustness.
## B.3 **Mitigating Overfitting To Other Instances**
In the proposed method, we use regularization to mitigate the overfitting of model to robust instances. In Table 7, we show the results when regularization is applied on other instances with different sizes.
When we use regularization on robust and swing instances, the robustness of the model is significantly
| Instances | Sentence | Label |
|--------------------------------------------------------------------------------------------------------------------|---------------------------------------------|----------|
| a charming , funny and beautifully crafted import | Positive | |
| a lovely and beautifully | Positive | |
| have a great time | Positive | |
| bright shining star | Positive | |
| bad writing , bad direction and bad acting - the trifecta of badness | Negative | |
| a good time | Positive | |
| charming , funny and beautifully crafted import | Positive | |
| a lovely and beautifully photographed romance . | Positive | |
| 's lovely and amazing | Positive | |
| have a good time | Positive | |
| Robust | vulgarity , sex scenes , and | Negative |
| hard-driving narcissism is a given | Negative | |
| marinated in clichés and mawkish dialogue | Negative | |
| as its uncanny tale of love , communal discord , and justice | Negative | |
| cloying messages and irksome characters | Negative | |
| seems a prostituted muse ... | Negative | |
| is tragically | Negative | |
| painful , horrifying and oppressively tragic | Negative | |
| some weird relative trots out the video he took of the family vacation to stonehenge | Negative | |
| cheesy backdrops , ridiculous action sequences , | Negative | |
| Non-robust | hide new secretions from the parental units | Negative |
| an overall sense of brusqueness | Negative | |
| , dragon loses its fire midway , nearly flickering out by its perfunctory conclusion . | Negative | |
| your brain and your secret agent decoder ring at the door | Negative | |
| , two towers outdoes its spectacle . | Positive | |
| as-nasty - | Negative | |
| semi-surrealist exploration of the creative act . | Positive | |
| viscerally repellent | Negative | |
| silly - and gross - but it 's rarely as moronic as some campus gross-out films . | Negative | |
| bittersweet | Positive | |
| Swing | | |
| Table 5: Examples of the most robust, non-robust and swing instances in the SST-2 training set, with gold standard | | |
Table 5: Examples of the most robust, non-robust and swing instances in the SST-2 training set, with gold standard labels. *Robust* instances have more straightforward task patterns, are better aligned with human perception, and are easy to understand.
improved, while using regularization on non-robust instances does not improve the robustness. This suggests that the excellent performance of the proposed work is due to our better exploitation of the robust features in the data rather than depending on regularizations. Although previous work finds that the Flooding algorithm can improve the robustness of the model, it cannot obtain a performance comparable to the proposed method. Moreover, there is no evidence to show that the robustness of the model can be improved by using label smoothing alone.
## B.4 Instances From More Regions
Table 6 shows the accuracy and robustness evaluation for models trained on instances selected from different regions on the data map. The model trained on robust instances (with low sensitivity and low variability) achieves the best robustness.
## B.5 **Effect Of Regularized Instance Proportion**
Figure 6 shows the proposed method across all proportions of regularized instances. The adversarial robustness improves as the proportions of regularized instances grows until a certain threshold, then the robustness deteriorates.
## B.6 Additional Data Maps
The data maps for AGNEWS and QQP are shown in Figure 7 and Figure 8, respectively.
| Datasets | Metrics | Regions | Top 10% | Top 30% | Top 50% | | |
|------------------|--------------|-----------|-----------|-----------|-----------|------|------|
| Clean% | Aua% | Clean% | Aua% | Clean% | Aua% | | |
| Bottom-Left | 87.9 | 9.9 | 89.6 | 11.5 | 91.1 | 23.9 | |
| Adversarial Loss | Bottom-Right | 83.8 | 2.9 | 89.7 | 4.2 | 93.1 | 4.7 |
| Top | 88.9 | 8.0 | 90.8 | 9.3 | 91.6 | 17.2 | |
| SST-2 | Bottom-Left | 83.5 | 13.8 | 88.2 | 11.2 | 90.7 | 12.7 |
| Training Loss | Bottom-Right | 86.0 | 4.5 | 91.2 | 5.4 | 92.4 | 4.8 |
| Top | 34.1 | 0.2 | 91.6 | 10.2 | 92.7 | 8.9 | |
| Bottom-Left | 69.0 | 23.4 | 71.7 | 25.5 | 75.5 | 28.7 | |
| Adversarial Loss | Bottom-Right | 62.3 | 13.7 | 76.2 | 16.8 | 86.2 | 18.3 |
| Top | 78.8 | 17.7 | 85.2 | 23.5 | 88.9 | 20.6 | |
| QQP | Bottom-Left | 58.9 | 15.5 | 66.3 | 11.8 | 76.7 | 14.5 |
| Training Loss | Bottom-Right | 20.5 | 0.4 | 83.1 | 7.4 | 89.0 | 11.2 |
| Top | 19.5 | 0.8 | 77.8 | 4.3 | 90.3 | 11.0 | |
Table 6: Compare the performance of models trained on instances that are selected by the statistics of adversarial loss and original training loss. Our results show that adversarial loss plays a vital role in the proposed metric.
| Datasets | Instances | Regularization | Top 10% | Top 30% | Top 50% | | | |
|-----------------|-------------|------------------|-----------|-----------|-----------|------|------|------|
| Clean% | Aua% | Clean% | Aua% | Clean% | Aua% | | | |
| Robust | Flooding | 92.7 | 25.4 | 92.3 | 46.8 | 91.9 | 46.0 | |
| Label Smoothing | 92.6 | 32.0 | 92.4 | 47.3 | 92.5 | 41.4 | | |
| Non-robust | Flooding | 92.4 | 5.4 | 92.3 | 14.0 | 92.0 | 11.9 | |
| Label Smoothing | 92.4 | 11.1 | 92.4 | 10.5 | 92.0 | 7.1 | | |
| Swing | Flooding | 92.4 | 4.4 | 92.0 | 7.9 | 92.4 | 14.6 | |
| Label Smoothing | 92.4 | 23.8 | 93.2 | 32.7 | 92.1 | 26.3 | | |
| SST-2 | Robust | Flooding | 90.9 | 20.8 | 91.1 | 31.6 | 90.9 | 39.4 |
| Label Smoothing | 91.1 | 37.4 | 90.9 | 41.0 | 91.1 | 44.5 | | |
| Non-robust | Flooding | 91.2 | 20.2 | 91.1 | 22.2 | 91.1 | 27.6 | |
| Label Smoothing | 91.4 | 28.7 | 91.2 | 27.2 | 91.1 | 24.2 | | |
| Swing | Flooding | 91.1 | 22.0 | 91.0 | 22.6 | 90.9 | 29.0 | |
| Label Smoothing | 91.2 | 30.2 | 91.2 | 32.8 | 91.0 | 35.2 | | |
| QQP | | | | | | | | |
![14_image_0.png](14_image_0.png)
| SST-2 QQP |
|-------------|
Datasets Instances Top 10% Top 30% Top 50%
Clean% Aua% Clean% Aua% Clean% Aua%
Robust 87.9 9.9 89.6 11.5 91.2 23.9
Non-robust 83.8 2.9 89.7 4.2 93*.1 4.*7
Low-Sensitivity 88.1 9.0 89.7 10.7 91.1 21.0 High-Sensitivity 83.7 3.2 89.7 4.5 93.2 4.5 Low-Variability 87.7 7.9 88.8 4.9 91.6 5.8 Swing 88.9 8.0 90*.8 9.*3 91.6 17.2
Small Filp Rate 62*.3 9.*7 88.6 9.4 88.9 12.8 Large Filp Rate 91.2 7.1 91.9 6.3 92.5 4.7
Robust 69.0 23.4 71.7 25.5 75.5 28.7 Non-robust 62.3 13.7 86.2 16.8 87.0 18.3
Low-Sensitivity 68.0 19.3 70.5 21.5 75.0 24.7
High-Sensitivity 65.2 9.0 77.3 20.9 86.2 21.9 Low-Variability 79.6 16.0 84.9 14.7 88.9 10.3
Swing 78.8 17.7 85.2 23.5 88.9 20.6 Small Filp Rate 69.7 13.5 73.4 19.3 78.5 21.4
Large Filp Rate 70.9 7.6 82*.1 7.*5 89.0 8.4
![15_image_0.png](15_image_0.png)
![16_image_0.png](16_image_0.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
The limitation section is after the conclusion part of the thesis.
✗ A2. Did you discuss any potential risks of your work?
Our work don't have potetial risk.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
The abstract is at the beginning of the article and the introduction is Section 1.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?**
Section 5 (Experimental Settings), Appendix C
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix A
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix A
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 5, Appendix C
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix A
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
fu-etal-2023-generate | Generate then Select: Open-ended Visual Question Answering Guided by World Knowledge | https://aclanthology.org/2023.findings-acl.147 | The open-ended Visual Question Answering (VQA) task requires AI models to jointly reason over visual and natural language inputs using world knowledge. Recently, pre-trained Language Models (PLM) such as GPT-3 have been applied to the task and shown to be powerful world knowledge sources. However, these methods suffer from low knowledge coverage caused by PLM bias {--} the tendency to generate certain tokens over other tokens regardless of prompt changes, and high dependency on the PLM quality {--} only models using GPT-3 can achieve the best result. To address the aforementioned challenges, we propose RASO: a new VQA pipeline that deploys a generate-then-select strategy guided by world knowledge for the first time. Rather than following the de facto standard to train a multi-modal model that directly generates the VQA answer, {pasted macro {`}MODEL{'}}name first adopts PLM to generate all the possible answers, and then trains a lightweight answer selection model for the correct answer. As proved in our analysis, RASO expands the knowledge coverage from in-domain training data by a large margin. We provide extensive experimentation and show the effectiveness of our pipeline by advancing the state-of-the-art by 4.1{\%} on OK-VQA, without additional computation cost. | # Generate Then Select: Open-Ended Visual Question Answering Guided By World Knowledge
Xingyu Fu1∗, Sheng Zhang2, Gukyeong Kwon2**, Pramuditha Perera**2, Henghui Zhu2, Yuhao Zhang2, Alexander Hanbo Li2**, William Wang**2, Zhiguo Wang2, Vittorio Castelli2, Patrick Ng2**, Dan Roth**1,2**, Bing Xiang**2 1 University of Pennsylvania, 2 AWS AI Labs [email protected]
## Abstract
The open-ended Visual Question Answering
(VQA) task requires AI models to jointly reason over visual and natural language inputs using world knowledge. Recently, pre-trained Language Models (PLM) such as GPT-3 have been applied to the task and shown to be powerful world knowledge sources. However, these methods suffer from low knowledge coverage caused by PLM bias - the tendency to generate certain tokens over other tokens regardless of prompt changes, and high dependency on the PLM quality - only models using GPT-3 can achieve the best result.
To address the aforementioned challenges, we propose RASO: a new VQA pipeline that deploys a generate-then-select strategy guided by world knowledge for the first time. Rather than following the de facto standard to train a multi-modal model that directly generates the VQA answer, RASO first adopts PLM to generate all the possible answers, and then trains a lightweight answer selection model for the correct answer. As proved in our analysis, RASO expands the knowledge coverage from in-domain training data by a large margin. We provide extensive experimentation and show the effectiveness of our pipeline by advancing the state-of-the-art by +4.1% on OK-VQA,
without additional computation cost. Code and models are released at http://cogcomp.org/
page/publication_view/1010
## 1 Introduction
Open-ended Visual Question Answering (VQA), that requires answering a question based on an image, has received much attention in machine learning research in the past decade (Antol et al.,
2015; Goyal et al., 2017). Knowledge-based VQA(Marino et al., 2019; Schwenk et al., 2022)
is a variant of VQA, where models have to use external knowledge that is not present in the image
∗Work done during internship at AWS AI Labs
![0_image_0.png](0_image_0.png)
ti ti
Figure 1: An example data from the OK-VQA dataset, which requires external knowledge not present in the image to answer the question.
to generate the answer. It is a more challenging problem as it requires joint reasoning over visual and natural language inputs using world knowledge. For example, in Figure 1, the VQA model needs to conduct multiple levels of inference: to detect the objects in the image (e.g. laptops, whiteboard, etc),
to retrieve external world knowledge (e.g, university is an institution and has lecture rooms, lecture rooms have laptops, stairs, and whiteboard, etc),
and combine the important visual parts with retrieved knowledge to induce the final answer (e.g.
university).
In this paper, we focus on improving the important step of external knowledge retrieval. A common procedure of previous VQA methods (Marino et al., 2021; Wu et al., 2022) is to retrieve with knowledge graphs from diverse knowledge bases
(e.g. Wikipedia (Wikipedia contributors, 2004),
ConceptNet (Liu and Singh, 2004), etc.), with the results being input to an answer generation model.
However, the retrieved knowledge could be noisy, irrelevant, and redundant, and therefore lead to mismatches that limit the VQA performance. Motivated by the development of large-scale PLMs such as GPT-3 (Brown et al., 2020) that obtain state-of-the-art (SOTA) performance in most NLP
tasks including text generation (Chowdhery et al.,
2022), more recent approaches PiCA (Yang et al., 2022) and KAT (Gui et al., 2022) propose to re-
![1_image_0.png](1_image_0.png)
![1_image_1.png](1_image_1.png)
trieve from GPT-3 and achieve better performance for their neat and high-quality knowledge. Specifically, PiCA directly treats GPT-3 output as the VQA answer, while KAT further uses GPT-3 outputs to train an answer generation model.
| GPT-J UL2 GPT-3 OPT Codex | | | | | |
|-----------------------------|------|------|------|-------|------|
| P romptQ | 32.4 | 32.6 | - | 34.21 | 44.8 |
| P romptQC | 37.1 | 37.5 | 48.0 | 37.8 | 52.9 |
Table 1: Knowledge coverage (%) of different five PLMs, evaluated on OK-VQA. *P rompt*Q means that the prompt to PLM is constructed by the VQA question only, and *P rompt*QC means that the prompt is constructed by the VQA image and question together. Note that the GPT-3 score is taken from (Yang et al., 2022).
While achieving SOTA at the time, the two models suffer from the low knowledge coverage caused by PLM bias - the tendency to generate certain tokens over other tokens despite the prompt changes, and their performance are highly dependent on the PLM quality - only GPT-3 and Codex can achieve good results. As illustrated in Table 1, we report the knowledge coverage percentage of different PLMs on OK-VQA (Marino et al., 2019), a knowledgebased open VQA dataset. We use the accuracy of PiCA as a representation of knowledge coverage, and the first column indicates the PLM input prompts, where *P rompt*Q is constructed by VQA
question only, and *P rompt*QC is constructed by image and question together. The top row lists five selected PLMs with parameter size varying from 6.7B to 175B: GPT J (Wang and Komatsuzaki, 2021), UL2 (Tay et al., 2022), OPT-175B (Zhang et al., 2022), GPT-3, and Codex (Chen et al., 2021).
Table 1 proves that existing VQA approaches using PLMs can only cover less than half (37% - 53%) of the required external knowledge. Further, the small difference (5% - 8%) between *P rompt*Q and P romptQC coverage percentages show that PLM
bias - the tendency to generate certain tokens over others given the same question - is not alleviated by prompt changes such as the inclusion of the image information or not.
To address these challenges, we propose RASO, a new VQA pipeline that expands world knowledge retrieval by requesting PLMs to generate multiple answer choices, followed by an answer selection model. As shown in Figure 2, we first propose a new prompting method to retrieve a long list of possible answers using in-context examples from in-domain training data. Note that for the example data in Figure 1, the PiCA end-task output would be
"office" as in AQC in Figure 2. With this prompting method, we expand the external knowledge coverage by more than +20% for each PLM, without additional training data. Then, as illustrated in Figure 3, we propose a chain-of-thought (CoT) (Wei et al., 2022) guided answer selection approach. By plugging in the previous SOTA method KAT (Gui et al., 2022) as the answer selector, we achieve the new SOTA performance 58.5% (+4.1%) on the OK-VQA dataset without additional computation effort.
Extensive experiments in Section 4 suggest that RASO provides a general way to increase the retrieved world knowledge coverage using PLMs, boosting end-task performance without additional computation cost. We believe our proposed pipeline motivates a new type of generate-thenselect VQA method and facilitates future work.
Our main contributions are: (a) We provide a new prompting method using PLMs that extends the retrieved external knowledge coverage by 20%
over previous approaches in VQA; (b) We are the first to propose a general generate-then-select VQA
pipeline, different from the de facto tradition of direct generation approaches; (c) We achieve the new SOTA on the challenging OK-VQA benchmark.
## 2 Related Work 2.1 Vqa Methods
Visual question answering (VQA) has always been one of the most popular topics in the natural language and computer vision community over recent years. While the VQA task is free-form and openended as first proposed in (Antol et al., 2015), a large portion of previous methods (Shih et al., 2016; Anderson et al., 2018; Lu et al., 2019; Gardères et al., 2020) cast it as a classification problem. It's a common strategy for them to construct a target vocabulary from the dataset's training set by answer frequency, resulting in around two to four thousand candidates in the target vocabulary (Ben-Younes et al., 2017; Yu et al., 2019; Marino et al., 2021; Wu et al., 2022). These methods suffer from the limited answer vocabulary - if the gold answer is outside of the vocabulary, then there is no way for these models to have the correct answer.
Rather than closed-set classification, several recent methods focus on direct generating for the correct answer (Gui et al., 2022; Salaberria et al., 2023)
using transformer-based models such as T5 (Raffel et al., 2020). Large-scale multi-modal models trained on multiple vision language tasks (Alayrac et al., 2022; Chen et al., 2022) have also become popular and achieved good performance on the OK-VQA dataset. However, these models are not publicly available and necessitate a vast quantity of data and computation resources.
Different from all the previous approaches that are either classification or direct generation, our proposed pipeline RASO is the first approach ever to follow a generate-then-select strategy, as far as this paper is written. We hope to benefit from less computation cost in the selection part compared to direct generation, while keeping the free-form open-ended answer vocabulary from the answer generation part.
## 2.2 Knowledge-Based Vqa
While significant progress (Lu et al., 2016; Anderson et al., 2018; Lu et al., 2019; Jiang et al., 2020; Marino et al., 2021; Biten et al., 2022) has been made on the most famous VQA benchmarks (Antol et al., 2015; Goyal et al., 2017; Wang et al.,
2017; Singh et al., 2019), researchers start to raise more challenging questions that require external knowledge not inside the image to answer (Marino et al., 2019; Zellers et al., 2019; Park et al., 2020; Schwenk et al., 2022; Fu et al., 2022).
Two-step approaches (Marino et al., 2021; Wu et al., 2022; Gui et al., 2022; Lin and Byrne, 2022; Gao et al., 2022; Hu et al., 2022; Lin et al., 2022)
that explicitly retrieve world knowledge as input to the end-task model have received much attention. However, these methods could retrieve noisy and redundant information that limits the VQA
performance, or have low knowledge coverage.
In contrast, without retrieving documents, they may suffer from PLM hallucinations. To address these problems, we treat LLM as a world knowledge source with wide coverage, and propose new prompt-engineering methods to retrieve succinct but higher-quality knowledge, represented as answer choices.
## 3 Method
Our method consists of two steps: answer choices generation and answer selection. The overview of the proposed model is shown in Figures 2 and 3 1.
Problem Formulation Given a training dataset D = {(vi, qi, ai)}
N
i=1, where vi denotes the i-th training image and N is the total number of the training images, qi and ai represent the i-th question and its corresponding answer, respectively. We deploy a generate-then-select strategy to first generate a set of answer choices using a frozen PLM g, then trains a model p to select the correct answer from it. g takes vi and qi as inputs, and generates all the possible answers Aˆi = {aˆi0, aˆi1, aˆi2*, ...*}.
Finally, p takes vi, qi, and Aˆi as inputs and learns a set of parameters θ to select from Aˆi for the final answer.
## 3.1 Answer Choices Generation
We design our generation process with inspirations from the previous work (Yang et al., 2022; Gui et al., 2022). As demonstrated in Figures 2 and 4, we follow a similar strategy to use few-shot incontext learning and leverage a frozen PLM g to generate all the possible answer choices.
For each image-question pair, we first convert the image viinto a textual context ci following
(Yang et al., 2022), where ci consists of a caption generated from an image captioning model
(Zhang et al., 2021) and a list of tags predicted by the public Microsoft Azure tagging API32. We then construct two carefully designed text prompts P romptQ and *P rompt*QC, where Q stands for question and QC stands for question and context.
P romptQC consists of a general instruction sentence: "Please list all the possible answers to the question.", the textual context, the question, and few-shot in-context examples. The examples are context-question-answers triples taken from the training set that are most similar to the current image-question pair. Since we want to generate all the possible answers, we use all the gold answers and connect them with "or" in the few-shot examples. *P rompt*Q has similar components: a slightly different instruction sentence, the question, and few-shot examples of question-answers pairs.
Following (Yang et al., 2022; Gui et al., 2022),
we use 16-shot in-context examples and calculate the similarity scores using CLIP (Radford et al., 2021) embedding of the images and the questions. We utilize the frozen PLM g to generate outputs for both *P rompt*Q and *P rompt*QC
as demonstrated in Figure 4. The outputs are combined together to form the final answer choices Aˆi = {aˆi0, aˆi1, aˆi2*, ...*} for the current imagequestion pair. Our goal is to have ai ∈ Aˆi.
## 3.2 Answer Selection
Given vi, ci, qi, Aˆi, this step trains a model p that selects aˆi from Aˆi. Our goal is for p to output ai when ai ∈ Aˆi.
Before training p, we first generate chain-ofthought (CoT) (Wei et al., 2022) style rationales to help guide the selection process, with inspirations from (Schwenk et al., 2022). Specifically, a fixed prompt is pre-designed to generate CoT rationales, with details in Figure 6 in Appendix A.
We then construct the input for the answer selection model. In this paper, we plug in existing text generation models as p, and require them to output one choice with further fine-tuning on OK-VQA.
For each image-question pair, we concatenate the question qi, the image - represented by either ci or the image embedding using CLIP model (Radford et al., 2021), the CoT rationale coti, and the generated answers choices Aˆi. We also add sentinel tokens such that the input turns out to be in the following format: Context : ci, *question* : qi, rationale : coti, choices : Aˆi, *answers* : with minor adaptions for each specific p. Check Figure 5 for inference.
## 4 Experiment 4.1 Dataset
OK-VQA (Marino et al., 2021) is a widely used VQA dataset that requires external world knowledge outside of the image to answer the question. The dataset contains 14,031 images from the COCO dataset (Lin et al., 2014) and 14,055 crowd-
Standard VQA Prompting
![4_image_0.png](4_image_0.png)
RASO Prompting A: library Model Output A: Library, School, University, Office, Hospital.
Model Output II
Figure 4: An illustration of our proposed prompting method for choice generation enabling larger knowledge retrieval coverage, compared with standard prompting as in PiCA (Yang et al., 2022). Note that Model Input I and II corresponds to P romptQ, *P rompt*QC respectively, and correct answers are highlighted.
Table 2: Answer choices generation result on OK-VQA, representing the external knowledge coverage. Top 1, Top 3, Top 5, and All represent the highest accuracy that can be achieved using top 1, top 3, top 5, and all answer choices. All results are in accuracy scores evaluated following (Antol et al., 2015). "both" means that we combine the answer choices generated using both prompts. "ensembled" means that we combine the answer choices of all four PLMs. Note that the GPT-3 result is taken from (Yang et al., 2022).
Figure 5: Example input for the answer selection model for the image in Figures 1 and 2.
| PLM | Prompt Type | Top1 (%) | Top3 (%) | Top5 (%) | All (%) | Avg # |
|-----------|---------------|------------|------------|------------|-----------|---------|
| GPT J | P romptQ | 32.4 | 46.1 | 46.7 | 46.7 | 2.6 |
| P romptQC | 37.1 | 49.5 | 50.7 | 50.7 | 3.0 | |
| both | 37.1 | 52.0 | 55.9 | 57.1 | 4.1 | |
| UL2 | P romptQ | 32.6 | 45.4 | 46.4 | 46.5 | 2.7 |
| P romptQC | 37.5 | 51.3 | 52.8 | 52.9 | 3.0 | |
| both | 37.5 | 53.1 | 57.0 | 58.0 | 4.1 | |
| GPT-3 | P romptQC | 48.0 | - | - | - | - |
| OPT | P romptQ | 34.21 | 48.45 | 49.7 | 49.8 | 3.0 |
| P romptQC | 37.8 | 52.9 | 55.0 | 55.4 | 3.7 | |
| both | 37.8 | 55.6 | 61.0 | 63.4 | 5.2 | |
| Codex | P romptQ | 44.8 | 58.8 | 59.8 | 59.8 | 3.1 |
| P romptQC | 52.9 | 67.8 | 68.9 | 68.9 | 3.2 | |
| both | 52.9 | 68.6 | 72.6 | 73.5 | 4.5 | |
| ensembled | both | 52.9 | 68.6 | 74.6 | 81.9 | 11.0 |
| Model Input II |
|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Please list all the possible answers. Context: a large room full of laptops with people in the background. wall, computer... Q: What kind of institution does this image depict? Model Output II A: Library, School, University, Office, Hospital. |
sourced questions covering a variety of knowledge categories, with 9,009 training data and 5,046 testing data. Each question has ten annotated answers
(possibly repeated), and we follow the standard evaluation metric recommended by the VQA challenge (Antol et al., 2015). The external knowledge required in OK-VQA is not provided and there is no designated external knowledge source (such as a knowledge base), leaving the benchmark more challenging.
| Model Input |
|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Please answer the question. Context: a large room full of laptops with people in the background. wall, computer... Q: What kind of institution does this image depict? Model Output A: library |
## 4.2 Publicly Available Plms
We experiment with four different-sized PLMs that are publicly available as follows:
Codex (Chen et al., 2021) The Codex models are descendants of GPT-3 models that can understand and generate code. Their training data contains both natural language and billions of lines of public code from GitHub. We use the version code − *davinci* − 002 of Codex. OPT-175b (Zhang et al., 2022) Open Pre-trained Transformers (OPT) is a suite of decoder-only pretrained transformers ranging from 125M to 175B
parameters trained on publicly available datasets.
| Method | External Knowledge Source | Answer Selector | Acc(%) |
|-------------------------------------|-------------------------------|-------------------|----------|
| MUTAN+AN (Ben-Younes et al., 2017) | Wiki | - | 27.8 |
| ConceptBERT (Gardères et al., 2020) | ConceptNet | - | 33.7 |
| KRISP (Marino et al., 2021) | Wiki+ConceptNet | - | 38.9 |
| MAVEx (Wu et al., 2022) | Wiki+ConceptNet+Google Images | - | 39.4 |
| PiCA (Yang et al., 2022) | Frozen GPT-3 Wiki | - | 48.0 |
| KAT (Gui et al., 2022) (ensemble) | Wiki+Frozen GPT-3 Wiki | - | 54.4 |
| ClipCap (Mokady et al., 2021) . | - | - | 22.8 |
| Frozen GPT-J | 29.5 | | |
| Frozen UL2 | 33.1 | | |
| RASO | Frozen OPT | ClipCap | 31.3 |
| Frozen Codex | 35.3 | | |
| All 4 Frozen PLMs | 38.0 | | |
| Frozen GPT-J | 29.6 | | |
| RASO | Frozen UL2 | IterPLM | 33.8 |
| Frozen OPT | 58.5 | | |
| Frozen Codex | 45.7 | | |
| Frozen GPT-J | 47.2 | | |
| Frozen UL2 | 45.8 | | |
| RASO | Frozen OPT | UnifiedQA | 47.8 |
| Frozen Codex | (ensemble) | 51.2 | |
| All 4 Frozen PLMs | 45.6 | | |
| Wiki+Frozen GPT-J | 50.3 | | |
| Wiki+Frozen UL2 | 52.2 | | |
| RASO | Wiki+Frozen OPT | KAT | 53.0 |
| Wiki+Frozen Codex | (ensemble) | 58.5 | |
| Wiki+ All 4 Frozen PLMs | 57.9 | | |
We use the version 175 billion parameters of OPT.
UL2 (Tay et al., 2022) Unified Language Learner
(UL2) is 20 billion parameter novel language pretraining paradigm that improves the performance of language models universally across datasets and setups released recently. UL2 frames different objective functions for training language models as denoising tasks, where the model has to recover missing sub-sequences of a given input.
GPT-J (Wang and Komatsuzaki, 2021) GPT-J is a 6 billion parameter, autoregressive text generation model trained following (Wang, 2021). The model consists of 28 layers with a model dimension of 4096, and a feed-forward dimension of 16384.
During prompting, we always set the temperature to 0.001 and max token to 15.
## 4.3 Answer Choices Generation Results
The answer choice generation result is shown in Table 2. Top 1, Top 3,..., All represent the highest accuracy that can be achieved using top 1, top 3,
..., and all answer choices, calculated following the standard VQA evaluation metric in (Antol et al.,
2015). Note that the GPT-3 score is taken from
(Yang et al., 2022). We do not experiment with GPT-3 in this paper due to the required cost. Avg \#
stands for the average number of answer choices.
While previous VQA methods also retrieve from PLMs, they have a similar result as if using P romptQC and Top1 choice. As discussed before, these generation results can represent the external knowledge coverage ratio. From the table, Codex covers the majority of the knowledge needed and has the highest score of 73.5%. Using our promptengineering method, the knowledge coverages of all PLMs increase by a large margin of at least 20% (which are the accuracy differences between Top1 choice by *P rompt*QC and All choices by both prompts).
## 4.4 Answer Selection Models
We plug in existing text-generation models as answer selectors and experiment on four methods:
KAT (Gui et al., 2022) is a VQA method that uses a sequence-to-sequence model composed of an encoder and a decoder, similar to T5 (Raffel et al.,
2020). As far as this paper is written, KAT is known to be the SOTA method on OK-VQA benchmark.
ClipCap (Mokady et al., 2021) uses the CLIP (Radford et al., 2021) encoding as a prefix to generate
| KAT | Top1 All w/o cot All w/ cot | | |
|------------------|-------------------------------|------------|------|
| GPT-J (single) | 45.9 | 47.8 | 49.6 |
| GPT-J (ensemble) | 46.6 | 48.4 | 50.3 |
| UL2 (single) | 50.2 | 50.7 | 51.2 |
| UL2 (ensemble) | 51.1 | 51.5 | 52.2 |
| OPT (single) | 51.7 | 52.3 | 52.5 |
| OPT (ensemble) | 52.1 | 52.9 | 53.0 |
| Codex (single) | 56.2 | 57.1 | 57.5 |
| Codex (ensemble) | 57.1 | 58.1 | 58.5 |
| All (single) | 56.4 | 56.9 | 57.0 |
| All (ensemble) | 57.0 | 57.6 | 57.9 |
| UnifiedQA | All w/o cot | All w/ cot | |
| GPT-J (single) | 45.6 | 46.0 | |
| GPT-J (ensemble) | 46.6 | 47.2 | |
| UL2 (single) | 44.8 | 44.6 | |
| UL2 (ensemble) | 45.8 | 45.8 | |
| OPT (single) | 47.9 | 46.8 | |
| OPT (ensemble) | 49.0 | 47.8 | |
| Codex (single) | 51.1 | 50.4 | |
| Codex (ensemble) | 52.1 | 51.2 | |
| All (single) | 45.1 | 44.6 | |
| All (ensemble) | 45.7 | 45.3 | |
textual captions by employing a simple mapping network over the raw encoding, and then fine-tunes a language model to generate a valid caption. The language model we use here is GPT-2. In this paper, we adapt this model by adding question tokens, CoT rationale tokens, and answer choices tokens to the prefix as input, with the target to generate answers instead of captions. We train the mapping network from scratch and also fine-tune GPT-2.
IterPLM Inspired by previous work (Wang et al.,
2022), we use iterative prompting with the same PLM in choice generation for correct answer selection. A snippet of an example prompt is shown in Figure 5. We use 8-shot in-domain examples with the temperature set to 0.001 and max token set to 5.
Table 5: Ablation study on how different inputs influence the answer selection result using IterPLM: iterative prompting using the same PLM, on OK-VQA. All results are in accuracy scores. Both setting use all the answer choices.
| Type | GPT-J | UL2 | OPT | Codex | |
|----------|---------|-------|-------|---------|------|
| DG | 23.5 | | | | |
| ViT-L_14 | w/o cot | 28.7 | 30.3 | 29.1 | 33.4 |
| w/ cot | 29.5 | 33.1 | 31.3 | 35.3 | |
| DG | 21.6 | | | | |
| RN50x64 | w/o cot | 29.3 | 30.3 | 28.6 | 34.5 |
| w/ cot | 29.6 | 32.6 | 31.4 | 36.4 | |
| GPT-J | UL2 | OPT | Codex | |
|---------|-------|-------|---------|------|
| w/o cot | 28.5 | 29.1 | 31.6 | 45.6 |
| w/ cot | 28.1 | 32.3 | 33.5 | 44.9 |
UnifiedQA (Khashabi et al., 2022, 2020) is a multiple-choice question answering (QA) model that performs well across 20 QA datasets, using the T5ForConditionalGeneration model. We load UnifiedQA v2 (Khashabi et al., 2022) checkpoint unifiedqa-v2-t5-large-1251000.
## 4.5 End-Task Vqa Results
As illustrated in Table 3, we compare our proposed pipeline against several standard baseline approaches: MUTAN+AN (Ben-Younes et al., 2017),
ConceptBERT (Gardères et al., 2020), KRISP
(Marino et al., 2021), MAVEx (Wu et al., 2022),
PiCA (Yang et al., 2022), and KAT (Gui et al.,
2022), on the OK-VQA data test set. RASO outperforms the previous SOTA by an absolute 4%
margin, achieving the new SOTA.
Comparing different answer selectors, it is surprising that the two transformer-based text-only models: UnifiedQA and KAT significantly outperform the multi-modal ClipCap model by around 20% on average, even though their sizes (T5 large)
are much smaller than that of GPT-2. We believe this phenomenon is because the Clip image embeddings trained using image captions do not have enough granularity to support reasoning over the image, question, and answer choices for answer selection, compared to T5 models. Besides, IterPLM
has much worse scores than we imagined. While many papers (Wang et al., 2022) show that iterative prompting should boost the performance, our experiments suggest that asking the PLMs to select between their own output at the highest confidence is indeed a very difficult problem for them.
In Table 3, we also compare single PLM answer choices with ensembled choices by all four PLMs, with the latter showing lower scores. We believe this is because the answer selectors we experiment on are not good enough, and thus increasing choice numbers turns out to hurt the performance.
## 4.6 Implementation Details
In the answer choice generation step, we use 16shot in-context examples on the test data. On the training data, because we have ten gold answers with repetitions, we use 4-shot in-context learning for faster generation. The temperature for PLM
generation is set to be 0.001. The generation max token length is set to be 15. All experiments of selection models have been run in 8 NVIDIA V100 Tensor Core GPUs with 32 GiB of memory each, 96 custom Intel Xeon Scalable (Skylake) vCPUs, and 1.8 TB of local NVMe-based SSD storage. The running times for KAT, UnifiedQA and ClipCap are less than 4, 2 and 1 hours, respectively. OPT-175b model is locally set up in 32 NVIDIA V100 Tensor Core GPUs to make inferences. The learning rates for KAT, UnifiedQA and Clipcap are set as 3e-5, 5e-5 and 2e-5, respectively, for all experiments.
Optimizer AdamW (Loshchilov and Hutter, 2017)
is used for all selection models.
## 5 Ablation Studies
We perform qualitative and quantitative analysis on the answer selection results to better understand whether the expanded external knowledge coverage benefits the end-task VQA much. As illustrated in Tables 4 to 6, we investigate the impact of various inputs on the answer selection results, with answer choices representing the retrieved knowledge.
CoT Rationale Impact From the experiments results in Tables 4 to 6 where we compare the settings: "w/cot" and "w/o cot", input with CoT rationales consistently boosts the answer selection performance of KAT, UnifiedQA, and ClipCap. However, this conclusion fails for iterative prompting
- adding CoT hurts the performance of IterPLM
when we use GPT-J and Codex. We believe this can result from the difference in CoT qualities, and different pre-training methods and data.
Choice Number Impact As shown in Table 4, larger knowledge coverage, represented by using choices from all four PLMs versus a single PLM,
can not consistently increase the performance of KAT or UnifiedQA. As we compare the results on Codex choices and that on all PLMs choices, more choices always lead to lower accuracy scores.
This is somehow against our instinct, and we believe it is because our answer selectors are not good enough. Digging deeper into the problem, we further compare the difference between using Top1 choices and all choices in KAT as in the top table.
Note that the Top1 results here are not the same as the Top1 accuracy in Table 2 because KAT uses Wikipedia knowledge by design so it further expands knowledge coverage. We can see that using all choices is consistently better than using Top 1 choice. However, the improvements are too small
(0.4-1.9 %) considering that their knowledge coverages differ by at least 20% as in Table 1, suggesting that KAT, while being the best, is still not the ideal selection model, and motivating future research in this direction.
Multi-modal Selector Impact As demonstrated in Table 6, we experiment with the two versions of CLIP embedding: "ViT-L_14" and "RN50x64" and the difference between direct generation (DG)
and answer selection is constantly large - providing answer choices definitely helps ClipCap to generate the correct answer.
Ensemble Impact Our answer choice generation step is indeed ensembling on PLMs results.
Previous VQA methods that retrieve from PLMs also conduct ensembling but in a different way
(Yang et al., 2022). They usually request the same prompt (see example in Figure 4) multiple times and take the majority-voted answer. This process is called multi-query ensemble, and could boost the GPT-3 performance by about 5%. We argue that our proposed RASO prompting is superior to multi-query ensemble in that we allow more diversity in the output and provide VQA systems more explainability by separating the choice-generation and selection steps, without additional API request cost or longer inference time.
## 6 Conclusion
In this paper, we propose RASO: a new VQA
pipeline following a generate-then-select strategy guided by world knowledge. RASO proposes a new prompting method that largely increases the external knowledge coverage by a margin of more than 20% compared to previous approaches on the OK-VQA benchmark. Our pipeline achieves the new SOTA 58.5% on the end-task performance ,
encouraging avenues for future studies.
## 7 Limitations
While the previous VQA methods that retrieve from PLMs all use GPT-3, we do not experiment with GPT-3 in this paper due to the additional cost. We only focus on applying text-generation models as answer selectors, while classification models could also potentially be good answer selectors. The multi-modal CLIP embedding has already been surpassed by several recent studies (Alayrac et al.,
2022; Singh et al., 2022; Lu et al., 2022) and therefore ClipCap cannot represent the performance of multi-modal answer selectors.
## 8 Ethical Considerations
The authors of this paper acknowledge the significance of responsible NLP in research and development. The objective of this research is to enhance the capabilities of visual question answering models, guided by human values-based world knowledge. We strive to ensure that the model is not only accurate and efficient, but also fair and unbiased.
We recognize that the VQA technology may have a substantial impact on society and pledge to be transparent in sharing our findings and progress with relevant users and stakeholders.
## Acknowledgments
The authors would like to thank researchers at AWS AI Labs who commented on or otherwise supported throughout the course of this project, including Simeng Han, Donghan Yu, Sijia Wang, and Shuaichen Chang.
## References
Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, et al. 2022. Flamingo: a visual language model for few-shot learning. *arXiv preprint arXiv:2204.14198*.
Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang.
2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6077–6086.
Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering.
In Proceedings of the IEEE international conference on computer vision, pages 2425–2433.
Hedi Ben-Younes, Rémi Cadene, Matthieu Cord, and Nicolas Thome. 2017. Mutan: Multimodal tucker fusion for visual question answering. In *Proceedings* of the IEEE international conference on computer vision, pages 2612–2620.
Ali Furkan Biten, Ron Litman, Yusheng Xie, Srikar Appalaraju, and R. Manmatha. 2022. Latr: Layoutaware transformer for scene-text vqa. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pages 16548–
16558.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374.
Xi Chen, Xiao Wang, Soravit Changpinyo, AJ Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, et al. 2022. Pali: A jointly-scaled multilingual language-image model. *arXiv preprint* arXiv:2209.06794.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. *arXiv preprint* arXiv:2204.02311.
Xingyu Fu, Ben Zhou, Ishaan Chandratreya, Carl Vondrick, and Dan Roth. 2022. There's a time and place for reasoning beyond the image. In Proceedings of the 60th Annual Meeting of the Association for
Computational Linguistics (Volume 1: Long Papers),
pages 1138–1149, Dublin, Ireland. Association for Computational Linguistics.
Feng Gao, Qing Ping, Govind Thattai, Aishwarya Reganti, Ying Nian Wu, and Prem Natarajan. 2022.
Transform-retrieve-generate: Natural languagecentric outside-knowledge visual question answering.
In *CVPR 2022*.
François Gardères, Maryam Ziaeefard, Baptiste Abeloos, and Freddy Lecue. 2020. ConceptBert:
Concept-aware representation for visual question answering. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 489–498, Online. Association for Computational Linguistics.
Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA
matter: Elevating the role of image understanding in Visual Question Answering. In *Conference on* Computer Vision and Pattern Recognition (CVPR).
Liangke Gui, Borui Wang, Qiuyuan Huang, Alexander Hauptmann, Yonatan Bisk, and Jianfeng Gao.
2022. KAT: A knowledge augmented transformer for vision-and-language. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 956–968, Seattle, United States. Association for Computational Linguistics.
Yushi* Hu, Hang* Hua, Zhengyuan Yang, Weijia Shi, Noah A Smith, and Jiebo Luo. 2022. Promptcap:
Prompt-guided task-aware image captioning. arXiv preprint arXiv:2211.09699.
Huaizu Jiang, Ishan Misra, Marcus Rohrbach, Erik Learned-Miller, and Xinlei Chen. 2020. In defense of grid features for visual question answering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10267–10276.
Daniel Khashabi, Yeganeh Kordi, and Hannaneh Hajishirzi. 2022. Unifiedqa-v2: Stronger generalization via broader cross-format training. *arXiv preprint* arXiv:2202.12359.
Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. UNIFIEDQA: Crossing format boundaries with a single QA system. In *Findings of the Association for Computational Linguistics:*
EMNLP 2020, pages 1896–1907, Online. Association for Computational Linguistics.
Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi.
2023. BLIP-2: bootstrapping language-image pretraining with frozen image encoders and large language models. In *ICML*.
Leroy Lin, Yujia Xie, Dongdong Chen, Yichong Xu, Chenguang Zhu, and Lu Yuan. 2022. REVIVE: Regional visual representation matters in knowledgebased visual question answering. In Advances in Neural Information Processing Systems.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco:
Common objects in context. In *European conference on computer vision*, pages 740–755. Springer.
Weizhe Lin and Bill Byrne. 2022. Retrieval augmented visual question answering with outside knowledge.
In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*, pages 11238–11254, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Hugo Liu and Push Singh. 2004. Conceptnet—a practical commonsense reasoning tool-kit. *BT technology* journal, 22(4):211–226.
Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.
Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee.
2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. *Advances in neural information processing systems*, 32.
Jiasen Lu, Christopher Clark, Rowan Zellers, Roozbeh Mottaghi, and Aniruddha Kembhavi. 2022. Unifiedio: A unified model for vision, language, and multimodal tasks. *arXiv preprint arXiv:2206.08916*.
Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh.
2016. Hierarchical question-image co-attention for visual question answering. *Advances in neural information processing systems*, 29.
Kenneth Marino, Xinlei Chen, Devi Parikh, Abhinav Gupta, and Marcus Rohrbach. 2021. Krisp: Integrating implicit and symbolic knowledge for opendomain knowledge-based vqa. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14111–14121.
Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. 2019. Ok-vqa: A visual question answering benchmark requiring external knowledge. In Proceedings of the IEEE/cvf conference on computer vision and pattern recognition, pages 3195–3204.
Ron Mokady, Amir Hertz, and Amit H Bermano. 2021.
Clipcap: Clip prefix for image captioning. arXiv preprint arXiv:2111.09734.
Jae Sung Park, Chandra Bhagavatula, Roozbeh Mottaghi, Ali Farhadi, and Yejin Choi. 2020. Visualcomet: Reasoning about the dynamic context of a still image. In European Conference on Computer Vision, pages 508–524. Springer.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763.
PMLR.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67.
Ander Salaberria, Gorka Azkune, Oier Lopez de Lacalle, Aitor Soroa, and Eneko Agirre. 2023. Image captioning for effective use of language models in knowledge-based visual question answering. *Expert* Systems with Applications, 212:118669.
Dustin Schwenk, Apoorv Khandelwal, Christopher Clark, Kenneth Marino, and Roozbeh Mottaghi. 2022.
A-okvqa: A benchmark for visual question answering using world knowledge. *arXiv*.
Kevin J Shih, Saurabh Singh, and Derek Hoiem. 2016.
Where to look: Focus regions for visual question answering. In *Proceedings of the IEEE conference* on computer vision and pattern recognition, pages 4613–4621.
Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. 2022. Flava: A foundational language and vision alignment model. In *Proceedings of the IEEE/CVF Conference on Computer* Vision and Pattern Recognition, pages 15638–15650.
Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. 2019. Towards vqa models that can read. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*,
pages 8317–8326.
Yi Tay, Mostafa Dehghani, Vinh Q Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, and Donald Metzler. 2022. Unifying language learning paradigms. *arXiv preprint* arXiv:2205.05131.
Ben Wang. 2021. Mesh-Transformer-JAX: ModelParallel Implementation of Transformer Language Model with JAX. https://github.com/
kingoflolz/mesh-transformer-jax.
Ben Wang and Aran Komatsuzaki. 2021. GPT-J6B: A 6 Billion Parameter Autoregressive Language Model. https://github.com/kingoflolz/
mesh-transformer-jax.
Boshi Wang, Xiang Deng, and Huan Sun. 2022. Shepherd pre-trained language models to develop a train of thought: An iterative prompting approach. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing.
Peng Wang, Qi Wu, Chunhua Shen, Anthony Dick, and Anton Van Den Hengel. 2017. Fvqa: Fact-based visual question answering. *IEEE transactions on pattern analysis and machine intelligence*, 40(10):2413–
2427.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems.
Wikipedia contributors. 2004. Plagiarism - Wikipedia, the free encyclopedia. [Online; accessed 22-July2004].
Jialin Wu, Jiasen Lu, Ashish Sabharwal, and Roozbeh Mottaghi. 2022. Multi-modal answer validation for knowledge-based vqa. In *Proceedings of the AAAI*
Conference on Artificial Intelligence, pages 2712–
2721.
Zhengyuan Yang, Zhe Gan, Jianfeng Wang, Xiaowei Hu, Yumao Lu, Zicheng Liu, and Lijuan Wang. 2022.
An empirical study of gpt-3 for few-shot knowledgebased vqa. In *AAAI*.
Zhou Yu, Jun Yu, Yuhao Cui, Dacheng Tao, and Qi Tian.
2019. Deep modular co-attention networks for visual question answering. In Proceedings of the IEEE/CVF
conference on computer vision and pattern recognition, pages 6281–6290.
Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. From recognition to cognition: Visual commonsense reasoning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. 2021. Vinvl: Making visual representations matter in vision-language models. *CVPR 2021*.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022.
Opt: Open pre-trained transformer language models.
arXiv preprint arXiv:2205.01068.
## A Appendix A.1 Cot Prompts
For our CoT generation experiments, we use a predesigned fixed prompt as partly shown in Figure 6.
## A.2 Additional Experiments
We conduct additional experiments for RASO on an augmented successor dataset of OK-VQA: AOKVQA (Schwenk et al., 2022) to prove its effectiveness. Since we do not have the baseline results or any intermediate outputs on A-OKVQA as the paper was written, we only compare with PiCA
(Yang et al., 2022) with a simpler setting: without using image tagging or chain-of-thought and only using GPT-J. The captions we use are generated using BLIP-2 (Li et al., 2023), following the default example in the paper.
Please answer the questions according to the above context.
Context: Two people are holding their martini glasses together.
===
Question: How old do you have to be in canada to do this?
Answer: From the context, because two people are holding their martini glasses together and martini is alcohol, so 'this' means drinking alcohol. So the question is asking how old you have to be in canada to drink alcohol. So the answer is 18.
Context: A dog stands behind a wire door outside
===
Question: Which wild animal that hunts in packs is related to this animal seen here?
Answer: From the context, because a dog stands behind a wire door outside, this animal seen here is the dog. So the question is asking which wild animal that hunts in packs is related to dog. So the answer is wolf.
Figure 6: The fixed prompt we use to generate chainof-thought style rationales. We randomly select seven examples in the prompt and show two of them here. We set the temperature as 0.7 and max token as 80 during inference for all PLMs.
![11_image_0.png](11_image_0.png)
Table 7: Additional comparison of RASO versus PiCA on A-OKVQA dataset. |
wu-etal-2023-hence | Hence, Socrates is mortal: A Benchmark for Natural Language Syllogistic Reasoning | https://aclanthology.org/2023.findings-acl.148 | Syllogistic reasoning, a typical form of deductive reasoning, is a critical capability widely required in natural language understanding tasks, such as text entailment and question answering. To better facilitate research on syllogistic reasoning, we develop a benchmark called SylloBase that differs from existing syllogistic datasets in three aspects: (1) Covering a complete taxonomy of syllogism reasoning patterns; (2) Containing both automatically and manually constructed samples; and (3) Involving both the generation and understanding tasks. We automatically construct 50k template-based syllogism samples by mining syllogism patterns from Wikidata and ConceptNet. To improve our dataset{'}s naturalness and challenge, we apply GPT-3 to paraphrase the template-based data and further manually rewrite 1,000 samples as the test set. State-of-the-art pre-trained language models can achieve the best generation ROUGE-L of 38.72 by T5 and the best multi-choice accuracy of 72.77{\%} by RoBERTa on SylloBase, which indicates the great challenge of learning diverse syllogistic reasoning types on SylloBase. Our datasets are released at \url{https://github.com/casually-PYlearner/SYLLOBASE}. | # Hence, Socrates Is Mortal**: A Benchmark For Natural Language Syllogistic** Reasoning
Yongkang Wu1, Meng Han1, Yutao Zhu2, Lei Li1, Xinyu Zhang1**, Ruofei Lai**1, Xiaoguang Li3, Yuanhang Ren1, Zhicheng Dou4 **and Zhao Cao**1∗
1Huawei Poisson Lab, China 2University of Montreal, Montreal, Quebec, Canada 3Huawei Noah's Ark Lab, China 4Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China
{wuyongkang7,zhangxinyu35,caozhao1}@huawei.com
## Abstract
Syllogistic reasoning, a typical form of deductive reasoning, is a critical capability widely required in natural language understanding tasks, such as text entailment and question answering. To better facilitate research on syllogistic reasoning, we develop a benchmark called SYLLOBASE that differs from existing syllogistic datasets in three aspects: (1) Covering a complete taxonomy of syllogism reasoning patterns; (2) Containing both automatically and manually constructed samples; and (3) Involving both the generation and understanding tasks.
We automatically construct 50k template-based syllogism samples by mining syllogism patterns from Wikidata and ConceptNet. To improve our dataset's naturalness and challenge, we apply GPT-3 to paraphrase the templatebased data and further manually rewrite 1,000 samples as the test set. State-of-the-art pretrained language models can achieve the best generation ROUGE-L of 38.72 by T5 and the best multi-choice accuracy of 72.77% by RoBERTa on SYLLOBASE, which indicates the great challenge of learning diverse syllogistic reasoning types on SYLLOBASE. Our datasets are released at https://github.com/
casually-PYlearner/SYLLOBASE.
## 1 Introduction
Reasoning, as a typical way for human beings to obtain new knowledge and understand the world, is also an ultimate goal of artificial intelligence (Newell and Simon, 1956; Lenat et al., 1990).
Reasoning skills, *i.e.*, examine, analyze, and critically evaluate arguments as they occur in ordinary language, have been required by many natural language processing tasks, such as machine reading comprehension (Liu et al., 2020; Yu et al., 2020),
open-domain question answering (Kwiatkowski et al., 2019; Huang et al., 2019), and text generation (Dinan et al., 2019).1 According to different mental processes, reasoning can be categorized as deductive, inductive, abductive, etc (Copi et al.,
2016). In Piaget's theory of cognitive development (Huitt and Hummel, 2003), these logical reasoning processes are necessary to manipulate information, which is required to use language and acquire knowledge. Therefore, the study of logical reasoning is worthy of our attention because it is so prevalent and essential in our daily lives.
In this study, we focus on syllogism, which is a typical form of reasoning and has been studied for a long time (it was initially defined in Aristotle's logical treatises *Organon*, composed around 350 BCE). As shown in Table 1, a syllogism often contains two premises and a conclusion, where the conclusion can be inferred based on the given premises through a deductive reasoning process.2 Though reasoning-required tasks (such as question answering) have been widely studied, the thorough study to test the deductive reasoning capabilities of a model or system is rare. In the study of syllogism, there are only a few datasets, and they have several limitations: (1) They focus merely on categorical syllogism (shown in Figure 1) (Dames et al., 2020; Dong et al., 2020; Aghahadi and Talebpour, 2022).
Even though it is the most common type, syllogisms come in a variety of forms. They involve different reasoning processes and are also beneficial. (2) Some datasets (Dames et al., 2020; Dong et al., 2020) are not in natural language, which are difficult to adapt to inference requirements in real natural language scenarios. (3) More severely, all of them have less than 10k samples, which are insufficient for training deep neural networks.
To support further study on syllogistic reasoning, in this work, we build a new natural language
| ConceptNet An open, multilingual knowledge graph | |
|------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------|
| (1) Triplet Extraction (Section 3.1) | |
| ① (human, capable of, mortal), (Socrates, is a, | ② (meteoritics, subclass of, astronomy), (astronomy, |
| human) | subclass of, exact science) |
| (2) Syllogism Construction (Section 3.2) | |
| Premise 1: All human are mortal. | Premise 1: Some astronomy are meteoritics. |
| Premise 2: All Socrates are human. | Premise 2: All astronomy are exact science. |
| Conclusion: All Socrates are mortal. | Conclusion: Some exact science are meteoritics |
| Pattern: All � are �, all � are m ⟶All � are �. | Pattern: Some � are �, all � are � ⟶ Some � are �. |
| (3) Paraphrasing (GPT-3) (Section 3.3) | |
| Premise 1: It is a fact that all human beings are | Premise 1: Meteoritics, which is a specific type of astronomy, |
| mortal. | involves the study of meteors and meteorites. |
| Premise 2: Socrates was a classic Greek philosopher | Premise 2: Astronomy is an exact science meaning precise |
| credited as one of the founders of Western philosophy. | observation and mathematical calculations are implemented. |
| Conclusion: It is true that Socrates was mortal as well. Conclusion: Some precise sciences are used to further expand upon knowledge of meteorites and meteors. | |
![1_image_0.png](1_image_0.png)
![1_image_1.png](1_image_1.png)
benchmark—SYLLOBASE, which has the following features (some examples are shown in Table 2):
First, it is a more complete benchmark that covers five types of syllogisms. Therefore, it can support more fine-grained research on certain types, their interrelationships, and their combined effect on other tasks. **Second**, all premises and conclusions are written in natural language. It more closely resembles real-world application settings in which natural language descriptions rather than categorized inputs are provided. In addition, the power of large-scale pre-trained language models can also be harnessed effectively. **Third**, with our proposed automatic construction process, we collect a large number of samples (50k in total). They can support the training of deep neural networks. In order to validate the performance on actual human syllogism, we also manually annotate 1,000 samples as the test set. This test set may also be used independently to assess the reasoning capability of models in a zero-/few-shot manner. **Finally**, to promote a more comprehensive investigation of syllogistic reasoning, we organize both a generation and an understanding task.
The experimental results indicate that there is a great deal of room for improvement in the syllogistic reasoning capabilities of existing models. Our additional experiments demonstrate the efficacy of transferring knowledge learned from our automatically constructed syllogism to actual human reasoning.
## 2 Background And Related Work 2.1 Syllogism
Syllogism is a common form of deductive reasoning. Basic syllogism can be categorized as categorical syllogism, hypothetical syllogism, and disjunctive syllogism. They can be further combined into polysyllogisms. In this section, we use the most common categorical syllogism to introduce the term and structure of syllogism. Other types of syllogism will be introduced in Section 3.
The left side of Figure 1 shows a well-known categorical syllogism about "Socrates is mortal".
We can see a categorical syllogism usually contains two premises and a conclusion. A common term (*e.g.*, "human") links two premises, and the premises respectively define the relationship between "human" and "mortal" or "Socrates". The reasoning process is to draw a conclusion based on the premises. A syllogism can also be described by a pattern, as shown in the middle side of Figure 1.
## 2.2 Related Work
Syllogistic Reasoning Dataset Several syllogistic reasoning datasets have been introduced to promote the development of this field. CCOBRA (Dames et al., 2020) is a dataset with around 10k triplets (major premise, minor premise, conclusion). The task is formed as a single-choice question, and the ground-truth conclusion is shuffled with several distractors. ENN (Dong et al., 2020)
is another similar dataset, but the syllogism is constructed from WordNet (Miller, 1995). SylloFigure (Peng et al., 2020) and Avicenna (Aghahadi
| Dataset | #Types | Natural Language | Complete Patterns | Source | Size |
|-----------------|-----------------|--------------------|---------------------|--------------------------------|--------|
| CCOBRA | 1 (Categorical) | ✗ (Triplet) | ✓ | Crowdsourcing | 10k |
| ENN | 1 (Categorical) | ✗ (Triplet) | ✓ | WordNet | 7k |
| SylloFigure | 1 (Categorical) | ✓ | ✗ | SNLI | 8.6k |
| Avicenna | 1 (Categorical) | ✓ | ✗ | Crowdsourcing | 6k |
| SYLLOBASE (Our) | 5 | ✓ | ✓ | Knowledge Base & Crowdsourcing | 51k |
Table 1: Comparison of existing syllogism datasets. Our SYLLOBASE is the largest one covering all five types.
and Talebpour, 2022) are two natural language textbased syllogism reasoning datasets, but they are designed for different tasks. SylloFigure annotates the data in SNLI (Bowman et al., 2015), restores the missing premise, and transforms each syllogism into a specific figure.3 The target is to predict the correct figure type of a syllogism. Avicenna is a crowdsourcing dataset, and the syllogism is extracted from various sources, such as books and news articles. These syllogisms are used for both natural language generation and inference tasks.
Different from existing datasets that focus only on categorical syllogism, our SYLLOBASE covers more types and patterns of syllogism and is significantly larger than existing datasets. More detailed comparisons are shown in Table 1.
Logic Reasoning in NLP There are several tasks and datasets related to logical reasoning in NLP.
The task of natural language inference (NLI) (Bos and Markert, 2005; Dagan et al., 2005; MacCartney and Manning, 2009; Bowman et al., 2015; Williams et al., 2018), also known as recognizing textual entailment, requires model to classify the relationship types (*i.e.*, contradicted, neutral, and entailment)
between a pair of sentences. However, this task only focuses on sentence-level logical reasoning, and the relationships are constrained to only a few types. Another NLP task related to logical reasoning is machine reading comprehension (MRC).
There are several MRC datasets designed specifically for logical reasoning, such as LogiQA (Liu et al., 2020) and ReClor (Yu et al., 2020). A paragraph and a corresponding question are given, and the model is asked to select a correct answer from four options. This task requires models to conduct paragraph-level reasoning, which is much more difficult than NLI.
The above logic reasoning NLP tasks attempt to improve models' general logic reasoning capability, but they pay little attention to different types of reasoning processes, such as deductive reasoning or 3Figures in syllogism, https://en.wikipedia.org/
wiki/Syllogism.
Table 2: Examples of syllogisms from our test set.
Categorical Syllogism
Premise 1: Carbon dioxide is a chemical compound.
Premise 2: Chemical compounds are considered pure substances.
Conclusion: Pure substances include carbon dioxide.
Hypothetical Syllogism
Premise 1: When you make progress in your project, you
may want to celebrate.
Premise 2: Having a party is a good choice if you want to
celebrate.
Conclusion: You may want to have a party if you achieve
great progress in your project. Disjunctive Syllogism
Premise 1: Newspapers are generally published daily or
weekly.
Premise 2: Some newspapers are not published weekly.
Conclusion: Some newspapers are daily newspapers.
Polysyllogism
Premise 1: Some movies are not cartoon movies.
Premise 2: Science fiction animations belong to animated
films.
Premise 3: Remake films are also films.
Conclusion: Some remakes are out of scope of science
fiction cartoons. Complex Syllogism
Premise 1: If Jack has computer skills and programming
knowledge, he could write programs.
Premise 2: Jack cannot write computer programs, but he
can use computers.
Conclusion: Jack does not have programming knowledge.
inductive reasoning. In this work, we study a specific form of deductive reasoning, *i.e.*, syllogism.
We hope our benchmark can support more in-depth studies on the reasoning process.
## 3 Data Construction
Our target is to develop a large-scale benchmark and support research on several typical kinds of syllogistic reasoning. It is straightforward to collect data through human annotation, as most existing datasets have explored (Dames et al., 2020; Aghahadi and Talebpour, 2022). However, this method is impracticable for obtaining large-scale data due to the high cost of human annotation.
Therefore, we propose constructing a dataset automatically from existing knowledge bases and manually rewriting 1,000 samples as the test set.
## 3.1 Data Source
Inspired by existing studies (Dong et al., 2020)
that collect data from knowledge bases, we choose Wikidata (Vrandecic and Krötzsch, 2014) and ConceptNet (Speer et al., 2017) as our data sources because they contain large-scale high-quality entities and relations.
Wikidata is an open-source knowledge base, serving as a central storage for all structured data from Wikimedia projects. The data model of Wikidata typically consists of two components: *items* and properties. Items represent things in human knowledge. Each item corresponds to a identifiable concept or object, or to an instance of a concept or object. We use entities in the top nine categories, including human, taxon, administrative territorial, architectural structure, occurrence, chemical compound, film, thoroughfare, and astronomical object.4 Then, we use the relationship of instance of, subclass of, and *part of* to extract triplets.
ConceptNet is another open-source semantic network. It contains a large number of knowledge graphs that connect words and phrases of natural language with labeled edges (relations). Its knowledge is collected from many sources, where two entities are connected by a closed class of selected relations such as IsA, *UsedFor*, and *CapableOf*.
We use ConceptNet to extract the descriptive attributes of the entities obtained from Wikidata. By this means, we can obtain another group of triplets, which are also used for constructing syllogism.
## 3.2 Data Processing
In this section, we introduce the construction process of five types of syllogism data, respectively.
Some examples are shown in Table 2.
## 3.2.1 Categorical Syllogism
As shown in Table 1, a categorical syllogism is composed of a major premise, a minor premise, and a corresponding conclusion. We first construct premises and then use them to infer the conclusion and form syllogisms.
The premise in a categorical syllogism can be summarized as four propositions according to different quantifiers and copulas:
(1) All S are P; (2) No S are P;
(3) Some S are P; (4) Some S are not P;
4The full list and the statistics are available at: https:
//www.wikidata.org/wiki/Wikidata:Statistics.
where S and P are two entities. With different combinations of the four propositions, categorical syllogisms can be categorized into 24 valid patterns. The first part of Table 2 shows an example of Dimatis syllogism, which is one of the valid patterns.5 To construct premises, we use the extracted triplets from Wikidata and ConceptNet. To obtain a proposition which contains negative relationship, we can use the *Antonym* and *DistinctFrom* relationship in ConceptNet to construct it. Taking the triplets (chemical compound, subclass of, *pure substance*) and (chemical compound, Antonym, *mixture*) as an example, we have:
(1) All chemical compounds are pure substances;
(2) No chemical compounds are mixture;
(3) Some pure substances are chemical compounds;
(4) Some pure substances are not mixture.
By this means, we can obtain various premises, which will be used for constructing syllogisms.
Considering the example in Table 2, which is a Dimatis syllogism, we first sample a triplet (carbon dioxide, IsA, *chemical compound*). Then, we use the middle term *chemical compound* to sample another triplet (chemical compound, *subclass of*, pure substance), which forms the minor premise.
Finally, we can generate a conclusion based on the pattern definition. All other different patterns of syllogisms can be constructed in a similar way.
## 3.2.2 Hypothetical Syllogism
Similar to categorical syllogism, a hypothetical syllogism has two premises and a conclusion. The difference is that the premises have one or more hypothetical propositions. A hypothetical Syllogism has three valid patterns (the full list is in Appendix A), and we use five relations (i.e., Causes, HasSubevent, HasPrerequisite, *MotivatedByGoal*,
and *CausesDesire*) in ConceptNet to construct hypothetical propositions.
The following pattern is used as an example to illustrate the data construction process:
Premise 1: If P is true, then Q is true. Premise 2: If Q is true, then R is true. Conclusion: If P is true, then R is true.
Specifically, we extract a triplet pair where the tail entity of one triplet is the head entity of another triplet, *e.g.*, (success, CausesDesire, *celebrate*) and
(celebrate, CausesDesire, *have a party*). This triplet pair can construct premises as *success makes* 5Other patterns can be referred to in Appendix A.
you want to celebrate, and *celebration makes you* want to have a party. Then, we can build a hypothetical syllogism according to the pattern, and the corresponding conclusion is success makes you want to have a party. Hypothetical syllogism with other patterns can be constructed in a similar way.
## 3.2.3 Disjunctive Syllogism
A disjunctive syllogism has two premises: One of them is a compound proposition, which tells that at least one proposition is true; The other premise tells that one proposition in the former premise is false. Then, we can infer another proposition in the former premise is true. For example, if P and Q
are two propositions, a disjunctive syllogism can be described as:
Premise 1: P is true or Q is true; Premise 2: P is not true; Conclusion: Q is true.
According to whether the two propositions can be both true, a disjunctive syllogism can be categorized as compatible or incompatible.
We use ten relations in ConceptNet to construct disjunctive syllogism, where eight of them (such as *PartOF* and *HasA*) are used for compatible disjunctive syllogism, and the rest two (i.e., *Antonym* and *DistinctFrom*) are used for incompatible disjunctive syllogism (all relations we used are listed in Appendix B). Here, we use the incompatible disjunctive syllogism as an example to illustrate the construction process.
We first sample a triplet for an entity, such as
(newspapers, CapableOf, *come weekly*) and (newspapers, CapableOf, *come daily*). Then, we can construct a premise as *newspapers can come weekly or* come daily. Next, we obtain another premise, such as *some newspapers cannot come weekly*. Finally, we can have the conclusion as *some newspapers* come daily. In this way, we can automatically construct various disjunctive syllogisms based on the triplets in ConceptNet.
## 3.2.4 Polysyllogism
A polysyllogism is a combination of a series of syllogisms. It usually contains three or more premises and a conclusion. We construct polysyllogisms based on categorical syllogisms, and the construction process can be summarized as the following steps:
(1) We sample a categorical syllogism from our categorical syllogism repository (built in Section 3.2.1).
(2) According to the form of the conclusion, we can get its predicate term and subject term.
(3) We use these terms to traverse the repository and select a premise/conclusion that contains them.
(4) We use the conclusion obtained in the second step and the selected premise/conclusion in the third step as two new premises. Then, we can infer the conclusion and check if the generated syllogism follows a valid pattern.
(5) Repeat the above process, and we can obtain a series of syllogisms.
(6) We use both premises in the first syllogism and the minor premise in all other syllogisms as the premises of the polysyllogism. The conclusion is obtained from the last syllogism's conclusion. By this means, we can construct a polysyllogism.
We provide an example in the fourth row of Table 2 to illustrate the construction process.
## 3.2.5 Complex Syllogism
In addition to constructing the previous four types of syllogism, we investigate another new type of syllogism, which is called complex syllogism. A
complex syllogism contains two premises and a conclusion, and the premises and conclusion are compound propositions, which contain one or more logical connectives (i.e., not, and, or, and *if-then*).
These logical connectives significantly increase the difficulty of the syllogism. An example of a complex syllogism is shown in the last row of Table 2.
The construction steps can be summarized as:
(1) We randomly sample a pattern from hypothetical and disjunctive syllogism as a basic pattern.
(2) We replace the simple propositions in the basic pattern (such as P, Q, and R) by a compound proposition with the logical connectives not, and, and or, (e.g., not P, *P or Q*, and *P and Q*).
(3) After the replacement, we can infer the conclusion (according to the pattern we derived, as shown in Appendix A) and construct a complex syllogism.
Rule of Replacement To replace a simple proposition by a compound proposition, we use the *Synonyms* relation in ConceptNet. For example, considering the proposition *something that might happen as a consequence of eating ice cream is pleasure*, we use the synonym of the entity *ice cream*,
i.e., *cone*, and construct a compound proposition as *something that might happen as a consequence* of eating ice cream and cone is pleasure.
## 3.3 Rewriting
With the above process, we obtain a large number of syllogisms. However, these syllogisms are constructed based on predefined patterns, which have fixed structures and may contain grammar faults. In our preliminary study, we find that models trained on such pattern-based data have a poor robustness, potentially because the models are overfitting to the patterns rather than learning the real reasoning process. To alleviate this problem, we apply GPT3 (Brown et al., 2020) for rewriting, which has been shown to be effective (Ding et al., 2022). Specifically, we use a prompt with some human-rewritten examples to ask GPT-3 to change the expression of the syllogism but keep its original meaning and pattern. The generated results have good quality in fluency, diversity, and logic, which are suitable for training models (some examples are shown in the bottom of Figure 1, and the detailed process is described in Appendix C).
Furthermore, to test the models' performance on (real) syllogisms and facilitate future in-depth research, we manually rewrite 1,000 samples from our collected data as a test set. The rewriting process includes filtering the noise, correcting the grammar faults, and paraphrasing (details process is described in Appendix D). Our experiments (see Section 4.4) will show that the test data are very challenging, whereas training on our automatically collected data is still effective.
As yet, we have obtained 50k samples by GPT-3 rewriting, which are used for training and validation, and 1k samples by further human annotation, which are used for testing. All of them are **equally**
distributed over the five types.
## 4 Experiments 4.1 Task Formalization
Based on our collected data, we design two tasks:
Conclusion Generation It is a natural language generation task. The model should generate the correct conclusion based on two given premises.
Premises and conclusions are natural language text, which can be represented as sequences of tokens. Formally, given two premises P1 =
{w P1 1
, · · · , wP1 m } and P2 = {w P2 1
, · · · , wP2 n },
the model is asked to generate the conclusion C = {w C
1
, · · · , wC
l}, where w is a token. Similar to other text generation tasks, the generation probability of the conclusion is determined by the product of the probability of each word, which can be described as: P(C|P1, P2) =
QP(w C
i|w C
<i, [P1; P2]), where [;] is concatenation operation. More premises can be handled by concatenating all of them as a long sequence.
Conclusion Selection It is a natural language understanding task. The model is asked to select a correct conclusion from four options, where three of them are distractors. Detailed construction process is given in Appendix F. With the above notations of premises and conclusion, we can define the conclusion selection task as:
$$S(C_{i},[P_{1};P_{2}])=\frac{\exp(M(C_{i},[P_{1};P_{2}]))}{\sum_{j=1}^{4}\exp(M(C_{j},[P_{1};P_{2}]))},$$
where S(Ci, [P1; P2]) is the predicted probability of Ci as a correct conclusion, and M(·, ·) is the output logit of the model.
The statistics of our dataset for both tasks are given in Appendix G.
## 4.2 Baseline And Evaluation Metrics
We compare the performance of several models.
For the conclusion generation task, we consider Transformer (Vaswani et al., 2017) and several pretrained models, including GPT-2 (Radford et al.,
2019), T5 (Raffel et al., 2020), and BART (Lewis et al., 2020). For the conclusion selection task, we employ BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), XLNet (Yang et al., 2019), and ELECTRA (Clark et al., 2020) as baseline methods. For all pre-trained models, we use the base version.
As for evaluation metrics, following previous studies (Aghahadi and Talebpour, 2022), we use ROUGE-1/2/L (Lin, 2004), BLEU-1/2 (Papineni et al., 2002), and BERT-Score (Zhang et al., 2020)
to evaluate the performance of the conclusion generation task. ROUGE and BLEU are commonly used metrics for text generation, and they measure the n-grams overlap between the generated text and the ground-truth text. BERT-Score is a recently proposed model-based metric. It leverages the pre-trained contextual embeddings from BERT
and matches words in generated and ground-truth texts by cosine similarity. For the conclusion selection task, we use Accuracy to evaluate the models' performance. The implementation details are provided in Appendix H.
| R-1 | R-2 | R-L | B-1 | B-2 | BS | R-1 | R-2 | R-L | B-1 | B-2 | BS | R-1 | R-2 | R-L | B-1 | B-2 | BS |
|-----------------------------------|-------------------------------------------------------------------------------------------------------------|----------------------------------------------|-------------|-------|------|-------|-------|-------|-------|-------|------|-------|-------|-------|-------|-------|------|
| Model | Categorical | Hypothetical | Disjunctive | | | | | | | | | | | | | | |
| Transformer 15.75 2.80 14.32 5.76 | 0.92 82.44 19.39 2.76 18.16 12.83 2.23 86.31 18.03 2.24 16.68 8.67 | 1.13 83.93 | | | | | | | | | | | | | | | |
| GPT-2 | 30.98 7.07 26.12 19.86 4.26 88.22 27.93 6.65 25.38 18.54 3.93 89.63 36.68 13.32 34.83 26.21 7.51 90.72 | | | | | | | | | | | | | | | | |
| T5 | 39.03 11.45 29.55 23.26 6.43 89.15 34.45 12.37 31.77 24.71 8.49 90.20 50.11 27.67 47.14 37.55 18.44 92.43 | | | | | | | | | | | | | | | | |
| BART | 35.19 8.88 26.86 20.73 4.18 88.93 34.77 13.03 32.22 24.21 9.27 90.22 49.07 27.10 46.14 36.36 18.56 92.52 | | | | | | | | | | | | | | | | |
| Model | Polysyllogism | Complex | All | | | | | | | | | | | | | | |
| Transformer 22.05 5.56 19.13 8.00 | 1.78 83.69 17.42 2.38 16.89 8.04 | 1.01 85.28 22.29 4.53 20.32 14.23 2.74 86.28 | | | | | | | | | | | | | | | |
| GPT-2 | 41.28 16.22 36.37 28.26 9.02 89.40 31.68 10.62 30.51 23.89 6.33 89.79 34.38 11.07 30.42 23.24 6.58 89.52 | | | | | | | | | | | | | | | | |
| T5 | 45.61 20.15 40.46 34.27 14.02 90.21 42.65 21.58 40.75 35.93 17.29 91.12 43.21 19.13 38.72 31.02 13.01 90.82 | | | | | | | | | | | | | | | | |
| BART | 46.50 21.18 41.15 33.42 12.91 90.37 41.96 20.63 39.58 33.69 15.77 90.91 41.85 18.20 37.59 29.09 11.83 90.69 | | | | | | | | | | | | | | | | |
Table 3: Results of conclusion generation task. "R-1/2/L" stands for Rouge-1/2/L, "B-1/2" stands for BLEU-1/2, and "BS" denotes BERT-Score.
Table 4: Accuracy of conclusion selection task.
| Type | BERT | RoBERTa | XLNet | ELECTRA |
|---------------|--------|-----------|---------|-----------|
| Categorical | 27.50 | 33.00 | 35.00 | 36.50 |
| Hypothetical | 69.12 | 75.00 | 73.53 | 77.94 |
| Disjunctive | 97.51 | 97.51 | 98.01 | 97.51 |
| Polysyllogism | 65.02 | 67.49 | 66.50 | 76.35 |
| Complex | 68.32 | 70.79 | 71.78 | 72.28 |
| All | 64.06 | 72.77 | 72.67 | 70.89 |
## 4.3 Experimental Results
The results of all models on the conclusion generation task are shown in Table 3, while those on the conclusion selection task are reported in Table 4.
For the conclusion generation task, we can see that the overall performance in terms of wordoverlap metrics (such as ROUGE and BLEU) is poor. Given that conclusions are often brief (11.84 tokens on average), these results show that the task is fairly challenging. In contrast, the BERT-Score is high, indicating that models are able to generate some semantically correct contents but cannot organize them into a reasonable conclusion. Furthermore, the pre-trained language models perform significantly better than the vanilla Transformer. We attribute this to the natural language nature of our dataset, and these results suggest that our dataset can help future research on leveraging pre-trained language models to generate logically reasonable texts. Finally, we notice that the performance on the human-written test set and the automatically generated validation set (in Table 15) is close, reflecting the good quality of GPT-3 rewriting.
For the conclusion selection task, the overall accuracy is around 70%, showing a significant deviation from perfection. In Table 4, the model for a single type of syllogism is trained solely on the corresponding type of data. Therefore, the result of type "All" is not the *average* result of the five types of syllogisms. We notice that almost all results for ELECTRA are highest, but it has only 70.89 for the type "ALL". We speculate the reason is that the ELECTRA model is not robust when trained with mixed data, and the data in different types of syllogism might confuse it. Intriguingly, the performance on categorical syllogisms is extremely bad. A potential reason is that this type of syllogisms contains more patterns (*e.g.*, categorical syllogisms have 24 valid patterns). As a comparison, the performance on hypothetical syllogisms is significantly higher since there are only three patterns. We also notice that the performance on polysyllogisms is higher than that on categorical syllogisms, despite the fact that the former is derived from the latter. We speculate the reason is that the polysyllogisms have more abundant information in premises (*i.e.*, multiple premises), which is helpful for pre-trained language models to conduct reasoning.
## 4.4 Further Analysis
We also explore the following research questions.
To save space, we report the results of the conclusion generation task, while similar trends can be observed on the conclusion selection task, which is shown in Appendix.
Effect of Automatically Constructed Data In our benchmark, the training data are automatically constructed from knowledge bases, while the test data are human annotated.6 To reveal the relationship between them, we conduct an additional experiment: we split the test set as new training, vali-
| Model | w/o Automatic data | w/ Automatic data |
|---------|-----------------------|-----------------------|
| GPT-2 | 35.35 / 11.42 / 31.75 | 42.39 / 15.92 / 38.25 |
| T5 | 39.24 / 17.32 / 34.10 | 53.47 / 26.30 / 48.37 |
| BART | 42.49 / 18.41 / 38.76 | 50.61 / 25.51 / 47.26 |
Table 5: Results (ROUGE-1/2/L) of the conclusion generation task with or without pre-training on automatic training data.
| ID | Pre-training → | Fine-tuning | R-1 | R-2 | R-L |
|------|------------------|---------------|-------|-------|-------|
| (1) | Categorical | Hypothetical | 34.36 | 12.92 | 31.53 |
| (2) | Categorical | Disjunctive | 48.92 | 27.19 | 45.93 |
| (3) | Categorical | Polysyllogism | 48.17 | 23.09 | 43.00 |
| (4) | Categorical | Complex | 43.95 | 22.46 | 42.14 |
| (5) | Polysyllogism | Categorical | 38.20 | 10.76 | 28.00 |
| (6) | Disjunctive | Complex | 42.77 | 21.34 | 40.42 |
| (7) | Hypothetical | Complex | 43.09 | 21.50 | 40.66 |
| (8) | Complex | Disjunctive | 49.53 | 28.51 | 47.10 |
| (9) | Complex | Hypothetical | 34.44 | 11.98 | 31.68 |
| (10) | None | Categorical | 35.19 | 8.88 | 26.86 |
| (11) | None | Hypothetical | 34.77 | 13.03 | 32.22 |
| (12) | None | Disjunctive | 49.07 | 27.10 | 46.14 |
| (13) | None | Polysyllogism | 46.00 | 21.18 | 41.15 |
| (14) | None | Complex | 41.96 | 20.63 | 39.58 |
| (15) | SYLLOBASE | Avicenna | 79.71 | 69.80 | 77.42 |
| (16) | None | Avicenna | 76.73 | 66.83 | 74.91 |
Table 6: Results of transfer learning. Results in **bold**
indicate improvement over non-transfer learning.
dation, and test sets with a ratio of 8:1:1 (*i.e.*, 800, 100, and 100 samples respectively). Then, we train models on the new training data and test their performance on the new test data. As a comparison, we also train models that have been pre-trained on the original training data (automatically constructed). The results are illustrated in Table 5.
It is clear to see that training on automatically constructed data is beneficial for learning manually rewritten data. This is due to the fact that the original dataset is large and contains sufficient training signals. This also validates the benefit of our dataset—the knowledge acquired from large-scale data can be transferred to more difficult problems.
Transfer Learning SYLLOBASE supports study on five types of syllogisms. We explore their internal relationships through a transfer learning experiment. Besides, we also investigate if the knowledge learned on SYLLOBASE can improve other syllogism datasets (*e.g.*, Avicenna). The results are shown in Table 6. In this experiment, we first train a BART model on one dataset (denoted as "pretraining"), then further train it on another dataset
(denoted as "fine-tuning") and report the results.
In the first group of experiments (the first two Table 7: Impact of context for conclusion generation
(ROUGE-1/2/L).
rows), we can see learning categorical syllogisms contributes less to learning hypothetical and disjunctive syllogisms. This confirms our concern that merely studying categorical syllogisms is not enough, and it proves our contribution to syllogism study. In terms of the results in rows (3)-(9), we can generally conclude that learning basic syllogisms is beneficial for learning combined syllogisms, and vice versa. One exception is the result in the row (9), and it indicates that the knowledge learned from the complex syllogisms does not help for learning hypothetical syllogisms. We speculate the reasons are: (a) complex syllogisms have significantly more patterns than hypothetical syllogisms
(42 vs. 3), and (b) the premise/conclusion of complex syllogisms is too complicated to form effective knowledge for hypothetical syllogisms. Finally, comparing the results in the row (15) and (16),
we can see models trained on SYLLOBASE have good generalizability on other syllogism datasets, demonstrating once again the value of our SYL-LOBASE on general syllogism research.
Effect of Context in Premises Existing machine reading comprehension datasets often provide a paragraph for reasoning. Inspired by these tasks, we expand the premises in our generated syllogisms by adding more informative context so as to validate the models' capability of extracting effective clues and inferring conclusions. Specifically, for each premise in the manually rewritten dataset, we ask the annotators to further collect some relevant information through search engines and add it as the context. After this step, both premises are hidden in paragraphs, which makes it more difficult to infer a correct conclusion (as shown in Table 13).
Results of both tasks shown in Table 7 indicate: (1)
Existing models are still far from tackling reasoning problems in real life; and (2) Extracting clues
(such as premises in our case) before reasoning is a promising solution for reasoning tasks, which could be explored in the future.
Appendix I shows a case study with some modelgenerated conclusions of syllogisms.
| Model | w/o Context | w/ Context |
|---------|-----------------------|----------------------|
| GPT-2 | 34.38 / 11.07 / 30.42 | 22.33 / 5.16 / 19.44 |
| T5 | 43.21 / 19.13 / 38.72 | 27.19 / 8.30 / 24.08 |
| BART | 41.85 / 18.20 / 37.59 | 25.71 / 8.02 / 22.71 |
## 5 Conclusion
In this work, we built a large-scale benchmark for natural language syllogistic reasoning. It covers five types of syllogism. The data were automatically constructed from knowledge bases by our proposed construction methods. To evaluate the models' performance on real human syllogism, we manually rewrite 1,000 samples as the test set. Experiments show that syllogistic reasoning is a very challenging task for existing pre-trained language models. Moreover, our further study indicates that existing models are even farther from tackling syllogistic reasoning in real scenarios.
## Ethical Statement
This work constructs a new benchmark for syllogistic reasoning. The main dataset is automatically constructed using entities and their relations from Wikidata and ConceptNet. The construction template is predefined and manually reviewed, so the ethical concerns are avoided. For the human rewriting process, we hire five annotators and require them to avoid any social bias and privacy issues in the rewritten material. The results are randomly shuffled and sent back to them for an ethical review.
We pay them roughly $15 per hour for annotation.
## Limitations
We build a new benchmark for syllogistic reasoning. The limitations are mainly in the experiments part: (1) Due to the limited human resources, our test set is quite small, which may not support training large models directly. (2) We evaluate all models by comparing their predictions with the groundtruth conclusions, but human performance is not evaluated. As a benchmark, it may be better to provide human performance and show the performance gap of existing models. (3) We have not tested the performance of pre-trained models in terms of logical correctness. This kind of automatic metrics has been rarely studied, which can be a potential direction of our future work.
## References
Zeinab Aghahadi and Alireza Talebpour. 2022. Avicenna: a challenge dataset for natural language generation toward commonsense syllogistic reasoning.
Journal of Applied Non-Classical Logics, 0(0):1–17.
Johan Bos and Katja Markert. 2005. Recognising textual entailment with logical inference. In
HLT/EMNLP 2005, Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference, 6-8 October 2005, Vancouver, British Columbia, Canada, pages 628–635. The Association for Computational Linguistics.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In *Proceedings of the 2015 Conference on* Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 632–642. The Association for Computational Linguistics.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.
2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33:
Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: pretraining text encoders as discriminators rather than generators. In *8th International Conference on* Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Irving Copi, Carl Cohen, and Victor Rodych. 2016. *Introduction to logic*. Routledge.
Ido Dagan, Oren Glickman, and Bernardo Magnini.
2005. The PASCAL recognising textual entailment challenge. In *Machine Learning Challenges, Evaluating Predictive Uncertainty, Visual Object Classification and Recognizing Textual Entailment, First* PASCAL Machine Learning Challenges Workshop, MLCW 2005, Southampton, UK, April 11-13, 2005, Revised Selected Papers, volume 3944 of *Lecture* Notes in Computer Science, pages 177–190. Springer.
Hannah Dames, Clemens Schiebel, and Marco Ragni.
2020. The role of feedback and post-error adaptations in reasoning. In Proceedings of the 42th Annual Meeting of the Cognitive Science Society - Developing a Mind: Learning in Humans, Animals, and Machines, CogSci 2020, virtual, July 29 - August 1, 2020. cognitivesciencesociety.org.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA,
June 2-7, 2019, Volume 1 (Long and Short Papers),
pages 4171–4186. Association for Computational Linguistics.
Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander H. Miller, Kurt Shuster, Jack Urbanek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, Shrimai Prabhumoye, Alan W. Black, Alexander I. Rudnicky, Jason Williams, Joelle Pineau, Mikhail S. Burtsev, and Jason Weston. 2019. The second conversational intelligence challenge (convai2).
CoRR, abs/1902.00098.
Bosheng Ding, Chengwei Qin, Linlin Liu, Lidong Bing, Shafiq R. Joty, and Boyang Li. 2022. Is GPT-3 a good data annotator? *CoRR*, abs/2212.10450.
Tiansi Dong, Chengjiang Li, Christian Bauckhage, Juanzi Li, Stefan Wrobel, and Armin B. Cremers. 2020. Learning syllogism with Euler neuralnetworks. *CoRR*, abs/2007.07320.
Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Cosmos QA: machine reading comprehension with contextual commonsense reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 2391–
2401. Association for Computational Linguistics.
William Huitt and John Hummel. 2003. Piaget's theory of cognitive development. Educational psychology interactive, 3(2).
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur P. Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. *Trans. Assoc. Comput. Linguistics*, 7:452–
466.
Douglas B. Lenat, Ramanathan V. Guha, Karen Pittman, Dexter Pratt, and Mary Shepherd. 1990. CYC: toward programs with common sense. *Commun. ACM*,
33(8):30–49.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7871–7880.
Association for Computational Linguistics.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain.
Association for Computational Linguistics.
Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang. 2020. Logiqa: A challenge dataset for machine reading comprehension with logical reasoning. In *Proceedings of the TwentyNinth International Joint Conference on Artificial* Intelligence, IJCAI 2020, pages 3622–3628. ijcai.org.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
Bill MacCartney and Christopher D. Manning. 2009.
An extended model of natural logic. In *Proceedings* of the Eight International Conference on Computational Semantics, IWCS 2009, Tilburg, The Netherlands, January 7-9, 2009, pages 140–156. Association for Computational Linguistics.
George A. Miller. 1995. Wordnet: A lexical database for English. *Commun. ACM*, 38(11):39–41.
Allen Newell and Herbert A. Simon. 1956. The logic theory machine-a complex information processing system. *IRE Trans. Inf. Theory*, 2(3):61–79.
Feng Nie, Jin-Ge Yao, Jinpeng Wang, Rong Pan, and Chin-Yew Lin. 2019. A simple recipe towards reducing hallucination in neural surface realisation. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 2673–2679. Association for Computational Linguistics.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, July 6-12, 2002, Philadelphia, PA, USA, pages 311–318. ACL.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Z.
Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 8024–8035.
Shiya Peng, Lu Liu, Chang Liu, and Dong Yu. 2020. Exploring reasoning schemes: A dataset for syllogism figure identification. In *Chinese Lexical Semantics*
- 21st Workshop, CLSW 2020, Hong Kong, China, May 28-30, 2020, Revised Selected Papers, volume 12278 of *Lecture Notes in Computer Science*, pages 445–451. Springer.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI
blog, 1(8):9.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21:140:1–140:67.
Robyn Speer, Joshua Chin, and Catherine Havasi. 2017.
Conceptnet 5.5: An open multilingual graph of general knowledge. In *Proceedings of the Thirty-First* AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, pages 4444–4451. AAAI Press.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008.
Denny Vrandecic and Markus Krötzsch. 2014. Wikidata: a free collaborative knowledgebase. Commun.
ACM, 57(10):78–85.
Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACLHLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 1112–1122.
Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers:
State-of-the-art natural language processing. *CoRR*,
abs/1910.03771.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019.
Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC,
Canada, pages 5754–5764.
Weihao Yu, Zihang Jiang, Yanfei Dong, and Jiashi Feng.
2020. Reclor: A reading comprehension dataset requiring logical reasoning. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q.
Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with BERT. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
![11_image_0.png](11_image_0.png)
Table 8: An example of paraphrasing process.
Original premise of a hypothetical syllogism Premise: Something that might happen as a consequence of attending a classical concert is going to sleep.
Retrieval and manual check Premise: I probably spend more concert time asleep than awake.
Rewriting Premise: When attending classical concerts, people probably spend more concert time asleep than awake.
## A Patterns In Syllogism
We list all valid patterns in categorical (shown in Table 9), hypothetical (shown in Table 10), and complex syllogisms (shown in Table 11).
## B Relations From Wikidata And Conceptnet
We list all relations that are used for constructing syllogisms in Table 12. For Wikidata, we use 16 relations, which are all used for constructing categorical syllogisms. As for ConceptNet, we use 15 relations, and they are used for constructing categorical, hypothetical, and disjunctive syllogisms.
## C Gpt-3 Rewriting
GPT-3 is a well-known pre-trained language model, which has demonstrated impressive few-shot performance on a wide range of natural language processing (NLP) tasks. Recently, researchers has tried to use GPT-3 to annotate data for NLP
tasks (Ding et al., 2022). Inspired by this, we choose GPT-3 to complete the rewriting task. In our case, we use a prompt to ask GPT-3 to change the expression of the syllogism but keep its original meaning and pattern. We also append some humanrewritten examples in the prompt as few-shot input.
The generated results have good quality in fluency, diversity, and logic, which are suitable for training models. The prompts used for rewriting are listed in Table 16-20.
## D Human Rewriting
First, 500 samples are randomly collected from each type of syllogism, respectively. Then, we examine the semantics and filter out illogical syllogisms. Next, for the remaining ones, we correct the grammatical problems (if any). Finally, for each premise/conclusion, the language is painstakingly paraphrased. The paraphrasing process is illustrated in Algorithm 1, and an example is given in Table 8. After rewriting, the sample is more diverse, fluent, and closer to real human language.
## E Annotation Of Automatic Data
To evaluate the quality of our automatically generated data, we conduct a human annotation for 100 random samples (20 for each type of syllogisms). The annotators are asked to label whether the samples have grammatical faults and incorrect logic. The overall accuracy is 73%. Concretely, the accuracy is 70%, 90%, 70%, 65%, and 70%
for categorical syllogisms, hypothetical syllogisms, disjunctive syllogisms, polysyllogisms, and complex syllogisms, respectively. This result reflects:
(1) Our automatic data have fairly good quality.
Our experiments in Section 4.4 also validates this.
(2) The polysyllogism is hard to construct as it concerns multiple syllogisms.
## F Distractor Construction In Conclusion Selection Task
In the conclusion selection task (introduced in Section 4.1), we mix the correct conclusion with three distractors. Basically, these distractors are generated from the ground-truth conclusion by changing its quantifier, adding negative words, or exchanging its subject and object. Specifically, for different kinds of syllogisms, we show the distractor generation process by some examples.
Categorical Syllogism For a syllogism as follows:
Premise 1: All m are p. Premise 2: All s are m.
Conclusion: All s are p.
| Pattern | Figure | Major premise | Minor premise | Conclusion |
|------------------|----------|------------------|------------------|------------------|
| Barbara (AAA) | 1 | All m are p | All s are m | All s are p |
| Barbari (AAI*) | 1 | All m are p | All s are m | Some s are p |
| Celarent (EAE) | 1 | No m is p | All s are m | No s is p |
| Celaront (EAO*) | 1 | No m is p | All s are m | Some s are not p |
| Darii (AII) | 1 | All m are p | Some s are m | Some s are p |
| Ferio (EIO) | 1 | No m is p | Some s are m | Some s are not p |
| Camestres (AEE) | 2 | All p are m | No s is m | No s is p |
| Camestros (AEO*) | 2 | All p are m | No s is m | Some s are not p |
| Cesare (EAE) | 2 | No p is m | All s are m | No s is p |
| Cesaro (EAO*) | 2 | No p is m | All s are m | Some s are not p |
| Baroco (AOO) | 2 | All p are m | Some s are not m | Some s are not p |
| Festino (EIO) | 2 | No p is m | Some s are m | Some s are not p |
| Darapti (AAI) | 3 | All m are p | All m are s | Some s are p |
| Felapton (EAO) | 3 | No m is p | All m are s | Some s are not p |
| Datisi (AII) | 3 | All m are p | Some m are s | Some s are p |
| Disamis (IAI) | 3 | Some m are p | All m are s | Some s are p |
| Bocardo (OAO) | 3 | Some m are not p | All m are s | Some s are not p |
| Ferison (EIO) | 3 | No m is p | Some m are s | Some s are not p |
| Bamalip (AAI) | 4 | All p are m | All m are s | Some s are p |
| Calemes (AEE) | 4 | All p are m | No m is s | No s is p |
| Calemos (AEO*) | 4 | All p are m | No m is s | Some s ara not p |
| Fesapo (EAO) | 4 | No p is m | All m are s | Some s are not p |
| Dimatis (IAI) | 4 | Some p are m | All m are s | Some s are p |
| Fresison (EIO) | 4 | No p is m | Some m are s | Some s are not p |
Table 10: Three valid patterns in hypothetical syllogism. P, Q, and R are three propositions.
| Original hypothetical syllogism Premise 1: If P is true, then Q is true. Premise 2: If Q is true, then R is true. Conclusion: If P is true, then R is true. Modus ponens Premise 1: If P is true, then Q is true. Premise 2: P is true. Conclusion: Q is true. Modus tollens Premise 1: If P is true, then Q is true. Premise 2: Q is not true. Conclusion: P is not true. |
|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
We can generate distractors of the conclusion as:
(1) Some s are p. (*modify quantifiers*)
(2) All s are not p. (*add negative words*)
(3) All p are s. (*exchange subjects and predicates*)
(4) Some p are not s. (*others*)
## Hypothetical Syllogism For A Syllogism As Follows:
Premise 1: If P is true, then Q is true. Premise 2: If Q is true, then R is true.
Conclusion: If P is true, then R is true.
We can generate distractors of the conclusion as:
(1) If R is true, then P is true.
(*exchange propositions*)
(2) If Q is true, then P is true.
(*exchange propositions*)
(3) If R is true, then Q is true.
(*exchange propositions*)
(4) P is true. (*remove a proposition*) (5) Q is true. (*remove a proposition*) (6) R is true. (*remove a proposition*)
(7) If P is true, then R is not true.
(*add negative words*)
Disjunctive Syllogism For a syllogism as follows:
Premise 1: P is true or Q is true; Premise 2: P is not true; Conclusion: Q is true.
Table 11: 42 valid patterns in complex syllogisms.
Table 12: Relations used for syllogisms construction.
| Id | Premise 1 | Premise 2 | Conclusion | Type | Used Relations |
|--------------------------|------------------------------------------------|-------------------|--------------|----------------------|----------------------------------|
| 0 | ¬ p ∨ q | p | q | | |
| 1 | (p ∧ q) ∨ r | ¬ p ∨ ¬ q | r | | |
| 2 | (p ∨ q) ∨ r | ¬ p ∧ ¬ q | r | | |
| 3 | p ∨ ¬ q | ¬ p | ¬ q | | |
| 4 | p ∨ (q ∧ r) | ¬ p ∧ q | r | | |
| 5 | p ∨ (q ∧ r) | ¬ p ∧ r | q | | |
| 6 | p ∨ (q ∨ r) | ¬ p ∧ ¬ r | q | | |
| 7 | ¬ p ∨ q | ¬ q | ¬ p | | |
| 8 | p ∨ (q ∨ r) | ¬ q ∧ ¬ r | p | | |
| 9 | (p ∧ q) ∨ r | p ∧ ¬ r | q | | |
| 10 | (p ∧ q) ∨ r | q ∧ ¬ r | p | | |
| 11 | p ∨ ¬ q | q | p | | |
| 12 | p ∨ (q ∧ r) | ¬ q ∨ ¬ r | p | | |
| 13 | ¬ q → ¬ p | ¬ q | ¬ p | | |
| 14 | (p ∨ q) → r | p ∨ q | r | | |
| 15 | (p ∧ q) → r | p ∧ q | r | | |
| 16 | p → (q ∨ r) | p | q ∨ r | | |
| 17 | p → (q ∨ r) | p ∧ ¬ q | r | | |
| 18 | p → (q ∨ r) | p ∧ ¬ r | q | | |
| 19 | p → (q ∧ r) | p | q ∧ r | | |
| 20 | p → (q ∧ r) | p ∧ q | r | | |
| 21 | p → (q ∧ r) | p ∧ r | q | | |
| 22 | (p ∨ q) → r | ¬ r | ¬ (p ∨ q) | | |
| 23 | (p ∨ q) → r | ¬ p ∧ ¬ r | ¬ q | | |
| 24 | (p ∨ q) → r | ¬ q ∧ ¬ r | ¬ p | | |
| 25 | (p ∧ q) → r | ¬ r | ¬ (p ∧ q) | | |
| 26 | (p ∧ q) → r | p ∧ ¬ r | ¬ q | | |
| 27 | (p ∧ q) → r | q ∧ ¬ r | ¬ p | | |
| 28 | p → (q ∨ r) | ¬ q ∧ ¬ r | ¬ p | | |
| 29 | p → (q ∧ r) | ¬ q ∨ ¬ r | ¬ p | | |
| 30 | ¬ q → ¬ p | ¬ r → ¬ q | ¬ r → ¬ p | | |
| 31 | (p ∨ q) → r | r → s | (p ∨ q) → s | | |
| 32 | (p ∨ q) → r | (r → s) ∧ p | s | | |
| 33 | (p ∨ q) → r | (r → s) ∧ q | s | | |
| 34 | (p ∧ q) → r | r → s | (p ∧ q) → s | | |
| 35 | (p ∧ q) → r | (r → s) ∧ p ∧ q | s | | |
| 36 | p →(q ∨ r) | (q ∨ r) → s | p → s | | |
| 37 | p →(q ∧ r) | (q ∧ r) → s | p → s | | |
| 38 | p → q | q → (r ∨ s) | p → (r ∨ s) | | |
| 39 | p → q | (q → (r ∨ s)) ∧ p | r ∨ s | | |
| 40 | p → q | q → (r ∧ s) | p → (r ∧ s) | | |
| 41 | p → q | (q → (r ∧ s)) ∧ p | r ∧ s | Wikidata Categorical | academic degree subclass (human) |
| Categorical | ethnic subclass (human) | | | | |
| Categorical | field of work subclass (human) | | | | |
| Categorical | genre subclass (human) | | | | |
| Categorical | occupation subclass (human) | | | | |
| Categorical | language subclass (human) | | | | |
| Categorical | instance of (human) | | | | |
| Categorical | instance of (taxon) | | | | |
| Categorical | taxon subclass (taxon) | | | | |
| Categorical | film subclass (film) | | | | |
| Categorical | chemical compound subclass (chemical compound) | | | | |
| Categorical | administrative territorial subclass (administrative territorial) | | | | |
| Categorical | architectural structure subclass (architectural structure) | | | | |
| Categorical | astronomical object subclass (astronomical object) | | | | |
| Categorical | occurrence subclass (occurrence) | | | | |
| Categorical | thoroughfare subclass (thoroughfare) | | | | |
| ConceptNet Categorical / | /r/CapableOf | | | | |
| Disjunctive Categorical | / | /r/HasProperty | | | |
| Disjunctive Categorical | / | /r/Antonym | | | |
| Disjunctive Categorical | / | /r/DistinctFrom | | | |
| Disjunctive Disjunctive | /r/Part of | | | | |
| Disjunctive | /r/HasA | | | | |
| Disjunctive | /r/UsedFor | | | | |
| Disjunctive | /r/SymbolOf | | | | |
| Disjunctive | /r/MannerOf | | | | |
| Disjunctive | /r/MadeOf | | | | |
| Hypothetical | /r/Causes | | | | |
| Hypothetical | /r/HasSubevent | | | | |
| Hypothetical | /r/HasPrerequisite | | | | |
| Hypothetical | /r/MotivatedByGoal | | | | |
| Hypothetical | /r/CausesDesire | | | | |
We can generate distractors of the conclusion as:
(1) Q is not true. *(add negative words)* (2) P is true. *(change a proposition)*
(3) P is true or Q is not true. *(add a proposition)*
Polysyllogism Syllogism This kind of syllogism is built on several categorical syllogisms. Therefore, we can use the same distractor construction method as categorical syllogisms.
Complex Syllogism This kind of syllogism is constructed by adding one or model logical connectives to the original premises and conclusions.
Therefore, to generate the distractors, we can (1)
add or remove the negative connective (i.e., not)
from the original proposition; or (2) replace the connectives in the original proposition by others
(e.g., and → or). For example, given a syllogism as follows:
Premise 1: If P is true or if Q is true, then R is true; Premise 2: If R is true, then S is true; Conclusion: If P is true or if Q is true, then S is true.
Table 13: An example of syllogism with context. The vanilla premises are in orange.
Premise 1: Carbon dioxide is a chemical compound composed of two oxygen atoms covalently bonded to a single carbon atom. CO2 exists in the earth's atmosphere as a gas and in its solid state it known as dry ice.
Premise 2: In a scientific context, "pure" denotes a single type of material. Ostensibly, compounds contain more than one type of material. Therefore, chemical compounds are considered pure substances. Pure compounds are created when elements combine permanently, forming one substance.
Conclusion: Pure substances include carbon dioxide.
![14_image_0.png](14_image_0.png)
We can generate distractors of the conclusion as:
(1) If P is true or if Q is true, then S is not true.
(add negative words)
(2) If P is true or if S is true, then Q is true.
(change a proposition)
(3) If P is true and if S is true, then Q is true.
(change the logical connective words)
## G Dataset Statistics
The statistics of our SYLLOBASE is given in Table 14.
## H Implementation Details
We use PyTorch (Paszke et al., 2019) and Transformers (Wolf et al., 2019) to implement all models.
They are trained on 8 Tesla V100 GPUs with 32GB
memory. All hyperparameters (*e.g.*, learning rate)
are tuned according to the performance (BLEU1/Accuracy) on the validation set.
In the conclusion generation task, for the decoder-only model GPT-2, the major premise and minor premise are concatenated as a long sequence and fed into the model (decoder) to generate the conclusion. For the encoder-decoder structure
(Transformer, T5, and BART), the two premises are concatenated and input to the encoder, while the conclusion is input to the decoder and used for generation. The maximum generation length is set as 128. The training batch size is set as 32. The AdamW (Loshchilov and Hutter, 2019) optimizer is applied with a learning rate of 5e-5. The learning rate decay mechanism is applied. All models are trained by 10 epochs, and the total training time is around 1.22 hours.
In the conclusion selection task, we concatenate two premises as one sequence, use the conclusion as another sequence, and transform them into the text-pair input format, which is commonly supported by pre-trained language models. For example, the input for BERT is: X =
[CLS]P1P2[SEP]C[SEP]. The representation of
[CLS] is used for option selection. The maximum sequence length is set as 256. The training batch size is set as 64. A learning rate of 2e-5 with decay mechanism is used. The optimizer is also AdamW.
All models are trained by ten epochs, and the total training time is around 3.29 hours.
## I Case Study
We show some results of BART in conclusion generation task to make a case study. We have listed a good case and a bad case for each type of syllogism.
They are shown in Table 21. We can see: (1) The model can generate conclusions that are different from the ground-truth but are also correct in logic.
This indicates that pre-trained language models can indeed learn some logic reasoning skills from syllogisms rather than merely "remembering" some fixed patterns. (2) Syllogistic reasoning is still difficult for existing models, and the errors stem from several different aspects. As shown in the hypothetical syllogism, the model generates a semantically correct conclusion, but it is irrelevant to the premises. This problem is identified as "hallucination" of pre-trained language models (Nie et al.,
2019), *i.e.*, the model cannot decide whether to generate a conclusion based on its learned parameters or the given context. We believe our dataset can contribute to the study of hallucinations in logical reasoning. As for the last case, the model generates a conclusion opposite to the ground-truth. This indicates that existing models may need additional reasoning modules to conduct complex reasoning problems.
| Conclusion Generation | Training | Validation | Test (w/o context) | Test (w/ context) |
|----------------------------------------|-------------|--------------|----------------------|---------------------|
| # Premises-Conclusion Pair | 40,000 | 10,000 | 1,000 | 1,000 |
| Avg./Max. # Tokens in Premises | 33.73 / 115 | 33.83 / 105 | 27.59 / 75 | 183.92 / 726 |
| Avg./Max. # Tokens in Conclusion | 11.84 / 66 | 11.91 / 62 | 8.5 / 21 | 8.5 / 21 |
| Conclusion Selection | Training | Validation | Test (w/o context) | Test (w/ context) |
| # Premises-Question Pair | 40,000 | 10,000 | 1,000 | 1,000 |
| Avg./Max. # Tokens in Premises | 33.73 / 115 | 33.83 / 105 | 27.59 / 75 | 183.92 / 726 |
| Avg./Max. # Tokens in Question | 12.39 / 16 | 12.39 / 16 | 12.38 / 16 | 12.38 / 16 |
| Avg./Max. # Tokens in Candidate Answer | 11.53 / 71 | 11.50 / 64 | 9.41 / 26 | 9.41 / 26 |
Table 14: Statistics of SYLLOBASE.
Table 15: Results of conclusion generation task on validation set. "R-1/2/L" stands for Rouge-1/2/L, "B-1/2" stands for BLEU-1/2, and "BS" denotes BERT-Score.
| R-1 | R-2 | R-L | B-1 | B-2 | BS | R-1 | R-2 | R-L | B-1 | B-2 | BS | R-1 | R-2 | R-L | B-1 | B-2 | BS |
|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------|----------------------------------------------------------|-------|------|-------|-------|-------|-------|-------|------|-------|-------|-------|-------|-------|------|
| Model | Categorical | Hypothetical | Disjunctive | | | | | | | | | | | | | | |
| Transformer 16.85 3.63 14.95 7.38 | 1.4 | 83.09 24.02 5.67 22.29 19.36 4.54 87.32 16.74 2.72 15.47 8.99 | 1.34 83.95 | | | | | | | | | | | | | | |
| GPT-2 | 30.36 8.68 27.41 26.87 7.55 89.05 31.51 | 9.8 | 28.96 26.06 7.61 90.68 32.11 9.86 29.53 24.91 7.21 90.34 | | | | | | | | | | | | | | |
| T5 | 34.63 12.12 31.53 31.65 11.21 89.68 36.99 14.92 34.69 32.7 13.04 91.52 40.75 18.2 38.36 35.05 16.24 91.87 | | | | | | | | | | | | | | | | |
| BART | 35 | 12.27 31.69 30.76 10.7 89.78 36.44 14.84 34.32 32.33 13.09 91.56 40.62 18.13 38.26 34.67 15.83 91.79 | | | | | | | | | | | | | | | |
| Model | Polysyllogism | Complex | All | | | | | | | | | | | | | | |
| Transformer 31.23 10.28 29.16 17.77 5.15 87.36 20.36 4.71 19.11 10.14 1.99 85.61 24.59 6.25 22.65 18.68 4.48 87.22 GPT-2 49.23 24.32 46.22 41.37 19.3 91.87 36.25 14.18 33.83 27.87 9.39 90.72 36.16 13.63 33.52 29.4 10.1 90.59 T5 55.79 30.72 53.01 51.99 28.74 92.98 43.93 22.84 41.81 38.01 19.13 91.93 42.86 19.99 40.23 37.94 17.51 91.69 BART 56.49 31.23 53.52 51.78 28.79 93.14 45.21 23.99 42.81 38.35 19.6 92.14 42.85 20.26 40.17 37.3 17.44 91.75 | | | | | | | | | | | | | | | | | |
## Table 16: Gpt-3 Rewriting Prompts For Categorical Syllogisms.
Rewrite the following sentences to standard English. Keep the meaning and pattern of the original sentences, but change the expression of the sentences.
pattern: All m are p. Some s are m. [Therefore], some s are p.
original sentences: All sugar are carbohydrate. Some decay teeth are sugar. [Therefore], some decay teeth are carbohydrate.
rewritten sentences: Sugars are carbohydrates. Somethings that decay your teeth are sugary foods and drinks.
[Therefore], carbohydrate eating can sometimes promote tooth decay.
pattern: Some p are m. All m are s. [Therefore], some s are p.
original sentences: Some visual art are art of painting. All art of painting are activity. [Therefore], some activity are visual art.
rewritten sentences: The visual arts are art forms that create works that are primarily visual in nature, such as painting. Painting is the practice of applying paint, pigment, color or other medium to a solid surface. [Therefore], creativily activties are used to develop new artistic works, such as visual art.
pattern: No p is m. All m are s. [Therefore], some s are not p.
original sentences: No animal is mineral. All mineral are solid. [Therefore], some solid are not animal. rewritten sentences: Evidently, animal and vegetables are living, minerals not. A mineral is a naturally occurring inorganic solid. [Therefore], some substances are solids and are not living beings.
pattern: No p is m. All s are m. [Therefore], No s is p. original sentences: No animal is plant. All rose are plant. [Therefore], no rose is animal.
rewritten sentences: Traditionally, Animals cannot produce their own energy which not like plants. A rose is a woody perennial flowering plant of the genus Rosa. [Therefore], roses and animals are extremely different species in nature.
pattern: No m is p. All s are m. [Therefore], some s are not p. original sentences: No art is clumsiness. All sculpture are art. [Therefore], Some sculpture are not clumsiness.
rewritten sentences: Clumsiness is the lack of gracefulness or skill, whereas Art encompasses a diverse range of skill and techniques. Sculptures are artworks crafted with various media and materials. [Therefore], Sculpture makers are often talented at expressing creativity and not clumsy.
Rewrite the following sentences to standard English. Keep the meaning and pattern of the original sentences, but change the expression of the sentences.
pattern: If P is true, then Q is true. If Q is true, then R is true. [Therefore], if P is true, then R is true.
original sentences: Something you might do while dating is kiss. Something that might happen when you kiss someone is they smile. [Therefore], something that might happen when you dating is they smile.
rewritten sentences: When you are dating your beloved, you might have a sweet kiss. When you kiss your partner, you may find yourself smiling. [Therefore], when you are dating, you may find that you always have a smile on your face.
pattern: If P is true, then Q is true. If Q is true, then R is true. [Therefore], if P is true, then R is true.
original sentences: The effect of diminishing your own hunger is eating. The effect of eating is a full stomach.
[Therefore], the effect of diminishing your own hunger is a full stomach.
rewritten sentences: We are all aware that in order to reduce our hunger, we must consume food. Having a belly stuffed with comforting food can feel like a warm hug from the inside. [Therefore], We may feel full after we have satisfied our appetite.
pattern: If P is true, then Q is true. If Q is true, then R is true. [Therefore], if P is true, then R is true.
original sentences: Because you want to enjoy yourself, you would listen to music. Because you want to listen to music, you would hear singing. [Therefore], because you want to enjoy yourself, you would hear singing.
rewritten sentences: If you want to enjoy yourself after a long day of work, you may listen to music. You want to hear your favorite musician sing because you appreciate music. [Therefore], because you want to have fun, you want to hear some singing.
pattern: If P is true, then Q is true. If Q is true, then R is true. [Therefore], if P is true, then R is true.
original sentences: attending a lecture requires you to listen. If you want to listen then you should not talk so much yourself. [Therefore], If you want to attending a lecture then you should not talk so much yourself.
rewritten sentences: If you are in a lecture, you should focus your attention on listening and be mindful of disrupting the session due to speaking. If you want to devote your time to following the lecture without distractions, you must be aware of reducing your own babble. [Therefore], If you desire to remain in the lecture, it is key to dial down your chatter.
Table 18: GPT-3 rewriting prompts for disjunctive syllogisms.
Rewrite the following sentences to standard English. Keep the meaning and pattern of the original sentences, but change the expression of the sentences.
pattern: P is true or Q is true. P is not true. [Therefore], Q is true.
original sentences: Is the meal hot or cool. The meal are not hot. [Therefore], the meal are cool.
rewritten sentences: The meal is warm or cold when the man gets home from work. The food is not warm when the man stays late at work. [Therefore], the meal is cold when the man comes home late.
pattern: P is true or Q is true. P is not true. [Therefore], Q is true.
original sentences: The ocean is gas or liquid. The ocean is not gas. [Therefore], the ocean is liquid.
rewritten sentences: The ocean can exist in either liquid or gaseous form. The ocean is not gaseous. [Therefore],
oceans do not exist in a gaseous condition, as far as we know.
pattern: P is true or Q is true. P is not true. [Therefore], Q is true.
original sentences: Memories are good or sad. Memories are not good. [Therefore], memories are sad. rewritten sentences: People like being engrossed in memories, whether good or sad. Old memories are not always pleasant. [Therefore], memories of the past may cause sadness.
pattern: P is true or Q is true. P is not true. [Therefore], Q is true.
original sentences: You can use an audience to performing in front of or boost your ego. You can not use an audience to boost your ego. [Therefore], you can use an audience to performing in front of.
rewritten sentences: When you're in front of an audience, you can put on a show or increase your self-esteem. You cannot exaggerate your ego in front of an audience. [Therefore], you can give a performance in front of an audience.
pattern: P is true or Q is true. P is not true, [Therefore], Q is true. original sentences: My flowers are ugly or pretty. My flowers are not ugly. [Therefore], My flowers are pretty.
rewritten sentences: The blooms in my garden are either comely or unappealing. The blooms in my garden are not unsightly. Therefore, These flowers are indeed attractive.
Rewrite the following sentences to standard English. Keep the meaning of the original sentences, but change the expression of the sentences.
original sentences: No hypothesis is fact. Some proposition are hypothesis. Some proposition are not fact. All proposition are abstract object. [Therefore], some abstract object are not fact.
rewritten sentences: A hypothesis is a proposed explanation that differs from fact. Some propositions are hypotheses. Some propositions are proven not to be facts. Every proposition is an abstract object. [Therefore], some abstract objects do not exist as facts.
original sentences: Applied science is science. No Science is art. Human science is science. Some Behavioral genetics are not human science. Behaviour genetics is psychology. Genetics is biology. [Therefore], some applied science are not biology.
rewritten sentences: Applied science is science in every sense of the word. Science and art are two distinct forms of scholarship. Human science is a branch of science. Behavioral genetics does not involve any human science.
Behavioral genetics is a branch of psychology. Genetics is the study of biology. [Therefore], applied science encompasses more than just biology.
original sentences: All feline are animal. no plant are animal. All flowering plants are plants. All tiger are genus Panthera. [Therefore], no Panthera are flowering plants.
rewritten sentences: A feline is an animal belonging to the cat family. There are many obvious differences between plants and animals. Flowering plants are plants that produce flowers and fruits. The tiger is a member of the genus Panthera. [Therefore], Panthera is different from flowering plants.
original sentences: All medication are drug. All hormone are medication. All plant hormone are hormone. Some plant hormone are gibberellins. All drug are useful. All gibberellins are carboxylic acid. [Therefore], Some carboxylic acid are useful.
rewritten sentences: A medication is a type of drug. Hormones are a type of medication. Plant hormones are a subset of hormones. Gibberellins are one type of plant hormone. All drugs have some sort of usefulness.
Gibberellins are carboxylic acids. [Therefore], some carboxylic acids can be useful.
Table 20: GPT-3 rewriting prompts for complex syllogisms.
Rewrite the following sentences to standard English. Keep the meaning of the original sentences, but change the expression of the sentences.
original sentences: If you want to eat then you should open the refrigerator and open the chiller. It is eat, and it is open the refrigerator. [Therefore], it is open the chiller.
rewritten sentences: If you want to eat something after working out, open the refrigerator and make sure the chiller is in good working order. When you are hungry and want to eat anything, you may open the refrigerator to find some food. [Therefore], you must ensure that the chiller is operational.
original sentences: My toes are warmth and affectionateness, or My toes are cold. My toes are not warmth or not affectionateness. [Therefore], my toes are cold.
rewritten sentences: The temperature here varies greatly, my toes can be warm and friendly, or they might be freezing. Because of the low temperature, my toes are not warm or loving. [Therefore], my toes are freezing.
original sentences: Neon lights are dark, or Neon lights are bright and burnished. Neon lights are not dark, and Neon lights are burnished. [Therefore], Neon lights are bright.
rewritten sentences: Neon lights might be dim at times, but most of the time they are dazzling and burnished at night. Neon lights are not dim, and they shine burnished at night. [Therefore], the neon lights are really bright at night.
original sentences: Something you might do while not getting exercise is not get tired. Sometimes you don't getting exercise. [Therefore], You don't get tired.
rewritten sentences: If you do not exercise, you might remain energetic. When you don't workout occasionally, you will not become exhausted. [Therefore], If you are not exercising you will not get tired.
Table 21: Examples of conclusions generated by BART. The good results and bad results are in different colors.
Categorical Syllogism Premise 1: Carbon dioxide is a chemical compound.
Premise 2: Chemical compounds are considered pure substances.
Conclusion: Pure substances include carbon dioxide. Prediction: Some completely natural substances include carbon dioxide.
Categorical Syllogism Premise 1: All machines are not human.
Premise 2: Every truck crane is a type of apparatus.
Conclusion: All truck cranes are not human. Prediction: Some apparatus are not truck cranes.
Hypothetical Syllogism Premise 1: What you might do in class is sleep. Premise 2: A dream is something that can happen to you while you retire for the night. Conclusion: You're most likely daydreaming in class.
Prediction: When you attend class, you may have a dream.
Hypothetical Syllogism Premise 1: When you make progress in your project, you may want to celebrate.
Premise 2: Having a party is a good choice if you want to celebrate. Conclusion: You may want to have a party if you achieve great progress in your project.
Prediction: Celebrating has the effect of bringing people together.
Disjunctive Syllogism Premise 1: Newspapers are generally published daily or weekly.
Premise 2: Some newspapers are not published weekly. Conclusion: Some newspapers are daily newspapers.
Prediction: Some newspapers are generally published daily.
Disjunctive Syllogism Premise 1: There is still controversy about whether there are ghosts in the world.
Premise 2: The man is an atheist. Conclusion: The man thinks that there is no ghost in the world.
Prediction: There are ghosts in the world.
Polysyllogism Premise 1: Some movies are not cartoon movies.
Premise 2: Science fiction animations belong to animated films. Premise 3: Remake films are also films.
Conclusion: Some remakes are out of scope of science fiction cartoons. Prediction: It exists some remakes are out of scope of science fiction cartoon.
Polysyllogism Premise 1: All Pierce Brosnan are human.
Premise 2: No human is a beast.
Premise 3: Some Pierce Brosnan are not machines. Conclusion: Some machines are not beasts.
Prediction: Some Pierce Brosnan are not beasts.
Complex Syllogism Premise 1: Fencing requires wearing a protective mask and gloves.
Premise 2: The woman is fencing and wearing a wiremesh mask. Conclusion: The woman is also wearing gloves.
Prediction: The woman may be wearing gloves.
Complex Syllogism Premise 1: If Jack has computer skills and programming knowledge, he could write programs.
Premise 2: Jack cannot write computer programs, but he can use computers. Conclusion: Jack does not have programming knowledge. Prediction: He can write computer programs.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section Limitation (Page 9).
✓ A2. Did you discuss any potential risks of your work?
We have an ethical statement section.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Introduction (Page 1).
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
Footnotes in Section 2.2 and Section 3.1 and the References section.
B1. Did you cite the creators of artifacts you used?
No response.
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No, because the licenses are well known, which allow use of the artifacts in work like ours.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No, the use of existing artifacts is only for research, not for commercial use.
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Section 3 Data Construction (Page 3).
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
We note the language and domains in section 3.1 Data Source (Page 3).
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 3 Data Construction (Page 3) and Appendix G Data Statistics (Page 15).
## C ✓ **Did You Run Computational Experiments?** Appendix H Implementation Details (Page 15).
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? No response.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix H Implementation Details (Page 15).
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4.3 Experimental Results.
✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
We used the transformers package for training baseline models and the ROUGE, BLEU, BERT-Score packages for evaluation. Since we only used the common functions/interfaces well known in the NLP,
we did not discuss the details.
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Appendix D Human Rewriting.
✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No, the human annotators are required to just rewrite the automatic samples, which is unnecessary to give a instruction.
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Section Ethical Statement.
✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No, the human annotators are required to just rewrite the automatic data, which is unnecessary to give a instruction.
✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Our benchmark has no social impacts and we just use some open knowledge bases, like ConceptNet and Wikidata. There is no need to get the approval of an ethics review board.
✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No, we required the annotators to avoid any social bias and privacy issues in the rewritten material, which is discussed in the Section Ethical Statement. |
clark-schuler-2023-categorial | Categorial grammar induction from raw data | https://aclanthology.org/2023.findings-acl.149 | Grammar induction, the task of learning a set of grammatical rules from raw or minimally labeled text data, can provide clues about what kinds of syntactic structures are learnable without prior knowledge. Recent work (e.g., Kim et al., 2019; Zhu et al., 2020; Jin et al., 2021a) has achieved advances in unsupervised induction of probabilistic context-free grammars (PCFGs). However, categorial grammar induction has received less recent attention, despite allowing inducers to support a larger set of syntactic categories{---}due to restrictions on how categories can combine{---}and providing a transparent interface with compositional semantics, opening up possibilities for models that jointly learn form and meaning. Motivated by this, we propose a new model for inducing a basic (Ajdukiewicz, 1935; Bar-Hillel, 1953) categorial grammar. In contrast to earlier categorial grammar induction systems (e.g., Bisk and Hockenmaier, 2012), our model learns from raw data without any part-of-speech information. Experiments on child-directed speech show that our model attains a recall-homogeneity of 0.33 on average, which dramatically increases to 0.59 when a bias toward forward function application is added to the model. | # Categorial Grammar Induction From Raw Data
Christian Clark and **William Schuler**
Department of Linguistics The Ohio State University
{clark.3664,schuler.77}@osu.edu
## Abstract
Grammar induction, the task of learning a set of grammatical rules from raw or minimally labeled text data, can provide clues about what kinds of syntactic structures are learnable without prior knowledge. Recent work (e.g., Kim et al., 2019; Zhu et al., 2020; Jin et al., 2021a)
has achieved advances in unsupervised induction of probabilistic context-free grammars
(PCFGs). However, categorial grammar induction has received less recent attention, despite allowing inducers to support a larger set of syntactic categories—due to restrictions on how categories can combine—and providing a transparent interface with compositional semantics, opening up possibilities for models that jointly learn form and meaning. Motivated by this, we propose a new model for inducing a basic (Ajdukiewicz, 1935; Bar-Hillel, 1953) categorial grammar. In contrast to earlier categorial grammar induction systems (e.g., Bisk and Hockenmaier, 2012), our model learns from raw data without any part-of-speech information. Experiments on child-directed speech show that our model attains a recall-homogeneity of 0.33 on average, which dramatically increases to 0.59 when a bias toward forward function application is added to the model.
## 1 Introduction
One of the core motivating questions of modern linguistics relates to language acquisition: How can a child pick up complex linguistic rules from limited exposure to language? Chomsky (e.g., 1965) introduced the well-known argument from the poverty of the stimulus, which claims that the linguistic input received by children is insufficiently rich to account for the knowledge they acquire—and therefore that humans must be born with prior knowledge about language. In contrast, empiricist accounts of language acquisition argue that statistical cues (Saffran et al., 1996) or other factors such as social interaction (Tomasello, 2005) may provide enough information on their own to support language acquisition.
Computational modeling provides one useful tool for judging between these competing accounts.
Questions about the learnability of linguistic structures can be tested empirically by seeing if a model with minimal prior knowledge can learn these structures from corpus data (Pullum and Scholz, 2002).
Along these lines, a range of studies over several decades have tested whether induction models can acquire probabilistic context-free grammars
(PCFGs) from text data (Lari and Young, 1990; Klein and Manning, 2002). Although PCFG induction is considered a difficult problem (Carroll and Charniak, 1992), recent systems have achieved performance improvements thanks to new types of Bayesian and neural network models. Recent systems have been able to induce grammars with accuracy levels (measured by recall-homogeneity) approaching fifty percent on corpora of child-directed speech (Jin et al., 2018, 2021a,b).
Although PCFGs are a convenient formalism for computational modeling, they are not the only viable option. A second line of research—albeit one less currently active than PCFG modeling—has examined the learnability of categorial grammar formalisms (Bisk and Hockenmaier, 2012; Bisk et al.,
2015), particularly Combinatory Categorial Grammar (CCG; Steedman, 2000). A notable advantage of categorial grammars over PCFGs is their clean mapping between syntactic and semantic composition, which allows them to be used as a tool for predicting lambda calculus encodings of meaning
(Zettlemoyer and Collins, 2005). Categorial grammars also impose constraints regarding which syntactic categories can combine, providing a practical advantage for designing induction systems that support a large set of categories.
Motivated by these advantages, this work presents a neural network–based system that adapts a state-of-the-art PCFG induction model (Jin et al.,
2368 2021a) to instead learn a basic categorial grammar.1 Unlike the previously mentioned categorial grammar induction systems, our model learns from entirely unlabeled data.
An initial version of the model attains an average recall-homogeneity (RH) score of 0.33 on an English corpus of child-directed speech (Experiment 1). A high variance across randomly initialized runs is observed, with a cluster of runs achieving RH on par with state-of-the-art PCFG
inducers and another cluster achieving poor RH.
In Experiment 2, we test a modified version of the model with a bias term encouraging forward function application, which appeared more often in the better-performing runs in Experiment 1. The modified model reaches an average RH of 0.59, surpassing results reported from Jin et al. (2021a)
and other PCFG inducers.
## 2 Related Work
PCFG induction is a longstanding area of interest in computational linguistics (Lari and Young, 1990; Carroll and Charniak, 1992; Klein and Manning, 2002). As neural modeling has made unsupervised induction more feasible, recent work has experimented with learning compound PCFGs
(Kim et al., 2019), simultaneously inducing phrase structure grammars and lexical dependencies (Zhu et al., 2020), and boosting model performance by grounding on multimodal data (Zhao and Titov, 2020; Zhang et al., 2021, 2022), among other innovations.
A somewhat earlier line of research established the potential for learning an alternative type of grammar, a CCG, from data with a small set of broadly defined part-of-speech categories (noun, verb, etc.) (Bisk and Hockenmaier, 2012, 2013; Bisk et al., 2015). Bisk et al. (2015) showed that only a small number of labeled data points with POS tags are needed to induce a CCG. However, induction of CCGs (or other categorial grammars)
has received less recent attention, with CCG research more focused on tasks such as supertagging
(Bhargava and Penn, 2020; Prange et al., 2021) or incremental parsing (Stanojevic et al. ´ , 2021).
A third relevant area of research is work focused on mapping a sentence to its logical form via CCG parsing (Zettlemoyer and Collins, 2005;
![1_image_0.png](1_image_0.png)
Kwiatkowski et al., 2010; Kwiatkowski et al.,
2013). These studies reveal that categorial grammar induction may be useful not only as a method of testing the learnability of syntactic rules, but also as a tool for semantic parsing.
## 3 Background 3.1 Basic Categorial Grammar
The induction models in this paper learn a basic categorial grammar, also known as an AB grammar
(Ajdukiewicz, 1935; Bar-Hillel, 1953). This type of grammar was chosen for its simplicity and its suitability for extension through additional composition operations. A basic categorial grammar uses a set of primitive categories (e.g., S or N for sentences or nouns) as well as the type-combining operators \ and /, which indicate compatibility with an argument preceding or following the category, respectively. These type-combining operators can be used to define complex categories (e.g., N\N or
(S/N)/N).
The models use two composition operations:
backward function application and forward function application. Backward function application occurs when a phrase of category X\Y combines with a phrase of category Y on the left to yield a larger phrase of category X. Forward function application occurs when a phrase of category X/Y
combines with a phrase of category Y on the right to yield a larger phrase of category X. In such cases, X\Y and X/Y are called the *functor* categories, Y
is called the *argument* category, and X is called the result category. See Figure 1 for an example parse using a basic categorial grammar.
## 3.2 The Jin Et Al. **(2021A) Pcfg Induction** Model
Our induction model uses a formulation for sentence probabilities based on the word-level PCFG
model from Jin et al. (2021a), which we summarize in this section.
In unsupervised training, the objective function that the model maximizes is the marginal probability of the sentences in the dataset. For a single sentence σ, each possible parse tree (assumed to be in Chomsky Normal Form) can be divided into a set of of nodes τ undergoing nonterminal expansions cη → cη1 cη2 and a set of nodes τ′ undergoing terminal expansions cη → wη. Here, η ∈ {1, 2}∗is a Gorn address specifying a path of left and right branches from the root node of the parse tree, cη is the nonterminal category at node η, and wη is the word located at node η. C is the set of nonterminal categories. The marginal probability of σ is calculated by summing over all possible parse trees:
$$\mathsf{P}(\sigma)=\sum_{\tau,\tau^{\prime}}\prod_{\eta\in\tau}\mathsf{P}(c_{\eta}\to c_{\eta1}\ c_{\eta2})\cdot\prod_{\eta\in\tau^{\prime}}\mathsf{P}(c_{\eta}\to w_{\eta})\tag{1}$$
A set of Bernoulli distributions are defined to separate the nonterminal and terminal expansion rules:
P(Term $|\,c_{\eta}\rangle=$ softmax(NTerm(E $\delta_{c_{\eta}}$)) (2)
Here, cη is a nonterminal category, δcη is a vector representing a Kronecker delta function with 1 at index cη and 0 elsewhere, and E ∈ R
d×|C|is a matrix of nonterminal category embeddings of size d. NTerm is a residual network with 2 identical blocks. Given the input xb−1,cη
, each residual block computes its output as follows:
$$\begin{array}{c}{{{\bf{x}}_{b,c_{\eta}}=\mathrm{ReLU}({\bf{W}}_{b}^{\prime}\mathrm{ReLU}({\bf{W}}_{b}\,{\bf{x}}_{b-1,c_{\eta}}+{\bf{b}}_{b}))}}\\ {{{}}}\\ {{{}}}\\ {{{}}}\end{array}$$
Fully connected layers are used before and after
the residual blocks:
$$\begin{array}{r}{\mathbf{X}_{0,c_{\eta}}=\mathrm{ReLU}(\mathbf{W}_{0}\,\mathbf{E}\,\delta_{c_{\eta}}+\mathbf{b}_{0}),}\\ {s_{c_{\eta}}=\mathrm{ReLU}(\mathbf{W}_{\mathrm{soft}}\,\mathbf{X}_{B,c_{\eta}}+\mathbf{b}_{\mathrm{soft}})}\end{array}$$
All W's and b's are weight and bias parameters respectively.
Binary-branching nonterminal expansion probabilities are computed as follows:
$$\begin{array}{c}{{\mathsf{P}(c_{\eta}\to c_{\eta1}\;c_{\eta2})=\mathsf{P}(\mathrm{Term=0}\mid c_{\eta})\;.}}\\ {{\mathsf{P}(c_{\eta}\to c_{\eta1}\;c_{\eta2}\mid c_{\eta},\mathrm{Term=0}),}}\end{array}$$
which in turn uses the following distribution over expansion rules:
$$\begin{array}{l}{{{\sf P}(c_{\eta}\to c_{\eta1}\;c_{\eta2}\mid c_{\eta},\mathrm{Term}{=}0)=}}\\ {{\mathrm{softmax}({\bf W}_{\mathrm{nont}}\,{\bf E}\,\delta_{c_{\eta}}+{\bf b}_{\mathrm{nont}}),}}\\ {{\quad c_{\eta1,c_{\eta2}}}}\end{array}\quad\quad(7)$$
where Wnont and bnont are additional model parameters. The categorial grammar induction model presented in this work modifies Equation (7); see Section 4.2.
Finally, lexical unary-expansion rule probabilities are computed as follows:
$$\begin{array}{c}{{\mathsf{P}(c_{\eta}\to w_{\eta})=\mathsf{P}(\mathrm{Term}{=}1\mid c_{\eta})\;.}}\\ {{\mathsf{P}(c_{\eta}\to w_{\eta}\mid c_{\eta},\mathrm{Term}{=}1)}}\end{array}\tag{8}$$
A softmax is taken over words in the vocabulary:
$$\mathrm{P}(c_{\eta}\to w_{\eta}\mid c_{\eta},\mathrm{Term=1})=\mathrm{softmax}(\mathrm{N}^{\prime}(\mathrm{E}\,\delta_{c_{\eta}})),\tag{9}$$
where N
′is another residual network, similar to NTerm except that the output layer's dimension is the size of the vocabulary.
Jin et al. (2021a) also introduce a character-level expansion model as an alternative to Equation (9).
However, they report that the word-level model performs slightly better on English data from the CHILDES corpus. Because the current study works with the same English data, we only test the wordlevel model.
## 4 Induction Model
The model introduced in this paper extends the Jin et al. (2021a) model from Section 3.2 to induce a categorial grammar. This section details how categories and expansion rule probabilities are defined in the new model.
$$({\mathfrak{I}})$$
## 4.1 Categories
$$\begin{array}{l}{(4)}\\ {(5)}\end{array}$$
We define the set of categories C according to a number of primitives P and a maximum category depth D. Primitives in the induction model are labeled as integers 0, 1, 2, . . . . A category's depth is defined according to its tree representation (see Figure 3(a) of Prange et al. (2021) for an example).2 For instance, the primitive category 1 has depth 0, and the category 2/(1\0) has depth 2.
The number of possible categories |CP,D| with P
primitives and maximum depth D can be computed 2The tree representation of a syntactic category should not be confused with the parse tree for an entire sentence.
$$(6)$$
with the following recurrence relation:
$$|C_{P,0}|=P$$ $$|C_{P,i}|=2|C_{P,i-1}|^{2}+P$$
2 + P (11)
Our experiments below use the category set C =
C3,2, the entire set of 885 possible categories with P = 3 and D = 2. This is nearly 10 times the number of categories (90) used by Jin et al. (2021a).
## 4.2 Binary Expansion Rule Probabilities
The categorial grammar induction model modifies Equation (7) from Jin et al. (2021a) to take advantage of constraints imposed by categorial grammar categories. In a PCFG, a parent category cη may expand into any two child categories cη1 and cη2 However, in a basic CG, this expansion is only possible if one child (the functor) can combine with the other child (the argument) to produce the parent
(the result). There are two possibilities:
- The argument is the left child cη1. Then the right child must be the functor, and its category must be cη2 = cη\cη1.
- The argument is the right child cη2. Then the left child must be the functor, and its category must be cη1 = cη/cη2.
If cη2 , cη\cη1 and cη1 , cη/cη2, then it is impossible for cη to expand to cη1 and cη2, and so P(cη → cη1 cη2 | cη, Term=0) = 0. For all other cases, where the binary expansion is possible, the probabilities are calculated as follows:
$$\begin{array}{r}{\mathsf{P}(c_{\eta}\to c_{\eta1}\;c_{\eta2}\mid c_{\eta},\mathrm{Term}{=}0)=}\\ {\mathrm{softmax}}\\ {(c^{\prime},o){\in}C_{\mathrm{arg}}{\times}\{\mathsf{L,R}\}}\end{array}\!\!\left(\begin{array}{r}{\mathsf{W_{L}}}\\ {\mathsf{W_{R}}}\end{array}\right)\,\delta_{c_{\eta}}+\left[\begin{array}{r}{\mathsf{b_{L}}}\\ {\mathsf{b_{R}}}\end{array}\right]\right)$$
#(12)
The model parameters WL,WR ∈ R|Carg|×|Cres| are weights associating each parent category with each possible left-child and right-child argument category; bL, bR ∈ R|Carg|are the corresponding bias vectors. Carg,Cres ⊂ C are the sets of possible argument and result categories respectively, both of which comprise all categories of depth up to D − 1:
$$C_{\mathrm{{arg}}}=C_{\mathrm{{res}}}=\{c\in C\mid\operatorname{depth}(c)\leq D-1\}$$
Argument and result categories cannot have depth D because this would require functor categories to have depth greater than D.
The variable o ∈ {L,R} expresses the location of the argument child relative to the functor child.
$$(10)$$
$$(11)$$
If o = L, then the argument is to the left of the functor, and so cη1 = c′and cη2 = cη\c′. If o = R,
then the argument is to the right of the functor, and so cη1 = cη/c′and cη2 = c′.
Equation (12) results in a considerable space complexity improvement compared to (7). For an induction model using category set C, Equation (7)
requires taking a softmax over |C| 2 possible pairs of children. Equation (12) only requires a softmax over 2|Carg| = O(
√|C|) categories.
The experiment presented in Section 6 uses a modified bias term in Equation (12) in order to encourage the model to prefer forward function application over backward function application. The bias term for left-child arguments bL is replaced with a new bias b′L
= bL − k, where k ∈ R|Carg| has the same constant value in each dimension. As will be explained below, a bias toward forward function application results in a preference for rightbranching structures, which improves the performance of the induction model.
## 5 Experiment 1: Basic Induction Model 5.1 Corpora
The induction model was evaluated on childdirected speech in English from CHILDES
(MacWhinney, 2000), specifically the Adam and Eve sections of the Brown corpus (Brown, 1973).
The Adam section, which was used for hyperparameter optimization, contains interactions between a child and his caretakers, with the child's age ranging from 2 years and 3 months to 5 years and 2 months and a total of 28,780 sentences. The Eve section was used for held-out testing; it contains similar interactions from a child whose age ranges from 1 year and 6 months to 2 years and 3 months, with a total of 14,251 sentences. Syntactic annotations for the Adam and Eve sections came from Pearl and Sprouse (2013).
$$(12)$$
## 5.2 Procedures
The induction model used the Adam optimizer with a learning rate of 0.0001, a category embedding size of 64, and a hidden layer size of 64. Hyperparameters were selected based on a grid search on the Adam corpus. For evaluation on Eve, ten randomly initialized models were run for 20 epochs each with a batch size of 2 sentences.
The evaluation metrics we considered were unlabeled F1 score and recall-homogeneity (RH; Jin et al., 2021b). Recall measures what proportion of (unlabeled) constituents in the annotated trees
![4_image_0.png](4_image_0.png)
are present in the predicted trees. Homogeneity—a commonly used metric in part-of-speech tagging evaluations—measures to what degree a single induced category maps to a single category in the annotations. Specifically, it measures the relative increase in the log of the expected probability of a gold category, given the predicted category that covers the same span. RH is simply the product of unlabeled recall and homogeneity. The RH metric is motivated by assumptions that (a) induced grammars should not be penalized for predicting extra constituents, since flatter trees in the annotations may have been chosen for convenience rather than any theoretical motivation; and (b) induced grammars should not be penalized for making finergrained distinctions between categories (e.g., noun cases) than are present in the annotations, since less granular categories similarly may have been chosen for convenience.
In keeping with Seginer (2007) and Jin et al.
(2021a), punctuation was retained in the input data during training but removed during evaluation. Unary chains were removed from parse trees, with only the top category used for evaluation.
## 5.3 Results
Figure 2 presents the main results. The mean RH
and F1 score across the ten runs were 0.33 and 0.52 respectively. The mean RH value is well below the average of 0.49 reported for the word-level PCFG inducer from Jin et al. (2021a). However, the means alone do not best describe Figure 2, as RH
and F1 both seem to show bimodal distributions.
Six runs produced poor RH and F1 (averaging 0.22 and 0.37 respectively), while the other four runs produced much better values (averaging 0.50 and 0.74 respectively).3 Figure 3 offers another vantage point into the pattern of results by separating recall and homogeneity, the two metrics combined in RH. (Recall also influences F1.) Again, a sharp division is observed between runs with low versus high recall.
However, homogeneity appears to vary somewhat independently from recall.
One possible explanation for this trend would be that the poorly performing runs get caught in local maxima of the objective function. If this were the case, we would expect to see higher log likelihood
![4_image_1.png](4_image_1.png)
in the well-performing runs, which should reach a better maximum. However, Figure 4 shows that this does not occur: The well-performing and poorly performing runs are associated with similar ranges of log likelihood values.
![5_image_1.png](5_image_1.png)
If log likelihood fails to distinguish the clusters of good and poor runs, what else might? One pattern immediately stood out during qualitative inspection of the models' predicted trees: Models with high RH and F1 scores tend to predict trees with frequent forward function application and right branching, while poorly performing models predict trees with backward function application and left branching. This pattern was quantitatively confirmed by counting the proportion of right- and left-branching nodes in the induced trees. The six runs with worse performance use left branching 82% of the time, while the four runs with better performance use right branching 65% of the time.
These values were computed by counting the proportion of branching nodes who appear as left versus right children.
As an illustration, Figure 5 compares predictions from two models on the same sentence from the Eve corpus. Figure 5a shows the prediction from a model that performed poorly overall, containing a left-branching pattern combined with the use of backward function application. Figure 5b shows the prediction from a model that performed well, which has opposite patterns. In both cases, the dispreferred type-combining operator (e.g., / in 5a)
appears in complex categories but is rarely applied, so that a category such as 1/2 in 5a is treated like a primitive.
Figure 6 contains confusion matrices relating the induced categories with the human-annotated
![5_image_0.png](5_image_0.png)
categories for the run that produced the highest RH
and F1 score in Experiment 1. The recall table
(a) shows that most of the annotated categories are only represented by one or two different induced categories, and the precision table (b) shows that induced categories are seldom crossing brackets.
## 6 Experiment 2: Induction Model With Forward Function Application Bias
To try to produce more consistent induction results with English-like branching behavior, our second experiment biased the induction model toward using forward function applications (i.e., the / operator). While in principle it is possible for rightbranching structures to use backward function ap-
![6_image_4.png](6_image_4.png)
![6_image_1.png](6_image_1.png)
![6_image_3.png](6_image_3.png)
![6_image_2.png](6_image_2.png)
![6_image_0.png](6_image_0.png)
![6_image_6.png](6_image_6.png)
A
0.00 0.00 0,00 0.00 0.29 0.00 0.01 0.01 0.00
![6_image_8.png](6_image_8.png)
![6_image_9.png](6_image_9.png)
![6_image_7.png](6_image_7.png)
![6_image_5.png](6_image_5.png)
plication and left-branching structures to use forward function application, this did not often occur in the results from Experiment 1 and seems less likely in general given the available categories in C .
## Procedures 6.1
This experiment used the same corpora and procedures as Experiment 1, with the exception of the modified bias term mentioned at the end of Sec-
![7_image_0.png](7_image_0.png)
tion 4.2. Each dimension in the vector k was set to 100; this value was large enough to ensure that forward function application and right-branching tree structures were exclusively used. Smaller values for k were also tested on the Adam corpus but achieved slightly worse log likelihoods.
## 6.2 Results
Figure 7 shows the RH across the 10 runs in this experiment. Because the models invariably predicted right-branching structures, all had the same F1 score of 0.76. Compared to Experiment 1, RH
scores showed much more consistency, with a relatively uniform spread of values within the narrow range of 0.57 to 0.62. (Since recall did not vary, all variation in RH was due to differences in homogeneity between models.) Despite the models' inflexibility in assigning tree structures, these RH scores surpassed those reported by Jin et al.
(2021a,b).
## 7 Discussion And Conclusion
We introduce an induction model that learns a basic categorial grammar from unlabeled data. The original version of the model, tested in Experiment 1, shows promising results in several runs, but inconsistent performance in general. A modified version of the model that consistently uses forward function application far outperforms the original model.
In general, the experimental results appear to support the empiricist claim that syntactic structure is learnable with relatively simple prior knowledge.
While results from the biased model achieve an impressive RH compared to Jin et al. (2021a,b),
they leave open several questions. One obvious question is whether it is possible to consistently achieve comparable results to the biased runs without removing the model's ability to do backward function application, since this operation is a core ingredient of basic categorial grammars and is regularly used in hand-labeled parses of English sentences, e.g., to combine the NP and N\NP in Figure 1. Although the log likelihood objective on its own appears to be insufficient to ensure stable behavior (similar to behavior reported in earlier PCFG studies such as Johnson et al. 2007), it may be possible to find a middle ground between Experiments 1 and 2 with a modified objective function or a weaker form of bias.
Another question is whether the induction model can support more complex operations, such as the forward and backward composition operations defined by CCG. This seems possible in principle; additional weight matrices could be added to Equation (12), so that probabilities for additional operations could be learned. We are excited to explore this possibility in future work.
## Limitations
More work is needed to uncover the causes of the inconsistent performance across randomly initialized models in Experiment 1. Although the bias toward forward function application implemented in Experiment 2 was effective in our experiments, it is unlikely to work as a general-purpose method, since languages vary in their branching characteristics and in the contexts in which they apply forward and backward function application.
## Ethics Statement
We foresee no ethical issues arising from this research. The reader may refer to Brown (1973)
for information on how the Adam and Eve childdirected speech corpora were collected.
## Acknowledgements
We thank the anonymous reviewers for their helpful comments. This work was supported by the National Science Foundation grant \#1816891. All views expressed are those of the authors and do not necessarily reflect the views of the National Science Foundation.
## References
Kazimierz Ajdukiewicz. 1935. Die syntaktische konnexitat. In S. McCall, editor, *Polish Logic 1920-1939*,
pages 207–231. Oxford University Press. Translated from Studia Philosophica 1: 1–27.
Yehoshua Bar-Hillel. 1953. A quasi-arithmetical notation for syntactic description. *Language*, 29:47–58.
Aditya Bhargava and Gerald Penn. 2020. Supertagging with ccg primitives. In *Proceedings of the 5th Workshop on Representation Learning for NLP*, pages 194–204.
Yonatan Bisk, Christos Christodoulopoulos, and Julia Hockenmaier. 2015. Labeled grammar induction with minimal supervision. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 870–876.
Yonatan Bisk and Julia Hockenmaier. 2012. Simple robust grammar induction with combinatory categorial grammars. In *Twenty-Sixth AAAI Conference on* Artificial Intelligence.
Yonatan Bisk and Julia Hockenmaier. 2013. An hdp model for inducing combinatory categorial grammars.
Transactions of the Association for Computational Linguistics, 1:75–88.
R. Brown. 1973. *A First Language*. Harvard University Press, Cambridge, MA.
Glenn Carroll and Eugene Charniak. 1992. Two Experiments on Learning Probabilistic Dependency Grammars from Corpora. Working Notes of the Workshop on Statistically-Based {NLP} Techniques, (March):1–
13.
Noam Chomsky. 1965. *Aspects of the Theory of Syntax*.
MIT Press, Cambridge, Mass.
Lifeng Jin, Finale Doshi-Velez, Timothy Miller, William Schuler, and Lane Schwartz. 2018. Depth-bounding is effective: Improvements and evaluation of unsupervised PCFG induction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2721–2731.
Lifeng Jin, Byung-Doh Oh, and William Schuler. 2021a.
Character-based PCFG induction for modeling the syntactic acquisition of morphologically rich languages. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 4367–4378, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Lifeng Jin, Lane Schwartz, Finale Doshi-Velez, Timothy Miller, and William Schuler. 2021b. Depth-Bounded Statistical PCFG Induction as a Model of Human Grammar Acquisition. *Computational Linguistics*,
47(1):181–216.
Mark Johnson, Thomas Griffiths, and Sharon Goldwater. 2007. Bayesian inference for PCFGs via Markov chain Monte Carlo. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 139–146, Rochester, New York. Association for Computational Linguistics.
Yoon Kim, Chris Dyer, and Alexander Rush. 2019.
Compound probabilistic context-free grammars for grammar induction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2369–2385.
Dan Klein and Christopher D. Manning. 2002. A generative constituent-context model for improved grammar induction. In *Proceedings of the 40th Annual* Meeting of the Association for Computational Linguistics.
Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In *Proceedings of the* 2013 conference on empirical methods in natural language processing, pages 1545–1556.
Tom Kwiatkowski, Luke S. Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higherorder unification. In *EMNLP*, pages 1223–1233.
Karim Lari and Steve J Young. 1990. The estimation of stochastic context-free grammars using the insideoutside algorithm. Computer speech & *language*,
4(1):35–56.
Brian MacWhinney. 2000. *The CHILDES project:*
Tools for analyzing talk, third edition. Lawrence Elrbaum Associates, Mahwah, NJ.
Lisa Pearl and Jon Sprouse. 2013. Syntactic islands and learning biases: Combining experimental syntax and computational modeling to investigate the language acquisition problem. *Language Acquisition*, 20:23–
68.
Jakob Prange, Nathan Schneider, and Vivek Srikumar.
2021. Supertagging the long tail with tree-structured decoding of complex categories. Transactions of the Association for Computational Linguistics, 9:243–
260.
Geoffrey K Pullum and Barbara C Scholz. 2002. Empirical assessment of stimulus poverty arguments. The linguistic review, 19(1-2):9–50.
Jenny R Saffran, Richard N Aslin, and Elissa L Newport.
1996. Statistical learning by 8-month-old infants.
Science, 274(5294):1926–1928.
Yoav Seginer. 2007. Fast unsupervised incremental parsing. In *Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics*,
pages 384–391.
Miloš Stanojevic, Shohini Bhattasali, Donald Dunagan, ´
Luca Campanelli, Mark Steedman, Jonathan Brennan, and John Hale. 2021. Modeling incremental language comprehension in the brain with combinatory categorial grammar. In *Proceedings of the* Workshop on Cognitive Modeling and Computational Linguistics, pages 23–38.
Mark Steedman. 2000. *The syntactic process*. MIT
Press/Bradford Books, Cambridge, MA.
Michael Tomasello. 2005. *Constructing a language: A*
usage-based theory of language acquisition. Harvard university press.
Luke Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proceedings of the Proceedings of the Twenty-First Conference Annual Conference on Uncertainty in Artificial Intelligence (UAI-05), pages 658–666, Arlington, Virginia. AUAI Press.
Songyang Zhang, Linfeng Song, Lifeng Jin, Haitao Mi, Kun Xu, Dong Yu, and Jiebo Luo. 2022. Learning a grammar inducer from massive uncurated instructional videos. *arXiv preprint arXiv:2210.12309*.
Songyang Zhang, Linfeng Song, Lifeng Jin, Kun Xu, Dong Yu, and Jiebo Luo. 2021. Video-aided unsupervised grammar induction. arXiv preprint arXiv:2104.04369.
Yanpeng Zhao and Ivan Titov. 2020. Visually grounded compound pcfgs. *arXiv preprint arXiv:2009.12404*.
Hao Zhu, Yonatan Bisk, and Graham Neubig. 2020.
The return of lexical dependencies: Neural lexicalized PCFGs. Transactions of the Association for Computational Linguistics, 8:647–661.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
In and following Section 7.
✗ A2. Did you discuss any potential risks of your work?
We see no apparent risks.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4
✓ B1. Did you cite the creators of artifacts you used?
Section 3.2
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
We are not currently sharing any new artifacts for this project. Should we share them in the future, we will discuss the terms of use.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
It seems very clear that our extension of the existing Jin et al. 2021 model is being used for the intended purpose, to model grammar induction.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 5.1
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C ✓ **Did You Run Computational Experiments?** Sections 5 And 6
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
These are relatively small-scale models, so there are few concerns about others having the resources to run them The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 5.2
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Sections 5.3 and 6.2 C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
liu-etal-2023-attribute | Attribute Controlled Dialogue Prompting | https://aclanthology.org/2023.findings-acl.150 | Prompt-tuning has become an increasingly popular parameter-efficient method for adapting large pretrained language models to downstream tasks. However, both discrete prompting and continuous prompting assume fixed prompts for all data samples within a task, neglecting the fact that inputs vary greatly in some tasks such as open-domain dialogue generation. In this paper, we present a novel, instance-specific prompt-tuning algorithm for dialogue generation. Specifically, we generate prompts based on instance-level control code, rather than the conversation history, to explore their impact on controlled dialogue generation. Experiments on popular open-domain dialogue datasets, evaluated on both automated metrics and human evaluation, demonstrate that our method is superior to prompting baselines and comparable to fine-tuning with only 5{\%}-6{\%} of total parameters. | # Attribute Controlled Dialogue Prompting
Runcheng Liu1,2∗, Ahmad Rashid1,2∗**, Ivan Kobyzev**3 Mehdi Rezagholizadeh3**, Pascal Poupart**1,2 1David R. Cheriton School of Computer Science, University of Waterloo 2Vector Institute, Canada 3Huawei Noah's Ark Lab, Canada
{ireneliu,a9rashid,ppoupart}@uwaterloo.ca
{ivan.kobyzev,mehdi.rezagholizadeh}@huawei.com
## Abstract
Prompt-tuning has become an increasingly popular parameter-efficient method for adapting large pretrained language models to downstream tasks. However, both discrete prompting and continuous prompting assume fixed prompts for all data samples within a task, neglecting the fact that inputs vary greatly in some tasks such as open-domain dialogue generation. In this paper, we present a novel, instancespecific prompt-tuning algorithm for dialogue generation. Specifically, we generate prompts based on instance-level control code, rather than the conversation history, to explore their impact on controlled dialogue generation. Experiments on popular open-domain dialogue datasets, evaluated on both automated metrics and human evaluation, demonstrate that our method is superior to prompting baselines and comparable to fine-tuning with only 5%-6% of total parameters.
## 1 Introduction
Fine-tuning has been frequently used when deploying generative pretrained language models (PLMs)
to downstream tasks since the advent of GPT (Radford et al.) and BERT (Devlin et al., 2019). However, this requires storing a full copy of parameter states for every downstream task, which is memoryconsuming and expensive to serve when working with large-scale models with billions of parameters like GPT-3 (Brown et al., 2020).
In this work, we design a lightweight prompting module for adapting pretrained language models for attribute controlled dialogue generation. More precisely, for each attribute such as persona, intention, emotion etc. we only save an additional prompt module. Since the prompting module is a fraction of the size of the pretrained dialogue model, this allows many controlled dialogue systems to be stored on a device without too much
∗Work done during an internship at Huawei.
overhead. We present results on both intent and persona controlled dialogue.
## 2 Related Work
GPT-3 (Brown et al., 2020) introduces *prompting*,
a method to steer a frozen PLM by transforming inputs into cloze-style phrases with task description and some task examples. Though it is memoryefficient since one single copy of the PLM can be shared across different tasks, the model's performance is largely restricted by the maximum conditional input length, the model size and manual guesswork for prompts (Zhao et al., 2021; Schick and Schütze, 2021a,b; Jiang et al., 2020). Other works focus on automatically searching for better discrete prompts (Jiang et al., 2020; Shin et al.,
2020; Gao et al., 2021; Ben-David et al., 2021).
Recently, there has been an increased interest in continuous prompts / prompt-tuning, which bridges the gap between prompting and fine-tuning, while remaining efficient during training (Lester et al., 2021; Li and Liang, 2021; Liu et al., 2021, 2022). Continuous prompts extend prompt selection to the entire space of embeddings, including vector embeddings that do not correspond to any humaninterpretable natural language tokens. Hence, soft prompts are more expressive than discrete prompts.
However, both deep prompts and shallow prompts assume a *static prompt / task-level prompt* for all samples within a task, neglecting the fact that samples might vary greatly, especially in the field of conversation generation. There are recent papers exploring possible *instance-specific* prompts. For instance, Control-prefixes (Clive et al., 2021) generates attribute-level prompts for input labels, but its expressiveness is limited to four labels. IPL (Jin et al., 2022) includes a lookup module to reweight prompt tokens before passing the updated embedding-only prompt into the transformer, but IPL updates all model parameters, which loses the efficiency benefits of prompt-
![1_image_0.png](1_image_0.png)
ing. IDPG (Wu et al., 2022) consumes inputs in a two-layer perceptron module to generate instancedependent prompts in classification tasks rather than generation tasks. In addition, (Gu et al., 2021)
proposes DialogPrompt which performs instancespecific prompting for dialogue generation by conditioning the prompt on the entire dialogue history.
However, their prompting module consists of GPT2, which is a full-fledged language model, and the approach is as costly as storing an entire fine-tuned base model. Recent works Contrastive prefixes
(Qian et al., 2022) and Tailor (Yang et al., 2022)
both propose *attribute-based prompts*, instead of instance-specific, to include either single-attribute or multi-attribute prompts into controlled text generation tasks, which reveal the powerful potential of controllability of continuous prompts.
In contrast to previous work, we propose Controlled DialogPrompt for applying prompt-tuning in controlled dialogue generation, which optimizes prompts based on provided control codes rather than the previous conversation history and we further explore the controllability of prompts at the instance level. The size of the prompt encoder is strictly limited and we freeze the pretrained transformer during training in order to preserve memory efficiency. In addition, we would like to highlight that our work focuses more on open-ended text generation rather than natural language understanding, such as entailment, paraphrase detection, extractive QA, as seen in other parameter-efficient fine-tuning methods (He et al., 2022; Guo et al., 2021; Wu et al., 2022). We posit that generating high-quality text is a more challenging task that requires a more nuanced approach to prompt tuning.
## 3 Controlled Dialogprompt
In this section, we present Controlled DialogPrompt (Controlled DP) for dialogue generation, which is expected to provide attribute information such as the dialogue intention or the user's persona within the prompt and steer the pretrained model efficiently.
Soft Prompt-tuning (Lester et al., 2021; Liu et al.,
2021) learns soft tokens for different tasks and then prepends them to the conversation context as well as control attributes. This approach yields a *static* shallow prompt since the soft tokens are static (i.e.,
fixed for a task) and shallow (only added as an input to the language model).
In contrast, Prefix-tuning proposes a more effective technique that adds soft tokens in the form of key-value pairs at every attention block of the transformer (Li and Liang, 2021; Liu et al., 2022).
This allows the soft tokens to influence each stage of the language model and therefore it is referred to as a *static deep* prompt.
Figure 1(bottom right) shows our proposed controlled dialogue prompt (Deep version). Instead of training static soft tokens for the dialogue task, we train a lightweight prompt module that takes as input a control attribute, either an intention label or persona sentences, and outputs key-value pairs that are prepended to each layer of the language model. Since the soft token embeddings change depending on the control attribute, this corresponds to an *instance-specific* prompt. For the shallow prompt (Figure 1 bottom left), we follow Soft Prompt-tuning which adds an additional trainable embedding layer to encode the attribute. For the deep prompt module, we consider two architectures: i) a simple multilayer perceptron (two fully connected layers of size 512 with tanh activation) applied to each token of the control attribute, and ii) a two-layer transformer decoder with embedding size of 256. The embedding size of each architecture was chosen to yield roughly the same number of parameters. This number of parameters is about 5%-6% of the number of parameters of the language model. For a given domain, training the prompt module is done as follows. An intention label or persona sentences are fed to the prompting module, which outputs key-value pairs added at each layer of the frozen pretrained dialogue system.
Gradients to maximize the likelihood of response tokens are back-propagated through the dialogue system and prompting module, but only the weights of the prompting module are updated.
## 4 Experiments 4.1 Datasets And Baseline Models
We evaluate the proposed method on two publicly available datasets: Dailydialog (Li et al.) for label control and FoCus (Jang et al., 2021) for document control. Dailydialog (Li et al.) is a widely used daily conversation dataset that provides a dialogue act for every sentence that indicates the communication function of each utterance. There are 4 types of dialogue acts in total. FoCus(Jang et al.,
2021) is a new persona-grounded dataset that aims to provide informative answers based on the user's persona about the geographical landmark. We provide the detailed dataset setups in Appendix A.1.
To demonstrate better performance of Controlled DialogPrompt, we compare our model with other competitive prompt-tuning techniques. The backbone model is DialoGPT-Large (Zhang et al., 2020).
Details are provided in Appendix A.2.
## 4.2 Evaluation Methods
We use both automatic evaluation metrics and human evaluation to measure the performance.
Automated metrics For controllability, we follow (Du and Ji, 2021) to evaluate whether models can customize responses based on specified control attributes. Details about controllability measures are provided in Appendix B.1 Regarding response quality, we use n-gram based metrics such as BLEU
(B-2, B-4) (Papineni et al., 2002), NIST (N-2, N-4)
(Doddington, 2002), ROUGE-L (Lin, 2004), METEOR (Agarwal and Lavie, 2007) to evaluate fluency and adequacy and distinct n-gram distribution metrics such as Dist (D-1, D-2) (Li et al., 2016)
and Entropy (E-4) (Zhang et al., 2018) to measure the diversity of the response.
Human Evaluation Human evaluation on the other hand is used to measure consistency between dialogue context and response and attribute controllability. We adopt single-turn pairwise evaluations to prevent annotator bias in numerical score evaluation. Details on question settings and annotators are provided in Appendix B.2
## 5 Result And Analysis 5.1 Dialogact / Intention
Table 1 summarizes the automatic evaluation results on the DialogAct label control task. Compared to static task prompts, instance-level controlled prompts achieve better performance consistently on both deep and shallow prompt levels. Since the controlled attribute is injected independently through the prompts, it does not affect the understanding and generation ability of the pretrained transformer. Both Controlled DP
deep methods show higher controllability and response quality than Controlled DP embedding, in line with (Li and Liang, 2021; Liu et al., 2022; Qin and Eisner, 2021) indicating the expressiveness of 1 Controlled DP (Embedding) involves training an embedding layer in a size of (prompt_vocab_size *
base_model_n_embd). In DialogAct control, we use only 4 labels, resulting in a size of 4 * 1280. In User's Persona, since there are many words in the corpus, we adopt the base model vocab size as the prompt vocab size and the embedding layer is 50257 * 1280. Therefore, the proportion of tunable parameters is higher in User's Persona Control.
Method ϕ% Controllability BLEU ↑ NIST ↑ ROUGE-L ↑ METEOR ↑ Dist ↑ **Entropy** ↑
Accuracy B-2 B-4 N-2 N-4 D-1 D-2 E-4
Pretrained (Zhang et al., 2020) 0% 58.30% 10.31% 1.73% 0.18 0.18 19.43% 7.30% 7.61% **40.00**% 10.03 Fine-tuning 100% 80.25% 21.03% **5.70**% 0.96 0.98 34.38% **13.05**% 6.02% 34.51% 10.21 Soft Prompt-tuning (Lester et al., 2021) 0.008% 70.51% 18.15% 4.08% 0.56 0.57 31.58% 11.46% 5.33% 30.82% 10.02 Prefix-tuning (Li and Liang, 2021) 3.1% 75.02% 19.94% 5.12% 0.91 0.93 33.29% 12.54% 5.59% 32.46% 10.17
Controlled DialogPrompt (Embedding) 0.001%1 69.06% **20.11**% 4.91% 0.71 0.73 32.80% 12.19% 5.18% 30.07% 10.03 Controlled DialogPrompt (MLP) 3.1% 78.36% 19.92% **5.43**% 0.98 1.01 33.12% 12.61% 5.71% 32.42% 10.20
Controlled DialogPrompt (2-layer Transformer) 3.3% **78.58**% 19.86% 5.26% 1.01 1.04 33.35% 12.64% 5.82% 33.16% **10.23**
Table 1: **DialogAct label** control performance under Dailydialog multi-reference evaluation. ϕ% denotes the % of tunable parameters to the frozen-LM parameters required at training time. Red number is the best value in every metric on all methods. Blue number is the best value in every metric among prompting methods.
Table 2: **User's Persona** control performance under FoCus validation dataset. ϕ% denotes the % of tunable parameters to the frozen-LM parameters required at training time. Red number is the best value in every metric on all methods. Blue number is the best value in every metric among prompting methods.
deep prompts. Also, Controlled DP deep methods show performance close to fine-tuning and even outperform on some metrics such as NIST. This is because NIST is weighted-BLEU with higher weights on rarer words and fine-tuning tends to generate from a more limited vocabulary whereas Controlled DialogPrompt sometimes generates less frequent words and can attain a better NIST score.
Human evaluation (Table 3) also shows that Controlled DP deep has a significantly higher winning rate than other prompting techniques on both control attribute relevancy and conversation consistency.
Methods Persona Controllability Consistency Controlled DP (Deep) 41.3% 44.0%
Soft Prompt-tuning 5.3% 13.3% Neutral 53.3% 42.7%
| Method | ϕ% | Controllability | BLEU ↑ | NIST ↑ | ROUGE-L ↑ | METEOR ↑ | Dist ↑ | Entropy ↑ | | |
|-----------------------------------------------|--------|-------------------|--------------------|----------|-------------|------------|--------------|-------------|--------------|-------|
| Similarity | B-2 | B-4 | N-2 | N-4 | D-1 | D-2 | E-4 | | | |
| Pretrained (Zhang et al., 2020) | 0% | 51.40% | 1.63% | 0.42% | 0.02 | 0.02 | 6.62% | 3.67% | 7.62% 34.44% | 10.15 |
| Fine-tuning | 100% | 75.21% | 37.38% 25.77% 5.80 | 6.30 | 27.71% | 24.43% | 7.93% 38.20% | 11.28 | | |
| Soft Prompt-tuning (Lester et al., 2021) | 0.008% | 62.69% | 18.01% | 9.50% | 2.72 | 2.87 | 16.53% | 13.29% | 6.77% 32.19% | 10.96 |
| Prefix-tuning (Li and Liang, 2021) | 6.2% | 66.89% | 27.18% 16.73% 4.35 | 4.63 | 21.38% | 18.56% | 7.60% 36.88% | 11.25 | | |
| Controlled DialogPrompt (Embedding) | 8.3%1 | 61.16% | 13.01% | 5.12% | 1.89 | 1.96 | 14.84% | 10.28% | 5.21% 26.45% | 10.82 |
| Controlled DialogPrompt (MLP) | 6.2% | 64.96% | 26.82% 17.09% 4.25 | 4.54 | 21.40% | 18.47% | 7.85% 37.58% | 11.18 | | |
| Controlled DialogPrompt (2-layer Transformer) | 5.0% | 66.34% | 31.85% 21.67% 5.00 | 5.40 | 24.20% | 21.16% | 7.85% 37.86% | 11.24 | | |
Controlled DP (Deep) 22.7% 28.0%
Prefix-tuning 26.7% 8.0%
Neutral 50.7% 64.0%
Controlled DP (Deep) 29.3% 41.3% Controlled DP (Shallow) 21.3% 9.3% Neutral 49.3% 49.3%
Table 4: Human evaluation on Focus dataset. "Controlled DP (Deep)" represents Controlled DialogPrompt with 2-layer transformer decoder as the prompt module.
"Controlled DP (Shallow)" represents Controlled DialogPrompt on the embedding layer. "Neutral" means that there is no preference between the two answers according to the annotators.
| Methods | Attribute Relevancy | Consistency |
|-------------------------|-----------------------|---------------|
| Controlled DP (Deep) | 30.7% | 32.0% |
| Soft Prompt-tuning | 20.0% | 20.0% |
| Neutral | 49.3% | 48.0% |
| Controlled DP (Deep) | 25.3% | 37.3% |
| Prefix-tuning | 16.0% | 16.0% |
| Neutral | 58.7% | 46.7% |
| Controlled DP (Deep) | 34.7% | 38.7% |
| Controlled DP (Shallow) | 9.3% | 25.3% |
| Neutral | 56.0% | 36.0% |
## 5.2 User'S Persona
Table 2 shows that our model displays advantages over other prompting methods in terms of response quality, which shows a promising sign that controlled DP can be adapted to more challenging document control scenarios. Note that the difference in BLEU-2 is more pronounced for Focus compared to DailyDialog, as Focus is more complicated and uses sentences as the attribute rather than labels.
Although controlled DP methods perform slightly lower than Prefix-tuning on the similarity scores with given user's persona and Entropy-4 values, we find it to be highly consistent with the previous conversation history upon human evaluation (Table 4).
Similar results are observed with FoCus (Jang et al.,
2021) where models with high generation abilities do not always ensure high grounding abilities.
In addition, the difference between static/instancespecific deep prompts and static/instance-specific shallow prompts emphasizes the direct impact of deep prompts in complex tasks. Fine-tuning performs the best, but with approximately 20X more tunable parameters.
## 6 Conclusion And Future Work
In summary, we presented a novel prompting technique, conditioned on a dialogue attribute (persona or intent), for controlled dialogue generation. The prompting module requires only 5%-6% of the total number of parameters, which allows the storage of several fined-tuned prompting modules for different dialogue generation tasks at a fraction of the cost of a full dialogue model.
However, Controlled DialogPrompt currently studies conditioning on simple control attribute sentences like the user's persona and the work can be extended to more extensive and complex sentences such as background knowledge documents to further evaluate the controlled prompt's encoding capabilities. Additionally, combining multiple Controlled DialogPrompts on several control attributes and automatically triggering various dialogue skills is an interesting and unexplored direction.
## Limitations
In our current experiments, prompt-based methods are primarily storage-efficient or parameterefficient solutions. Since these methods all require backpropagation to the bottom layer, the training time of prompt-based methods are closely resembles that of traditional fine-tuning approach.
## Acknowledgements
This research was funded by Huawei Canada and the National Sciences and Engineering Research Council of Canada. Resources used in preparing this research at the University of Waterloo were provided by the province of Ontario and the government of Canada through CIFAR and companies sponsoring the Vector Institute.
## References
Abhaya Agarwal and Alon Lavie. 2007. Meteor: An automatic metric for mt evaluation with high levels of correlation with human judgments. Proceedings of WMT-08.
Eyal Ben-David, Nadav Oved, and Roi Reichart. 2021.
Pada: A prompt-based autoregressive approach for adaptation to unseen domains. *arXiv preprint* arXiv:2102.12206.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing* systems, 33:1877–1901.
Jordan Clive, Kris Cao, and Marek Rei. 2021. Control prefixes for text generation. *arXiv preprint* arXiv:2110.08329.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–
4186.
George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In Proceedings of the second international conference on Human Language Technology Research, pages 138–145.
Wanyu Du and Yangfeng Ji. 2021. Sidecontrol: Controlled open-domain dialogue generation via additive side networks. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2175–2194.
Tianyu Gao, Adam Fisch, and Danqi Chen. 2021.
Making pre-trained language models better few-shot learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 3816–3830.
Xiaodong Gu, Kang Min Yoo, and Sang-Woo Lee.
2021. Response generation with context-aware prompt learning. *arXiv preprint arXiv:2111.02643*.
Demi Guo, Alexander M Rush, and Yoon Kim. 2021.
Parameter-efficient transfer learning with diff pruning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4884–4896.
Prakhar Gupta, Shikib Mehri, Tiancheng Zhao, Amy Pavel, Maxine Eskenazi, and Jeffrey P Bigham. 2019.
Investigating evaluation of open-domain dialogue systems with human generated multiple references.
In *Proceedings of the 20th Annual SIGdial Meeting* on Discourse and Dialogue, pages 379–391.
Yun He, Steven Zheng, Yi Tay, Jai Gupta, Yu Du, Vamsi Aribandi, Zhe Zhao, YaGuang Li, Zhao Chen, Donald Metzler, et al. 2022. Hyperprompt: Prompt-based task-conditioning of transformers. In International Conference on Machine Learning, pages 8678–8690.
PMLR.
Yoonna Jang, Jungwoo Lim, Yuna Hur, Dongsuk Oh, Suhyune Son, Yeonsoo Lee, Donghoon Shin, Seungryong Kim, and Heuiseok Lim. 2021. Call for customized conversation: Customized conversation grounding persona and knowledge. *arXiv preprint* arXiv:2112.08619.
Zhengbao Jiang, Frank F Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know? *Transactions of the Association for* Computational Linguistics, 8:423–438.
Feihu Jin, Jinliang Lu, Jiajun Zhang, and Chengqing Zong. 2022. Instance-aware prompt learning for language understanding and generation. *arXiv preprint* arXiv:2201.07126.
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021.
The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045–3059.
Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and William B Dolan. 2016. A diversity-promoting objective function for neural conversation models. In *Proceedings of the 2016 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119.
Margaret Li, Jason Weston, and Stephen Roller. 2019.
Acute-eval: Improved dialogue evaluation with optimized questions and multi-turn comparisons. arXiv preprint arXiv:1909.03087.
Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning:
Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582–
4597.
Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. Dailydialog: A manually labelled multi-turn dialogue dataset.
Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In *Text summarization* branches out, pages 74–81.
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Tam, Zhengxiao Du, Zhilin Yang, and Jie Tang. 2022. P-tuning:
Prompt tuning can be comparable to fine-tuning across scales and tasks. In *Proceedings of the 60th* Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 61–68, Dublin, Ireland. Association for Computational Linguistics.
Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021. Gpt understands, too. *arXiv preprint arXiv:2103.10385*.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311–318.
Jing Qian, Li Dong, Yelong Shen, Furu Wei, and Weizhu Chen. 2022. Controllable natural language generation with contrastive prefixes. In *Findings of the Association for Computational Linguistics: ACL 2022*,
pages 2912–2924.
Guanghui Qin and Jason Eisner. 2021. Learning how to ask: Querying lms with mixtures of soft prompts.
In *Proceedings of the 2021 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5203–5212.
Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language understanding by generative pre-training.
Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Eric Michael Smith, Y-Lan Boureau, et al. 2021.
Recipes for building an open-domain chatbot. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 300–325.
Timo Schick and Hinrich Schütze. 2021a. Exploiting cloze-questions for few-shot text classification and natural language inference. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 255–269.
Timo Schick and Hinrich Schütze. 2021b. It's not just size that matters: Small language models are also fewshot learners. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2339–2352.
Taylor Shin, Yasaman Razeghi, Robert L Logan IV,
Eric Wallace, and Sameer Singh. 2020. Autoprompt:
Eliciting knowledge from language models with automatically generated prompts. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4222–4235.
Zhuofeng Wu, Sinong Wang, Jiatao Gu, Rui Hou, Yuxiao Dong, VG Vydiswaran, and Hao Ma. 2022. Idpg:
An instance-dependent prompt generation method.
arXiv preprint arXiv:2204.04497.
Kexin Yang, Dayiheng Liu, Wenqiang Lei, Baosong Yang, Mingfeng Xue, Boxing Chen, and Jun Xie.
2022. Tailor: A prompt-based approach to attributebased controlled text generation. arXiv preprint arXiv:2204.13362.
Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. 2018.
Generating informative and diverse conversational responses via adversarial information maximization.
Advances in Neural Information Processing Systems, 31.
Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and William B Dolan. 2020. Dialogpt: Largescale generative pre-training for conversational response generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 270–278.
Tony Z Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. *arXiv* preprint arXiv:2102.09690.
## A Experimental Setups A.1 Datasets A.1.1 Label Control
Dailydialog (Li et al.) is a widely used daily conversation dataset that provides a dialogue act for every sentence. Dialogue acts indicate the communication function of each utterance and there are 4 types of dialogue acts: inform, questions, directives, and commissives. We follow the standard split of the original Dailydialog dataset, limit the conversation context to a maximum of four sentences, and remove any sentence that has more than 25 words to maintain computation efficiency. As a result, we obtain 61,669 training samples, 5769 validation samples, and 5453 testing samples.
We additionally use the Dailydialog multireference dataset from (Gupta et al., 2019) during metrics computation to mitigate the one-to-many possible response problem.
## A.1.2 Document Control
FoCus(Jang et al., 2021) is a persona-grounded dataset. Unlike DailyDialog, FoCus aims to build a dialogue agent that provides informative answers based on the user's persona about the geographical landmark; therefore, it is more content-rich and challenging. The selected knowledge candidate sentence is prepended to the conversation and regarded as part of the input.
The input to the base model has the template:
"*Knowledge: [Selected knowledge sentence] Conversation: [Previous utterances]*". The persona sentences are given as the input to the prompt encoder. In fine-tuning (no prompt encoder) and static prompt methods (the prompt encoder does not take attribute information), the persona sentences are concatenated together with the knowledge and previous utterances and form the input to base model as "*Knowledge: [Selected knowledge sentence]*
Persona: [User's Personas] Conversation: [Previous utterances]"
Since the grounded answer of the test set has not been released, we shuffle and split the original training set to construct our training samples and validation samples (70% training and 30% validation) and the original validation set as our testing samples. We further restrict conversation context to at most three sentences because the bot's utterances are much longer than human's utterances. In total, we have 49,198 samples for training, 21,134 samples for validation, and 5,639 samples for testing.
## A.2 Baseline Models
To demonstrate better performance of Controlled DialogPrompt, we compare our model with other competitive prompt-tuning techniques.
- **Pretrained DialoGPT** (Zhang et al., 2020):
DialoGPT-large has shown its superiority for a wide range of open-domain dialogue generation tasks by pretraining on a massive corpus.
- **Fine-tuning**: Fine-tuning, though memoryconsuming, is the most straightforward and prevalent adaptation technique to downstream tasks. Fine-tuning has been considered as the benchmark for all light-weight fine-tuning methods including prompt-tuning.
- **Soft Prompt-tuning (static shallow prompt)**
(Lester et al., 2021): The method applies a static task prompt to the embedding of every input. We experiment with different lengths
(length 10 and length 50) of the static shallow prompt and use the better length 50.
- **Prefix-tuning (static deep prompt)** (Li and Liang, 2021): Prefix prompts are added to every layer during computation. We experiment with different lengths (length 10 and length 50) and we report the better prompt result with length 10.
- **Controlled DP - Embedding (instancespecific shallow prompt)**: The shallow version of our method with controlled prompts added only in the embedding layer. It is used to demonstrate the expressiveness of the deep Controlled DialogPrompt.
- **Controlled DP - MLP / 2-layer Transformer**
(instance-specific deep prompt): We explore different prompt encoder structures, among which MLP prompt encoder shares the frozen pretrained transformer embedding layer to reduce tunable parameters.
During our experiments, we utilize DialoGPTlarge as the frozen backbone model and train all models on two Nvidia V100 32G GPUs. We train models for 10 epochs with training batch size 2 per GPU and learning rate of 1e-4 except for finetuning, which is set to 5e-5 in the FoCus dataset and 1e-5 in the Dailydialog dataset. Models that achieve the lowest validation losses are saved during the training. We perform optimization with the AdamW optimizer with maximum gradient clipping set to 1. For decoding, we choose top-k sampling provided in Huggingface where k=10 and temperature T=0.9. The result is generated with random seed=42.
## B Evaluation Methods B.1 Automated Metrics
For controllability, we follow (Du and Ji, 2021) to evaluate whether models can customize responses based on specified control attributes. (1) For label control, we fine tune an independent BERT classifier (Devlin et al., 2019) which can take a sentence and predict its dialogue intention. We train the classifier on the same training set and achieve 83.23%
accuracy on the test set. (2) For document control, we also compute the cosine similarity between the Glove embedding of the generated responses and grounded persona documents. As FoCus dataset contains human-annotated labels for used persona sentences, only those that are actually used are evaluated. Detailed training information is provided in
(Du and Ji, 2021).
Regarding response quality, we utilize different variants of n-gram based metrics such as BLEU
(B-2, B-4) (Papineni et al., 2002), NIST (N-2, N-4)
(Doddington, 2002), ROUGE-L (Lin, 2004), METEOR (Agarwal and Lavie, 2007) to evaluate fluency and adequacy and distinct n-gram distribution metrics such as Dist (D-1, D-2) (Li et al., 2016)
and Entropy (E-4) (Zhang et al., 2018) to measure the diversity of the response. We follow the metrics setting in (Zhang et al., 2020).
## B.2 Human Evaluation
Human evaluation on the other hand is used to measure consistency between dialogue context and response and attribute controllability. Similar to ACUTE-Eval in (Li et al., 2019; Roller et al., 2021),
we adopt single-turn pairwise evaluations to prevent annotator bias in numerical score evaluation.
We compare Controlled DialogPrompt with every other prompt-tuning methods, covering static shallow prompt, static deep prompt and instancespecific shallow prompt. In each comparison group, there are two questions designed separately to assess response's dialogact/personality controllability as well as consistency to the previous conversation context. For dialogact controllability, we have the question: *Which response do you think is more* related to the given dialog act (intention)?. For personality controllability, we set the question as Which response do you think is more related to the personality?. For the consistency to the previous conversation context, we set the question as *Which* response do you think is more consistent to the above conversation context? We sample 15 conversations from each comparison group and there are 5 conversations overlapped across different groups.
Annotators are industrial NLP researchers and NLP
graduate students. We collected 900 annotations in total.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 6 A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
Code from pretrained model and evaluation metrics; Pretrained models;
✓ B1. Did you cite the creators of artifacts you used?
Section 4 and Appendix
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section 4 and Appendix
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
We mentioned we use the datasets and models following the existing papers. Section 4 and Appendix.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Appendix A
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix A.1
## C ✓ **Did You Run Computational Experiments?** Section 4 And Section 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 5 and table result The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix A.2
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Appedix A.2
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix B.1 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Section 4.2
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Appendix B.2
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Appendix B.2
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Appendix B.2 D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
maheshwari-etal-2023-open | Open-World Factually Consistent Question Generation | https://aclanthology.org/2023.findings-acl.151 | Question generation methods based on pre-trained language models often suffer from factual inconsistencies and incorrect entities and are not answerable from the input paragraph. Domain shift {--} where the test data is from a different domain than the training data - further exacerbates the problem of hallucination. This is a critical issue for any natural language application doing question generation. In this work, we propose an effective data processing technique based on de-lexicalization for consistent question generation across domains. Unlike existing approaches for remedying hallucination, the proposed approach does not filter training data and is generic across question-generation models. Experimental results across six benchmark datasets show that our model is robust to domain shift and produces entity-level factually consistent questions without significant impact on traditional metrics. | # Open-World Factually Consistent Question Generation
Himanshu Maheshwari, Sumit Shekhar, Apoorv Saxena, Niyati Chhaya Adobe Research, India
{himahesh, sushekha, apoorvs, nchhaya} @ adobe.com
## Abstract
Question generation methods based on pretrained language models often suffer from factual inconsistencies and incorrect entities and are not answerable from the input paragraph.
Domain shift - where the test data is from a different domain than the training data - further exacerbates the problem of hallucination. This is a critical issue for any natural language application doing question generation. In this work, we propose an effective data processing technique based on de-lexicalization for consistent question generation across domains. Unlike existing approaches for remedying hallucination, the proposed approach does not filter training data and is generic across question-generation models. Experimental results across six benchmark datasets show that our model is robust to domain shift and produces entity-level factually consistent questions without significant impact on traditional metrics.
## 1 Introduction
Question generation is the task of generating a question that is relevant to and answerable by a piece of text (Krishna and Iyyer (2019), Chen et al. (2020),
Zhu and Hauff (2021), Ushio et al. (2022), ). It is an important task in language generation (Fabbri et al. (2020), Yu et al. (2020b)), education (Wang et al. (2022)), and information retrieval (Yu et al.
(2020a)). A critical metric for question generation is factual consistency, i.e., the question has facts that are derivable from the input paragraph. This work proposes novel methods to improve entitylevel factual consistency while agnostic to model and underlying training data. Nan et al. (2021) and Xiao and Carenini (2022) solve a similar problem for summarization. However, to the best of our knowledge, no work addresses the issue of entitylevel factual inconsistency for question generation.
Nema and Khapra (2018) have shown that name entities are essential for a question's answerability. The presence of wrong entities may make the 2390 question nonsensical and unanswerable. Table 1 shows entity-level factual inconsistency in question generation by a fine-tuned PEGASUS (Zhang et al., 2019) model. In the first entity, "Kim Jong Un", and in the second example, "Chicago" are hallucinated.
Unlike previous work in the summarization field
(Nan et al. (2021), Liu et al. (2021a), Xiao and Carenini (2022)), our work is independent of the model or training process. We also do not reduce dataset size by filtering. Instead, we preprocess datasets to force the model to generate questions faithful to the input using strategies of de-lexicalization and multi-generation and recommend the best strategy. The proposed method improves the factual consistency by 84−100% across multiple datasets while having minimal impact on traditional performance metrics.
We experimented with two popular language models viz. PEGASUS-large and BART-large
(Lewis et al., 2020). Our proposed approach consistently performs better for both the language models than normal finetuning. We also compare our approach to recent methods for addressing hallucination in summarization, and our results showed significantly better performance.
## 2 Related Work
Early work proposed rule-based approaches to generate questions (Labutov et al. (2015)). Recent work using neural models for high-quality generation (Du and Cardie (2017), Du et al. (2017), Wang et al. (2022)). Several works have aimed to control hallucination in different NLP tasks like summarization (Cao et al. (2018), Song et al. (2020), Zhu et al. (2021)), Wu et al. (2021)), generative question answering (Bi et al. (2019), Fan et al. (2019a), Su et al. (2022)), data to text (Nie et al. (2019), Wang et al. (2020), Wang et al. (2021)), and machine translation (Xu et al. (2019), Wang and Sennrich
(2020), Zhou et al. (2021)).
Input: As the president of South Korea, she turned out to be heavily influenced by a cult and giving them access to political documents and basically letting that cult run the country through her. One of her "speech-writers" was a religious figure in this cult that totally exploited the president, who pretty much only got elected because her father had been president, using her for money and power and wealth. The people of South Korea got upset when they learned that the real power that was running their country was the religious group that their president belonged to.
Normal Finetuning: What did Kim Jong Un do wrong? Rare word delexicalization + Multiple (Ours): Why did the people of South Korea vote out the president of the South Korea?
Input: As I understand it, there's no established *process* for removing the mayor from his post. If he doesn't resign, there's no mechanism for removing him. Really, I don't understand how this guy hasn't lost his job yet.
Normal Finetuning: Why hasn't the mayor of Chicago been fired yet? Rare word delexicalization + Multiple (Ours): Why hasn't the mayor been fired yet?
Table 1: Qualitative Examples. For detailed analysis refer to section 6.4.
Previous work has explored entity-based delexicalization in settings like adapting parser for a new language (Zeman and Resnik, 2008), valid reasoning chains in multi-hop question answering
(Jhamtani and Clark, 2020), and eliminating diachronic biases in fake news detection (Murayama et al., 2021).
## 3 Methodology
The objective is to generate relevant and entitylevel factually consistent questions which generalise across domains. For this, we propose novel de-lexicalization strategies combined with a multigeneration strategy. De-lexicalization involves replacing named entities with a special token or rare words during training/inference and replacing the original word after generation. The model's vocabulary is expanded to account for the special tokens used in the de-lexicalization strategies.
De-lexicalization Strategies During Training
[Name i] Token: This strategy replaces the named entity with a token [Name i], where i represents the order of the first appearance of the entity in the paragraph and in the question.
[Name i] Token with Push: This strategy is similar to the previous one. The difference is that if the question has a named entity that is not present in the input paragraph, we replace it with [Name j],
where the j is a random number between 0 and the total number of named entities in the input paragraph. The intuition here is that we are pushing or explicitly asking the model to generate a named entity already present in the input paragraph.
[Multiple i] Token: The previous two strategies treat all the named entities as similar. In contrast, in this approach, the entity is replaced with its corresponding semantic tags, followed by an integer representing its order of appearance in the paragraph followed by the question. A semantic tag specifies if an entity is name, organization, loca-
## Tion, Cardinal, Etc.
[Multiple i] Token with Push and Delete: This approach is similar to *[Name i] Token with Push* approach with multiple entity types. However, if the question consists of a named entity type not present in the paragraph, it is deleted.
Rare Word token: This strategy de-lexicalizes only the questions. Here we replace the named entities in questions that do not occur in the input paragraph with a rare word. A rare word is a word that occurs 2 to 5 times in the entire training corpus.
If an entity occurs in the input paragraph, it is left as it is.
Examples showing different de-lexicalization strategies are present in the Appendix.
Entity Replacement: During testing, from the generated questions, the entities are replaced using a dictionary look-up of the special token. We treat a output as hallucinated if the special token has no corresponding named entity.
Multi-generation: Here, we generate multiple questions during inference by selecting the top five beams from the output of the language model and selecting the one that is factually consistent and has the least perplexity. If no questions are consistent, the generation with the least perplexity is chosen.
| Dataset | Train | Dev | Test |
|-------------------|---------|-------|--------|
| ELI5 | 150,000 | 6,925 | 10,000 |
| AskEconomics | - | - | 10,067 |
| AskLegal | - | - | 98 |
| MS Marco | - | - | 1,043 |
| Natural Questions | - | - | 5,000 |
| SciQ | - | - | 884 |
Table 2: Statistics for different datasets
## 4 Example Of Different De-Lexicalization Strategies
Table 3 illustrates different de-lexicalization strategies proposed in the paper. The question contains the named entity "U.S.," which is not present in the
Original
Input: One way would be to allow unlimited deductions of savings and tax withdrawals as income . So if you buy $ 50,000 in bonds in 2017 , you deduct all that from your income . Then you sale those bonds for $ 55,000 in 2018 , you would add that $ 55,000 to your 2018 income and it 's taxed like any other
income . The simplest way to implement that would be to eliminate penalities and caps on IRA accounts .Said my whole question , do n't know what else to say .
Question: How can the U.S. tax system be reformed?
[Name i] Token
Input: [Name 0] way would be to allow unlimited deductions of savings and tax withdrawals as income . So if you buy $ [Name 1] in bonds in [Name 2] ,
you deduct all that from your income . Then you sale those bonds for $ [Name 3] in [Name 4] , you would add that $ [Name 3] to your [Name 4] income and
it 's taxed like any other income . The simplest way to implement that would be to eliminate penalities and caps on IRA accounts .Said my whole question , do n't know what else to say .
Question: How can the [Name 5] tax system be reformed?
[Name i] Token with Push
Input: [Name 0] way would be to allow unlimited deductions of savings and tax withdrawals as income . So if you buy $ [Name 1] in bonds in [Name 2] ,
you deduct all that from your income . Then you sale those bonds for $ [Name 3] in [Name 4] , you would add that $ [Name 3] to your [Name 4] income and
it 's taxed like any other income . The simplest way to implement that would be to eliminate penalities and caps on IRA accounts .Said my whole question , do
n't know what else to say .
Question: How can the [Name 3] tax system be reformed?
[Multiple i] Token
Input: [CARDINAL 0] way would be to allow unlimited deductions of savings and tax withdrawals as income . So if you buy $ [MONEY 0] in bonds in
[DATE 0] , you deduct all that from your income . Then you sale those bonds for $ [MONEY 1] in [DATE 1] , you would add that $ [MONEY 1] to your [DATE 1] income and it 's taxed like any other income . The simplest way to implement that would be to eliminate penalities and caps on IRA accounts .Said my whole question , do n't know what else to say .
Question: How can the [GPE 0] tax system be reformed?
[Multiple i] Token with Push and Delete
Input: [CARDINAL 0] way would be to allow unlimited deductions of savings and tax withdrawals as income . So if you buy $ [MONEY 0] in bonds in
[DATE 0] , you deduct all that from your income . Then you sale those bonds for $ [MONEY 1] in [DATE 1] , you would add that $ [MONEY 1] to your
[DATE 1] income and it 's taxed like any other income . The simplest way to implement that would be to eliminate penalities and caps on IRA accounts .Said
my whole question , do n't know what else to say .
Question: How can the tax system be reformed?
Rare word Token
Input: One way would be to allow unlimited deductions of savings and tax withdrawals as income . So if you buy $ 50,000 in bonds in 2017 , you deduct all that from your income . Then you sale those bonds for $ 55,000 in 2018 , you would add that $ 55,000 to your 2018 income and it 's taxed like any other
income . The simplest way to implement that would be to eliminate penalities and caps on IRA accounts .Said my whole question , do n't know what else to
say .
Question: How can the aster tax system be reformed?
Table 3: Examples of different de-lexicalization strategies. For details refer to section 4
Approach w/o Multi-Generation C.S. ↑ R-1 ↑ R-2 ↑ R-L ↑ PPL ↓ Pne Pwne ↓ Recall ↑ Precision ↑ F1 ↑
Fine-tuned PEGASUS 0.6748 29.4926 11.5852 27.5309 86.0441 26.6800 42.8036 0.3779 0.4067 0.3918 [Name i] Token 0.6504 28.7351 11.1658 26.8213 97.1283 18.1800 35.4235 0.2289 0.2885 0.2553
[Name i] Token with Push 0.6544 28.8616 11.2306 27.0018 104.0066 21.0700 17.8927 0.2862 0.3578 0.3180
[Multiple i] Token 0.6523 28.8050 11.1888 26.9392 96.2436 21.7700 35.1860 0.2718 0.3491 0.3056 [Multiple i] Token with Push and Delete 0.6564 28.8258 11.1455 26.9559 97.1164 19.2800 20.2282 0.2962 0.3788 0.3325
Rare Word Token **0.6773 29.7333 11.8060 27.7603** 85.4832 19.4300 10.1390 **0.4477** 0.5107 0.4771
Approach with Multi-Generation
Fine-tuned PEGASUS 0.6672 28.9704 10.7617 26.8001 41.7799 23.0500 5.1600 0.3986 0.4368 0.4168
[Name i] Token 0.6444 28.1856 10.2253 26.0171 46.2771 14.8200 2.3200 0.2552 0.3300 0.2878 [Name i] Token with Push 0.6495 28.4518 10.3465 26.2775 44.5343 17.2600 1.4500 0.3084 0.3957 0.3466
[Multiple i] Token 0.6502 28.4184 10.3503 26.2616 45.5624 16.9100 3.0200 0.2977 0.3879 0.3369
[Multiple i] Token with Push and Delete 0.6513 28.5909 10.4508 26.4515 43.1499 15.6600 1.2600 0.3206 0.4137 0.3613
Rare Word Token 0.6691 29.1550 10.8146 26.9616 **40.2198** 18.4300 **0.6700 0.4477 0.5179 0.4802**
Spancopy (Base model: PEGASUS)
Without global relevance 0.6643 29.2871 11.3873 27.4839 94.9375 23.2300 27.1201 0.3775 0.4343 0.4039 With global relevance 0.6732 27.4178 10.0934 26.3062 93.6223 22.2900 28.5913 0.2466 0.6777 0.3617
Table 4: Results of various approaches on ELI5 dataset for PEGASUS model. C.S.: Cosine Similarity | R-1: Rouge 1 | R-2: Rouge 2 | R-L: Rouge l | PPL: Perplexity. For detailed analysis refer to section 6.4.
## Input.
In the *[Name i] Token* strategy, we replace all named entities with [Name i]. Do note that name entity, 55,000, and 2018 occur twice. Each occurrence is replaced with the same token, i.e., both occurrence of 55,000 is replaced with [Name 3]. Since the "U.S." does not occur in the input, we replace it with [Name 5]. Contrary to this, in the
[Name i] Token with Push strategy, we replace the U.S. with [Name 3], thereby pushing the model to be faithful to the source.
In the *[Multiple i] Token* strategy, instead of replacing named entities with a common [Name]
token, we replace them with their semantic token.
Thus, 55,000 is replaced with [MONEY 1] and so on. Like before, each occurrence is replaced with the same token. The U.S. is replaced with [GPE
0] as no entity of type GPE occurs in the input.
Contrary to this, the *[Multiple i] Token with Push* and Delete strategy deletes the entity "U.S." as no GPE-type entity exists in the input. If there were a GPE entity in input (not necessarily "U.S."), it would have been replaced with [GPE 0].
In the *Rare Word Token* strategy, the input is unchanged. Since the U.S. does not occur in input, it is replaced with a rare word (aster).
## 5 Datasets
We use the supervised ELI5 dataset (Fan et al.,
2019b) for training. To ensure that the data is of high quality, we remove all the samples where the answer is short (having less than 50 words), or the question does not have a question mark.
We use three publicly available datasets for evaluation across different domains, viz. MS
Marco (Bajaj et al., 2016), Natural Questions
(Kwiatkowski et al., 2019) and SciQ (Welbl et al., 2017). We also scraped r/AskLegal 1, and r/AskEconomics2for testing on finance and legal domains. Table 2 shows the statistics of the dataset.
## 6 Experiment And Analysis 6.1 Implementation Details
We use publicly available checkpoints of the language models and fine-tune them for 100k steps with a batch size of 12 and using the Adam optimizer (Kingma and Ba, 2014). The learning rate is set to 10−5, and the models are tested on the dev set every 10k steps. The best-performing model on the dev set is used. The model training takes approximately 6 hours on an Nvidia A100 40 GB
GPU. Following Nan et al. (2021) we use the Spacy library3to identify named entities.
## 6.2 Evaluation Metrics
We evaluate both the quality and factual consistency of the generated question. The quality is reported using Rouge-1, Rouge-2, Rouge-L (Lin, 2004) scores and cosine similarity between embedding (from *all-mpnet-base-v2* sentence transformer model (Reimers and Gurevych, 2019)) of generated questions and ground truth. We use the perplexity value suggested by Liu et al. (2021b), using a GPT-2 (Radford et al., 2019). To evaluate factual consistency, we use two metrics. The first metric quantifies the degree of hallucination with respect to the ground truth question. We use the precision, recall, and F1 score proposed by Nan et al. (2021).
More details about the exact implementation are in the appendix or in their paper. The second metric quantifies the degree of hallucination with respect to the input paragraph. This metric measures, out of all the questions that have named entities, what percentage of questions have named entities not present in the input. Let Nhne represent the number of generated questions with a named entity, and Nwne represent the number of generated questions with a wrong named entity. N*total* represents the total number of questions. Do note Ntotal ̸= Nhne, as we can have questions with no named entity in them. Then Nhne/N*total* ∗ 100 represents the percentage of questions having a named entity (Pne),
and Nwne/Nhne ∗ 100 represents the percentage of questions having the wrong named entity (Pwne). A system with a low Pwne value and a high F1 score reflects the system is not hallucinating. We want a system with high factual consistency without **significantly** affecting the quality of the questions as measured by the proposed metrics.
## 6.3 Baseline
We compare our results with the Spancopy method proposed by Xiao and Carenini (2022) for the summarization. We test with and without global relevance in Spancopy having PEGASUS as the base language model.
## 6.4 Results And Analysis
Due to space constraints, we only present results for PEGASUS-large in the main text. Results for BART-large can be found in the appendix.
Table 4 shows the results of the test set of the ELI5 dataset. The results indicate that the rare word de-lexicalization plus multiple generation approach performs much better than other methods. Compared to a normal fine-tuned PEGASUS model, the Pwne score decreases by about 98%, implying that the generated questions are faithful to the input text. Similarly, the F1 score increases by approximately 21%, implying that all the generated questions are faithful to ground truth. In contrast, decrements in other metric scores are less than 6.7%. Overall, rare word de-lexicalization plus multiple generation performs the best in terms of factual consistency and is comparable in other metrics.
| Approach | C.S. ↑ | R-1 ↑ | R-2 ↑ | R-L ↑ | PPL ↓ | Pne | Pwne ↓ | Recall ↑ | Precision ↑ | F1 ↑ |
|-----------------------------------------------------|----------|---------|---------|---------|----------|---------|----------|------------|---------------|--------|
| Dataset: MS Marco Normal Finetuned PEGASUS | 0.6844 | 37.5444 | 19.2335 | 36.0351 | 79.8135 | 30.9684 | 41.1765 | 0.3923 | 0.3097 | 0.3462 |
| Rare word delexicalization + Multiple | 0.6759 | 36.0823 | 17.0396 | 34.2419 | 29.6060 | 21.3806 | 0.6711 | 0.5391 | 0.5085 | 0.5234 |
| Spancopy without global relevance | 0.6959 | 37.8456 | 19.0003 | 36.5207 | 98.0016 | 27.5168 | 29.9652 | 0.3934 | 0.3153 | 0.3501 |
| Dataset: Natural Questions Normal Finetuned PEGASUS | 0.5230 | 27.0457 | 10.8578 | 25.8031 | 100.3024 | 72.2200 | 46.5522 | 0.2253 | 0.2089 | 0.2168 |
| Rare word delexicalization + Multiple | 0.5181 | 27.0811 | 10.7338 | 25.5182 | 39.3770 | 59.6000 | 7.3200 | 0.2739 | 0.2707 | 0.2723 |
| Spancopy without global relevance | 0.6305 | 12.2695 | 3.9743 | 11.0423 | 128.6204 | 73.3200 | 68.2488 | 0.0821 | 0.5031 | 0.1412 |
| Dataset: SciQ Normal Finetuned PEGASUS | 0.5469 | 18.2400 | 4.7044 | 16.4770 | 101.3655 | 10.0679 | 35.9551 | 0.2292 | 0.2083 | 0.2183 |
| Rare word delexicalization + Multiple | 0.5346 | 20.5115 | 4.5767 | 17.9713 | 31.9291 | 5.4299 | 0.1131 | 0.4500 | 0.4500 | 0.4500 |
| Spancopy without global relevance | 0.5613 | 18.8779 | 4.9120 | 17.0202 | 140.2532 | 8.1448 | 18.0556 | 0.3400 | 0.4400 | 0.3836 |
| Dataset: AskEconomics Normal Finetuned PEGASUS | 0.6250 | 34.3724 | 13.1196 | 32.1552 | 149.8675 | 36.4160 | 39.6890 | 0.3642 | 0.3860 | 0.3748 |
| Rare word delexicalization + Multiple | 0.6260 | 33.5555 | 12.3312 | 31.0241 | 62.5596 | 26.5223 | 0.6854 | 0.4555 | 0.4976 | 0.4756 |
| Spancopy without global relevance | 0.6222 | 27.3520 | 10.6528 | 25.3469 | 86.8229 | 35.0949 | 25.2194 | 0.3775 | 0.4114 | 0.3937 |
| Dataset: AskLegal Normal Finetuned PEGASUS | 0.5963 | 32.0084 | 9.7201 | 29.2130 | 104.9676 | 29.5918 | 41.3793 | 0.4583 | 0.4000 | 0.4272 |
| Rare word delexicalization + Multiple | 0.5943 | 29.8136 | 8.8872 | 26.8056 | 65.7854 | 18.3674 | 1.0204 | 0.6061 | 0.5818 | 0.5937 |
| Spancopy without global relevance | 0.5936 | 26.2488 | 9.2795 | 23.9778 | 102.6717 | 29.5918 | 27.5862 | 0.3698 | 0.4375 | 0.4008 |
The rare word de-lexicalization with multigeneration approach consistently performs better than all the other approaches for all the datasets.
Table 5 compares rare word delexicalization + multiple generation with a normal finetuned PEGASUS and Spancopy without global relevance across different datasets. Detailed results for all the approaches across all the datasets are in the appendix.
From the table, it can be seen that rare word delexicalization with multiple generations solves the issue of entity-level inconsistency without negative impact on different metrics. The model was just trained for the ELI5 dataset and was directly used for other datasets. Domain shift exacerbates the issue of entity hallucination, as shown by the Pwne value for a normal fine-tuned PEGASUS model, which is usually higher in the presence of domain shift. Thus, our proposed approach works across domains without re-training.
We see that the Pne value decreases across all the datasets for rare word delexicalization with multiple generations. However, this is not wrong. A
question without a named entity can still be a valid question (Nema and Khapra, 2018).
Table 1 shows qualitative examples. In the first example, the fine-tuned PEGASUS produces the entity Kim Jong Un that is unfaithful to the source and is entirely unrelated to South Korea. Chicago is hallucinated in the second example. In both examples, our proposed approach generates meaningful and faithful questions. Our approach produces a question with no named entity in the second example, yet the question is meaningful and faithful to the source. This further reinforces our claim that a question without a named entity can still be valid.
More outputs can be found in the appendix.
Our approach performs better than the Spancopy architecture (both with and without global relevance). This shows that simple de-lexicalization with multiple generations is better than sophisticated architecture.
## 7 Conclusion
In this paper, we study the entity-level factual inconsistency in question generation. Our proposed strategy, rare-word de-lexicalization with multigeneration, improve consistency without significantly affecting traditional metrics across data domains. Extensive experimental results further reinforce our claim.
## 8 Limitations
The Pne value decreased in all datasets. While this is not problematic for question generation, where the presence of a named entity is not always necessary, it does pose an issue for NLG tasks where the inclusion of named entities is important. In these cases, we recommend using alternative techniques that we have proposed. Additionally, using delexicalization and over-generation in our approach leads to a high training and inference time.
## References
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, et al. 2016. Ms marco: A human generated machine reading comprehension dataset. *arXiv preprint* arXiv:1611.09268.
Bin Bi, Chen Wu, Ming Yan, Wei Wang, Jiangnan Xia, and Chenliang Li. 2019. Incorporating external knowledge into machine reading for generative question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),
pages 2521–2530, Hong Kong, China. Association for Computational Linguistics.
Ziqiang Cao, Furu Wei, Wenjie Li, and Sujian Li. 2018.
Faithful to the original: Fact-aware neural abstractive summarization. In Proceedings of the ThirtySecond AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI'18/IAAI'18/EAAI'18. AAAI Press.
Yu Chen, Lingfei Wu, and Mohammed J. Zaki. 2020.
Reinforcement learning based graph-to-sequence model for natural question generation. In Proceedings of the 8th International Conference on Learning Representations.
Xinya Du and Claire Cardie. 2017. Identifying where to focus in reading comprehension for neural question generation. In *Proceedings of the 2017 Conference* on Empirical Methods in Natural Language Processing, pages 2067–2073, Copenhagen, Denmark. Association for Computational Linguistics.
Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to ask: Neural question generation for reading comprehension. pages 1342–1352.
Alexander Fabbri, Patrick Ng, Zhiguo Wang, Ramesh Nallapati, and Bing Xiang. 2020. Template-based question generation from retrieved sentences for improved unsupervised question answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4508–4513, Online. Association for Computational Linguistics.
Angela Fan, Claire Gardent, Chloé Braud, and Antoine Bordes. 2019a. Using local knowledge graph construction to scale Seq2Seq models to multi-document inputs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4186–4196, Hong Kong, China. Association for Computational Linguistics.
Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. 2019b. ELI5:
Long form question answering. In Proceedings of
the 57th Annual Meeting of the Association for Computational Linguistics, pages 3558–3567, Florence, Italy. Association for Computational Linguistics.
Harsh Jhamtani and Peter Clark. 2020. Learning to explain: Datasets and models for identifying valid reasoning chains in multihop question-answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 137–150, Online. Association for Computational Linguistics.
Diederik Kingma and Jimmy Ba. 2014. Adam: A
method for stochastic optimization. International Conference on Learning Representations.
Kalpesh Krishna and Mohit Iyyer. 2019. Generating question-answer hierarchies. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2321–2334, Florence, Italy.
Association for Computational Linguistics.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. *Transactions of the Association for Computational Linguistics*, 7:452–466.
Igor Labutov, Sumit Basu, and Lucy Vanderwende.
2015. Deep questions without deep understanding.
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 889–898, Beijing, China. Association for Computational Linguistics.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain.
Association for Computational Linguistics.
Wei Liu, Huanqin Wu, Wenjing Mu, Zhen Li, Tao Chen, and Dan Nie. 2021a. Co2sum:contrastive learning for factual-consistent abstractive summarization.
Yixin Liu, Graham Neubig, and John Wieting. 2021b.
On learning text style transfer with direct rewards. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,
pages 4262–4273, Online. Association for Computational Linguistics.
Taichi Murayama, Shoko Wakamiya, and Eiji Aramaki.
2021. Mitigation of diachronic bias in fake news detection dataset. In *Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)*,
pages 182–188, Online. Association for Computational Linguistics.
Feng Nan, Ramesh Nallapati, Zhiguo Wang, Cicero Nogueira dos Santos, Henghui Zhu, Dejiao Zhang, Kathleen McKeown, and Bing Xiang. 2021. Entitylevel factual consistency of abstractive text summarization. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2727–2733, Online. Association for Computational Linguistics.
Preksha Nema and Mitesh M. Khapra. 2018. Towards a better metric for evaluating question generation systems. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3950–3959, Brussels, Belgium. Association for Computational Linguistics.
Feng Nie, Jin-Ge Yao, Jinpeng Wang, Rong Pan, and Chin-Yew Lin. 2019. A simple recipe towards reducing hallucination in neural surface realisation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2673–
2679, Florence, Italy. Association for Computational Linguistics.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Nils Reimers and Iryna Gurevych. 2019. Sentence-bert:
Sentence embeddings using siamese bert-networks.
In *Proceedings of the 2019 Conference on Empirical* Methods in Natural Language Processing. Association for Computational Linguistics.
Kaiqiang Song, Logan Lebanoff, Qipeng Guo, Xipeng Qiu, X. Xue, Chen Li, Dong Yu, and Fei Liu. 2020.
Joint parsing and generation for abstractive summarization. In *AAAI*.
Dan Su, Xiaoguang Li, Jindi Zhang, Lifeng Shang, Xin Jiang, Qun Liu, and Pascale Fung. 2022. Read before generate! faithful long form question answering with machine reading. In *Findings of the Association for* Computational Linguistics: ACL 2022, pages 744–
756, Dublin, Ireland. Association for Computational Linguistics.
Asahi Ushio, Fernando Alva-Manchego, and Jose Camacho-Collados. 2022. Generative language models for paragraph-level question generation.
Chaojun Wang and Rico Sennrich. 2020. On exposure bias, hallucination and domain shift in neural machine translation. In *Proceedings of the 58th Annual* Meeting of the Association for Computational Linguistics, pages 3544–3552, Online. Association for Computational Linguistics.
Peng Wang, Junyang Lin, An Yang, Chang Zhou, Yichang Zhang, Jingren Zhou, and Hongxia Yang.
2021. Sketch and refine: Towards faithful and informative table-to-text generation. In *Findings of* the Association for Computational Linguistics: ACLIJCNLP 2021, pages 4831–4843, Online. Association for Computational Linguistics.
Xu Wang, Simin Fan, Jessica Houghton, and Lu Wang.
2022. Towards process-oriented, modular, and versatile question generation that meets educational needs.
Zhenyi Wang, Xiaoyang Wang, Bang An, Dong Yu, and Changyou Chen. 2020. Towards faithful neural table-to-text generation with content-matching constraints. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 1072–1086, Online. Association for Computational Linguistics.
Johannes Welbl, Nelson F. Liu, and Matt Gardner. 2017.
Crowdsourcing multiple choice science questions.
ArXiv, abs/1707.06209.
Zeqiu Wu, Michel Galley, Chris Brockett, Yizhe Zhang, Xiang Gao, Chris Quirk, Rik Koncel-Kedziorski, Jianfeng Gao, Hannaneh Hajishirzi, Mari Ostendorf, and Bill Dolan. 2021. A controllable model of grounded response generation. Proceedings of the AAAI Conference on Artificial Intelligence, 35:14085–
14093.
Wen Xiao and Giuseppe Carenini. 2022. Entity-based spancopy for abstractive summarization to improve the factual consistency.
Weijia Xu, Xing Niu, and Marine Carpuat. 2019. Differentiable sampling with flexible reference word order for neural machine translation. In *Proceedings of the* 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2047–2053, Minneapolis, Minnesota. Association for Computational Linguistics.
Qian Yu, Lidong Bing, Qiong Zhang, Wai Lam, and Luo Si. 2020a. Review-based question generation with adaptive instance transfer and augmentation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 280–290, Online. Association for Computational Linguistics.
Wenhao Yu, Lingfei Wu, Yu Deng, Ruchi Mahindru, Qingkai Zeng, Sinem Guven, and Meng Jiang. 2020b.
A technical question answering system with transfer learning. In *Proceedings of the 2020 Conference* on Empirical Methods in Natural Language Processing: System Demonstrations, pages 92–99, Online.
Association for Computational Linguistics.
Daniel Zeman and Philip Resnik. 2008. Cross-language parser adaptation between related languages. In *Proceedings of the IJCNLP-08 Workshop on NLP for* Less Privileged Languages.
Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. 2019. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization.
Chunting Zhou, Graham Neubig, Jiatao Gu, Mona Diab, Francisco Guzmán, Luke Zettlemoyer, and Marjan Ghazvininejad. 2021. Detecting hallucinated content in conditional neural sequence generation. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 1393–1404, Online.
Association for Computational Linguistics.
Chenguang Zhu, William Hinthorn, Ruochen Xu, Qingkai Zeng, Michael Zeng, Xuedong Huang, and Meng Jiang. 2021. Enhancing factual consistency of abstractive summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 718–733, Online.
Association for Computational Linguistics.
Peide Zhu and Claudia Hauff. 2021. Evaluating bertbased rewards for question generation with reinforcement learning. In *Proceedings of the 2021 ACM*
SIGIR International Conference on Theory of Information Retrieval, ICTIR '21, page 261–270, New York, NY, USA. Association for Computing Machinery.
## A Processing Publicly Available Datasets
This section describes our processing for MS
Marco, Natural Questions, and SciQ datasets.
Since these datasets are used exclusively for testing, we can even use their training set for testing. For MS Marco, we use their train set due to the small size of the test set. Since MS Marco is a sentencebased dataset, we usually see small input contexts.
So we only include those data points where the answer has at least 40 words, and the question ends with a question mark. We also use the training set for Natural questions as it is a well-defined JSON
file. We randomly select five thousand questions from the training set. We also ensure that the answer is not from the table. We use the test set for the SciQ dataset; however, we filter out all the documents for which supporting text is missing. This supporting text is the input to the model.
## B Precision, Recall And F1 Scores
Let qgt and qgen be the ground truth and generated questions, respectively. Let N(qgt ∩ qgen)
represent the number of named entities common between ground truth and generated question. Similarly, N(qgt) and N(qgen) represent the number of names entities in the ground truth and generated question, respectively. Thus, the precision is: N(qgt ∩ qgen)/N(qgen) and recall is:
N(qgt ∩ qgen)/N(qgt). The F1 score is the harmonic mean of recall and precision.
## C Results Across Multiple Datasets
This section presents the results of different delexicalization strategies across different datasets. Table 6, 7, 8, 9, and 10 present the results for MS
Marco, natural questions, SciQ, AskEconomics, and AskLegal datasets for the PEGASUS model.
Table 11, 12, 13, 14, 15, and 16 present the results for ELI5, MS Marco, natural questions, SciQ,
AskEconomics, and AskLegal datasets for the BART model.
## D More Qualitative Examples
Table 17 shows some more qualitative examples.
Approach w/o Multi-Generation C.S. ↑ R-1 ↑ R-2 ↑ R-L ↑ PPL ↓ Pne Pwne ↓ Recall ↑ Precision ↑ F1 ↑
Fine-tuned PEGASUS 0.6844 37.5444 19.2335 36.0351 79.8135 30.9684 41.1765 0.3923 0.3097 0.3462
[Name i] Token 0.6361 36.1754 18.1748 34.9726 93.6765 23.5858 35.3659 0.2347 0.2303 0.2324 [Name i] Token with Push 0.6466 36.0127 18.3969 34.8516 94.1234 27.0374 15.9574 0.2663 0.2732 0.2697
[Multiple i] Token 0.6385 35.8072 18.1312 34.6703 90.9868 29.3384 32.0261 0.3035 0.3158 0.3095
[Multiple i] Token with Push and Delete 0.6530 36.0869 17.9565 34.8825 90.9744 24.4487 18.0392 0.3406 0.3261 0.3332
Rare Word Token 0.6903 38.0898 **19.6516 36.6146** 74.4608 23.1064 20.3320 0.4783 0.4237 0.4493
Approach with Multi-Generation
Fine-tuned PEGASUS 0.6725 35.3435 16.9061 33.7758 34.8961 26.6539 5.0815 0.4354 0.3874 0.4100 [Name i] Token 0.6301 34.3547 16.0744 32.8485 35.6354 17.8332 1.9175 0.1936 0.2258 0.2084
[Name i] Token with Push 0.6311 34.4746 16.2056 33.0859 34.7968 21.2848 0.7670 0.2922 0.2963 0.2942
[Multiple i] Token 0.6293 34.4290 16.3977 33.0232 37.1887 22.9147 3.7392 0.3333 0.3546 0.3437 [Multiple i] Token with Push and Delete 0.6377 34.5643 16.4035 33.2079 32.2173 19.4631 1.0546 0.3862 0.4024 0.3941
Rare Word Token 0.6759 36.0823 17.0396 34.2419 **29.6060** 21.3806 **0.6711 0.5391 0.5085 0.5234**
Spancopy (Base model: PEGASUS)
Without global relevance 0.6959 **37.8456** 19.0003 36.5207 98.0016 27.5168 29.9652 0.3934 0.3153 0.3501
With global relevance **0.6961** 37.8070 18.8587 36.4055 93.6907 27.4209 26.9231 0.3836 0.3506 0.3664
Approach w/o Multi-Generation C.S. ↑ R-1 ↑ R-2 ↑ R-L ↑ PPL ↓ Pne Pwne ↓ Recall ↑ Precision ↑ F1 ↑
Fine-tuned PEGASUS 0.5230 27.0457 10.8578 25.8031 100.3024 72.2200 46.5522 0.2253 0.2089 0.2168 [Name i] Token 0.3936 22.9647 8.1260 21.9904 171.2617 60.9400 40.3676 0.0825 0.0911 0.0866
[Name i] Token with Push 0.4290 24.1552 8.8400 23.1948 208.5758 62.7400 16.4170 0.1362 0.1505 0.1430
[Multiple i] Token 0.4103 23.3620 8.4394 22.3829 180.8732 69.7400 37.4821 0.1547 0.1766 0.1650 [Multiple i] Token with Push and Delete 0.4146 23.4967 8.4898 22.5391 200.1246 54.0400 17.6906 0.1779 0.2050 0.1905
Rare Word Token 0.5249 **27.5742 11.3235 26.4109** 98.1137 63.4400 25.9142 0.2507 0.2483 0.2495
Approach with Multi-Generation Fine-tuned PEGASUS 0.5201 26.8340 10.5054 25.2617 51.1872 66.5600 19.3600 0.2587 0.2418 0.2500 [Name i] Token 0.3938 23.0861 7.9811 21.9169 73.3829 54.7000 11.6800 0.0915 0.1070 0.0987
[Name i] Token with Push 0.4201 24.0322 8.6477 22.8173 64.4276 57.3400 **2.6800** 0.1316 0.1512 0.1407
[Multiple i] Token 0.4199 23.6249 8.3804 22.4047 65.6907 62.5600 11.3000 0.1558 0.1783 0.1663
[Multiple i] Token with Push and Delete 0.4111 23.4675 8.1350 22.2492 49.5079 47.8800 2.7400 0.1805 0.2120 0.1950
Rare Word Token 0.5181 27.0811 10.7338 25.5182 **39.3770** 59.6000 7.3200 **0.2739 0.2707 0.2723**
Spancopy (Base model: PEGASUS)
Without global relevance **0.6305** 12.2695 3.9743 11.0423 128.6204 73.3200 68.2488 0.0821 0.5031 0.1412
With global relevance 0.6268 12.2730 4.1031 11.1419 117.6533 69.4000 66.6859 0.0755 0.4763 0.1303
Approach w/o Multi-Generation C.S. ↑ R-1 ↑ R-2 ↑ R-L ↑ PPL ↓ Pne Pwne ↓ Recall ↑ Precision ↑ F1 ↑
Fine-tuned PEGASUS 0.5469 18.2400 4.7044 16.4770 101.3655 10.0679 35.9551 0.2292 0.2083 0.2183 [Name i] Token 0.5419 18.7286 4.4430 16.8223 102.1005 6.7873 38.3333 0.2000 0.2000 0.2000 [Name i] Token with Push 0.5361 18.2758 4.4104 16.4034 106.3667 8.8235 19.2308 0.2750 0.3000 0.2870 [Multiple i] Token 0.5375 18.5645 4.2903 16.4698 96.3949 8.5973 38.1579 0.4063 0.4375 0.4213 [Multiple i] Token with Push and Delete 0.5431 18.5072 4.2430 16.4152 94.8746 7.2398 23.4375 0.3889 0.4444 0.4148 Rare Word Token 0.5502 18.6446 4.6170 16.5633 108.7372 6.5611 8.6207 0.4318 0.4394 0.4356
Approach with Multi-Generation
Fine-tuned PEGASUS 0.5375 20.4049 4.7434 **18.0692** 34.7327 7.9186 1.1312 0.4318 0.4318 0.4318
[Name i] Token 0.5237 19.7210 4.2975 17.5061 31.5745 4.8643 0.5656 0.2308 0.2308 0.2308
[Name i] Token with Push 0.5264 19.9181 4.3193 17.5872 32.4990 6.4480 0.3394 **0.4500 0.4500 0.4500**
[Multiple i] Token 0.5212 19.7287 4.0927 17.3978 **30.7673** 5.9955 0.7919 0.4375 0.5000 0.4667
[Multiple i] Token with Push and Delete 0.5254 19.5619 4.1523 17.1332 31.3228 5.0905 0.2262 0.3947 0.3947 0.3947
Rare Word Token 0.5346 **20.5115** 4.5767 17.9713 31.9291 5.4299 **0.1131 0.4500 0.4500 0.4500**
Spancopy (Base model: PEGASUS)
Without global relevance **0.5613** 18.8779 **4.9120** 17.0202 140.2532 8.1448 18.0556 0.3400 0.4400 0.3836
With global relevance 0.5566 18.2197 4.5123 16.4520 128.5693 7.6923 19.1177 0.3636 0.4091 0.3850
Table 8: Results of various approaches on SciQ dataset for PEGASUS model. C.S.: Cosine Similarity | R-1: Rouge 1 | R-2: Rouge 2 | R-L: Rouge l | PPL: Perplexity.
Table 6: Results of various approaches on MS Marco dataset for PEGASUS model. C.S.: Cosine Similarity | R-1:
Rouge 1 | R-2: Rouge 2 | R-L: Rouge l | PPL: Perplexity.
Table 7: Results of various approaches on Natural Questions dataset for PEGASUS model. C.S.: Cosine Similarity | R-1: Rouge 1 | R-2: Rouge 2 | R-L: Rouge l | PPL: Perplexity.
Approach w/o Multi-Generation C.S. ↑ R-1 ↑ R-2 ↑ R-L ↑ PPL ↓ Pne Pwne ↓ Recall ↑ Precision ↑ F1 ↑
Fine-tuned PEGASUS 0.6250 34.3724 13.1196 32.1552 149.8675 36.4160 39.6890 0.3642 0.3860 0.3748
[Name i] Token 0.6038 34.6158 12.7742 32.4235 161.2746 24.5554 34.2638 0.2237 0.2750 0.2467
[Name i] Token with Push 0.6117 **35.4423** 13.3007 **33.1954** 169.9248 29.9394 17.3855 0.3362 0.3884 0.3604
[Multiple i] Token 0.6028 34.6618 12.8921 32.4284 158.3679 37.3994 49.9602 0.3152 0.3844 0.3464 [Multiple i] Token with Push and Delete 0.6123 34.9912 13.1035 32.7917 157.6627 25.9859 16.6284 0.3465 0.4084 0.3749
Rare Word Token **0.6303** 35.2248 **13.8321** 32.9506 149.4177 27.5057 8.0173 **0.4581 0.5018 0.4790**
Approach with Multi-Generation Fine-tuned PEGASUS 0.6201 32.7750 11.7094 30.3072 69.9868 33.2174 6.8243 0.3979 0.4257 0.4113 [Name i] Token 0.6051 33.3825 11.8935 30.9546 73.2065 21.2178 3.1290 0.2801 0.3316 0.3037
[Name i] Token with Push 0.6085 33.8649 12.1014 31.4369 71.4164 26.0058 2.1158 0.3501 0.4080 0.3768
[Multiple i] Token 0.6048 33.5993 11.8591 31.2375 72.2068 25.1614 4.4999 0.3366 0.4072 0.3685 [Multiple i] Token with Push and Delete 0.6084 33.6636 11.9805 31.3073 68.3048 22.7476 1.3410 0.3631 0.4354 0.3960
Rare Word Token 0.6260 33.5555 12.3312 31.0241 **62.5596** 26.5223 **0.6854** 0.4555 0.4976 0.4756
Spancopy (Base model: PEGASUS) Without global relevance 0.6222 27.3520 10.6528 25.3469 86.8229 35.0949 25.2194 0.3775 0.4114 0.3937
With global relevance 0.6234 27.3804 10.7121 25.3495 93.4049 33.5651 26.0728 0.2855 0.4279 0.4056
Table 9: Results of various approaches on AskEconomics dataset for PEGASUS model. C.S.: Cosine Similarity | R-1: Rouge 1 | R-2: Rouge 2 | R-L: Rouge l | PPL: Perplexity.
Approach w/o Multi-Generation C.S. ↑ R-1 ↑ R-2 ↑ R-L ↑ PPL ↓ Pne Pwne ↓ Recall ↑ Precision ↑ F1 ↑
Fine-tuned PEGASUS 0.5963 32.0084 9.7201 29.2130 104.9676 29.5918 41.3793 0.4583 0.4000 0.4272
[Name i] Token 0.5918 32.7445 10.3777 30.1671 156.3077 17.3469 35.2941 0.4242 0.4546 0.4389 [Name i] Token with Push 0.5806 31.7500 10.1226 28.9167 157.8613 20.4083 30.0000 0.2424 0.1591 0.1921
[Multiple i] Token 0.5924 31.6534 9.6734 28.7415 143.8824 23.4694 34.7826 0.1852 0.2593 0.2593
[Multiple i] Token with Push and Delete 0.5832 31.6748 10.2476 29.0825 129.0326 16.3265 25.0000 0.3667 0.3833 0.3748
Rare Word Token **0.6073 34.1803 12.1201 31.3956** 150.7832 22.4490 9.0909 0.5333 0.4933 0.5126
Approach with Multi-Generation
Fine-tuned PEGASUS 0.5945 31.0129 8.8776 28.1593 65.7903 25.5102 7.1429 0.4583 0.4000 0.4272 [Name i] Token 0.5812 30.4870 8.7033 28.0827 62.2575 9.1837 2.0408 0.3333 0.3333 0.3333
[Name i] Token with Push 0.5733 29.7218 8.3545 26.6581 65.7451 19.3878 3.0612 0.3333 0.3939 0.3611
[Multiple i] Token 0.5759 29.9117 7.9798 26.7829 **56.3279** 18.3674 2.0408 0.3333 0.3485 0.3407
[Multiple i] Token with Push and Delete 0.5683 30.7385 8.3835 27.6193 56.9851 18.3674 **1.0204** 0.3205 0.3462 0.3328 Rare Word Token 0.5943 29.8136 8.8872 26.8056 65.7854 18.3674 **1.0204 0.6061 0.5818 0.5937**
Spancopy (Base model: PEGASUS)
Without global relevance 0.5936 26.2488 9.2795 23.9778 102.6717 29.5918 27.5862 0.3698 0.4375 0.4008
With global relevance 0.5992 26.5635 9.4533 24.2483 107.6989 23.4694 39.1304 0.3889 0.4167 0.4023
Approach w/o Multi-Generation C.S. ↑ R-1 ↑ R-2 ↑ R-L ↑ PPL ↓ Pne Pwne ↓ Recall ↑ Precision ↑ F1 ↑
Fine-tuned BART 0.6708 **30.3458 12.4110 28.4024** 84.2910 23.8800 26.0888 0.4112 0.4634 0.4358
[Name i] Token 0.6392 29.2884 11.5541 27.4596 104.4070 19.6000 77.4490 0.0670 0.0868 0.0756 [Name i] Token with Push 0.6566 29.6925 11.7600 27.8134 108.5305 20.5900 19.7669 0.2475 0.3175 0.2782
[Multiple i] Token 0.6601 30.2565 10.8062 27.8040 94.4202 19.5700 20.3884 0.3254 0.4031 0.3601
[Multiple i] Token with Push and Delete 0.6681 30.2860 11.9954 28.2523 83.4336 18.4200 18.0239 0.3339 0.4126 0.3691
Rare Word Token **0.6723** 30.3140 12.2587 28.3881 85.8617 19.7900 9.6513 0.4320 0.5073 0.4667
Approach with Multi-Generation Fine-tuned BART 0.6619 29.6183 11.2371 27.3785 40.7329 21.7400 2.7700 0.4323 0.4972 0.4625
[Name i] Token 0.6409 28.8991 10.6597 26.7667 52.8637 14.8600 8.4600 0.1294 0.1661 0.1455
[Name i] Token with Push 0.6501 29.3093 10.9578 27.0919 46.9002 17.5900 1.9400 0.2656 0.3409 0.2986 [Multiple i] Token 0.6543 29.5429 10.9538 27.1540 42.1324 16.2200 1.4800 0.3400 0.4302 0.3798
[Multiple i] Token with Push and Delete 0.6592 29.5740 10.8002 27.1938 **39.1928** 16.0800 1.2000 0.3499 0.4423 0.3907
Rare Word Token 0.6624 29.5117 11.0852 27.2535 39.3360 18.8700 **0.5800 0.4396 0.5149 0.4743**
Table 11: Results of various approaches on ELI5 dataset for BART model. C.S.: Cosine Similarity | R-1: Rouge 1 | R-2: Rouge 2 | R-L: Rouge l | PPL: Perplexity.
Table 10: Results of various approaches on AskLegal dataset for PEGASUS model. C.S.: Cosine Similarity | R-1:
Rouge 1 | R-2: Rouge 2 | R-L: Rouge l | PPL: Perplexity.
Approach w/o Multi-Generation C.S. ↑ R-1 ↑ R-2 ↑ R-L ↑ PPL ↓ Pne Pwne ↓ Recall ↑ Precision ↑ F1 ↑
Fine-tuned BART **0.6975** 38.6659 19.6163 37.1822 80.2672 30.0096 28.4345 0.4534 0.3757 0.4109
[Name i] Token 0.6255 36.6308 18.5853 35.5317 103.4031 27.5168 77.7004 0.0264 0.0242 0.0253 [Name i] Token with Push 0.6738 37.3418 19.1817 36.2664 109.8183 29.0508 13.2013 0.2408 0.2209 0.2304 [Multiple i] Token 0.6731 37.6786 18.9069 36.4120 87.9487 24.3528 21.2598 0.2921 0.2832 0.2876 [Multiple i] Token with Push and Delete 0.6721 36.8989 18.3516 35.5843 72.8703 24.9281 23.4615 0.3185 0.2937 0.3056
Rare Word Token 0.6957 **38.6819 19.6355 37.2470** 84.3463 25.0240 16.0920 **0.5015 0.4688 0.4846**
Approach with Multi-Generation
Fine-tuned BART 0.6818 37.1152 18.0769 35.2690 33.8383 26.7498 2.8763 0.4505 0.4189 0.4341
[Name i] Token 0.6267 35.0016 16.4603 33.5250 52.8990 20.5177 12.5599 0.0381 0.0458 0.0416
[Name i] Token with Push 0.6607 35.8644 17.0981 34.4245 39.8618 22.7229 **0.5753** 0.3035 0.3158 0.3095
[Multiple i] Token 0.6551 35.5674 16.6391 33.9338 31.5476 21.7641 1.3423 0.2943 0.2926 0.2934
[Multiple i] Token with Push and Delete 0.6444 34.4571 15.9317 32.7743 **29.9093** 20.6136 1.5340 0.3261 0.3025 0.3139
Rare Word Token 0.6814 36.9171 17.7233 34.9347 30.3011 23.6817 1.1505 0.4877 0.4568 0.4717
Table 12: Results of various approaches on MS Marco dataset for BART model. C.S.: Cosine Similarity | R-1:
Rouge 1 | R-2: Rouge 2 | R-L: Rouge l | PPL: Perplexity.
Approach w/o Multi-Generation C.S. ↑ R-1 ↑ R-2 ↑ R-L ↑ PPL ↓ Pne Pwne ↓ Recall ↑ Precision ↑ F1 ↑
Fine-tuned BART 0.5355 **28.2737 11.7518 26.9399** 91.7022 71.5200 29.6421 0.2369 0.2159 0.2260
[Name i] Token 0.3888 23.1978 8.3257 22.4231 188.8991 66.5400 63.8714 0.0310 0.0363 0.0334 [Name i] Token with Push 0.4731 26.0937 10.2157 25.1520 243.9011 67.7600 12.3672 0.1571 0.1697 0.1631
[Multiple i] Token 0.4584 25.6700 10.0286 24.6388 200.9676 68.3800 24.7442 0.1750 0.1875 0.1810
[Multiple i] Token with Push and Delete 0.4740 26.7674 10.5108 25.5187 155.8149 60.3600 20.9742 0.1759 0.1821 0.1789
Rare Word Token **0.5358** 27.9430 11.3583 26.5285 88.2666 67.3800 18.7296 0.2477 0.2307 0.2389
Approach with Multi-Generation
Fine-tuned BART 0.5277 27.9300 11.3603 26.1017 42.0300 68.3600 11.1600 0.2413 0.2222 0.2314 [Name i] Token 0.4092 24.0923 8.7261 22.8177 114.3201 57.6600 24.1600 0.0486 0.0579 0.0528
[Name i] Token with Push 0.4641 25.6506 9.6886 24.1816 65.0078 61.8800 **2.2800** 0.1621 0.1771 0.1693
[Multiple i] Token 0.4589 25.8030 9.8690 24.2661 54.9045 62.0000 7.0000 0.1798 0.1867 0.1832 [Multiple i] Token with Push and Delete 0.4607 26.1420 9.8590 24.5867 47.0870 53.7200 4.6200 0.1786 0.1902 0.1842
Rare Word Token 0.5263 27.5103 10.8879 25.6209 **37.3464** 63.8000 5.3200 **0.2539 0.2419 0.2477**
Table 13: Results of various approaches on Natural Questions dataset for BART model. C.S.: Cosine Similarity | R-1: Rouge 1 | R-2: Rouge 2 | R-L: Rouge l | PPL: Perplexity.
Approach w/o Multi-Generation C.S. ↑ R-1 ↑ R-2 ↑ R-L ↑ PPL ↓ Pne Pwne ↓ Recall ↑ Precision ↑ F1 ↑
Fine-tuned BART 0.5435 19.0690 5.5486 17.2327 94.8594 8.2579 16.4384 0.5375 0.5250 0.5312 [Name i] Token 0.5529 18.8496 5.3620 16.9887 122.1728 5.6561 68.0000 0.0513 0.0513 0.0513
[Name i] Token with Push **0.5561** 19.6120 **5.8026** 17.7135 124.1907 7.9186 15.7143 0.4423 0.4423 0.4423
[Multiple i] Token 0.5473 19.4473 5.3066 17.3885 100.0570 5.5430 6.1225 0.4000 0.3500 0.3733 [Multiple i] Token with Push and Delete 0.5455 19.4403 5.5812 17.2885 84.3905 4.2986 13.1579 0.4615 0.4615 0.4615
Rare Word Token 0.5405 19.3497 5.6073 17.3604 100.1989 5.8823 9.6154 0.5588 0.5588 0.5588
Approach with Multi-Generation Fine-tuned BART 0.5281 20.5964 4.6589 18.0049 29.1067 7.2398 0.1131 0.5000 0.5000 0.5000 [Name i] Token 0.5386 21.1234 5.2541 18.5160 37.9603 3.3937 0.6787 0.3571 0.3571 0.3571
[Name i] Token with Push 0.5401 **21.4787** 5.4889 **18.7693** 32.2501 4.0724 0.2262 0.5000 0.5625 0.5294
[Multiple i] Token 0.5267 20.6369 4.4584 17.7864 28.1237 3.3937 0.1131 0.6667 0.6667 0.6667
[Multiple i] Token with Push and Delete 0.5267 21.1890 4.9835 18.6015 27.5412 3.0543 **0.0000 0.8333 0.8889 0.8602** Rare Word Token 0.5222 20.6120 4.7786 17.8947 **27.3428** 4.7511 **0.0000** 0.6389 0.6111 0.6247
Table 14: Results of various approaches on SciQ dataset for BART model. C.S.: Cosine Similarity | R-1: Rouge 1 | R-2: Rouge 2 | R-L: Rouge l | PPL: Perplexity.
Approach w/o Multi-Generation C.S. ↑ R-1 ↑ R-2 ↑ R-L ↑ PPL ↓ Pne Pwne ↓ Recall ↑ Precision ↑ F1 ↑
Fine-tuned BART 0.6258 34.7086 **14.5584 32.5739** 153.0107 34.4095 28.1467 0.3817 0.4239 0.4017
[Name i] Token 0.5948 33.9577 13.4035 31.9209 178.6685 28.4097 75.2797 0.0733 0.0830 0.0779
[Name i] Token with Push 0.6144 **34.8292** 14.0905 32.6011 176.4246 30.3665 18.8747 0.2566 0.3085 0.2802
[Multiple i] Token 0.6135 34.4133 13.9942 32.0568 147.5120 28.7772 20.4004 0.3750 0.4273 0.3994 [Multiple i] Token with Push and Delete 0.6223 34.3360 13.8667 32.0147 136.4393 27.0587 15.6755 0.3581 0.4171 0.3853
Rare Word Token **0.6306** 34.5594 14.4776 32.4075 141.2237 28.3799 8.0854 0.4410 0.4890 0.4638
Approach with Multi-Generation Fine-tuned BART 0.6200 32.5616 12.9770 30.2733 73.1045 32.6910 4.5793 0.4052 0.4505 0.4267 [Name i] Token 0.5995 32.0092 12.1092 29.8257 87.9262 22.0622 11.9201 0.1520 0.1797 0.1647 [Name i] Token with Push 0.6117 32.2095 12.5313 29.8737 75.1088 26.9892 2.8606 0.2866 0.3486 0.3146 [Multiple i] Token 0.6118 32.6829 12.7997 30.1692 67.0476 25.8170 2.1357 0.3791 0.4355 0.4053
[Multiple i] Token with Push and Delete 0.6188 32.2967 12.6219 29.8401 **60.3746** 24.3667 1.6291 0.3750 0.4314 0.4012
Rare Word Token 0.6243 31.6490 12.7780 29.3689 61.2904 28.2209 **0.6854 0.4486 0.4985 0.4722**
Table 15: Results of various approaches on AskEconomics dataset for BART model. C.S.: Cosine Similarity | R-1:
Rouge 1 | R-2: Rouge 2 | R-L: Rouge l | PPL: Perplexity.
Approach w/o Multi-Generation C.S. ↑ R-1 ↑ R-2 ↑ R-L ↑ PPL ↓ Pne Pwne ↓ Recall ↑ Precision ↑ F1 ↑
Fine-tuned BART **0.5747** 31.4161 11.3146 29.2014 153.9654 25.5102 36.0000 0.3944 0.4222 0.4079
[Name i] Token 0.5577 30.6848 11.8267 29.1010 134.7211 19.3878 78.9474 0.0556 0.0556 0.0556 [Name i] Token with Push 0.5455 29.7544 9.9516 27.4427 155.7539 21.4286 23.8095 0.1218 0.2308 0.1594 [Multiple i] Token 0.5639 30.2565 10.8062 27.8040 115.1768 18.3674 22.2222 0.2250 0.3000 0.2571
[Multiple i] Token with Push and Delete 0.5701 29.7627 9.8476 27.0681 136.1808 20.4082 15.0000 0.3056 0.2639 0.2832
Rare Word Token 0.5719 **32.3624 12.1716 30.3172** 147.6951 22.4490 13.6364 **0.5944 0.6222 0.6080**
Approach with Multi-Generation
Fine-tuned BART 0.5517 27.8485 9.2759 25.6313 119.2233 26.5306 3.0612 0.4444 0.3333 0.3810
[Name i] Token 0.5400 28.7264 9.3187 26.6250 **67.6563** 13.2653 6.1224 0.1000 0.1000 0.1000
[Name i] Token with Push 0.5446 28.7321 10.2599 26.3664 82.5094 18.3674 3.0612 0.4524 0.4643 0.4583
[Multiple i] Token 0.5655 27.5241 9.0784 25.0620 94.9724 18.3674 1.0204 0.3611 0.3889 0.3745
[Multiple i] Token with Push and Delete 0.5598 26.7347 9.7592 24.6835 76.4061 16.3265 **0.0000** 0.3250 0.4000 0.3586
Rare Word Token 0.5548 26.5433 8.6191 24.6013 69.1614 21.4286 1.0204 0.5321 0.5385 0.5352
Table 16: Results of various approaches on AskLegal dataset for BART model. C.S.: Cosine Similarity | R-1: Rouge 1 | R-2: Rouge 2 | R-L: Rouge l | PPL: Perplexity.
Input: Because there is not political profit in it. No one gets elected campaigned to strike down laws about where you can tie your horse. It is hard to enough getting relevant laws pass, now one wants to take the effort to repeal laws that aren't really hurting anyone. Especially when they get no credit for doing so. Examples would be like those found in this article _URL_0_
Normal Finetuning: Why are there so many laws that have been struck down by the Supreme Court?
Rare word delexicalization + Multiple: Why is it so hard to pass laws that are actually beneficial to the public?
Input: Theres a treaty forbidding any state from claiming extraterrestrial land. That being said, Gurdang Murrica owns them. Greatest country in the gurdang universe.
Normal Finetuning: Why is there no extraterrestrial life on Mars?
Rare word delexicalization + Multiple: Why is it illegal for a country to claim extraterrestrial land?
Input: Decriminalization means it's not legal, but you won't go to jail over it, like a speeding ticket. There's still a fine if you're caught doing it. Legalization means you're allowed to do it. I've come to know they are not the same; not sure how.
Normal Finetuning: What is the difference between decriminalization and legalization of marijuana?
Rare word delexicalization + Multiple: What is the difference between decriminalization and legalization? Input: Because percentage growth is easier when you 're starting with a lower GDP . They 're leapfrogging decades of development and people are moving right from subsistence farming in small villages to a modern economy . They 're seeing the benefits of introducing electricity , telecommunications , modern agriculture , medicine , etc ... all at once . They still have large numbers of people with very low income levels that can catapult up the productivity ladder .Their GDP annual growth rates are clearly outperforming the US and UK economies . I am not very educated in economics , but am wondering how this could be so . I remember reading somewhere that it was state investment . Could this be true ? Thank you .
Normal Finetuning: How is China's economy growing so fast ?
Rare word delexicalization + Multiple: How is it possible that a country with such low GDP can have such high growth rates Input: The global economy is n't the same as a couple of generations ago - it is globalised to a fuller extent . Attributes such as the renewed ease of international investment and trade have had a huge impact of how countries grow today .
Normal Finetuning: How did the global economy grow so much faster than it did in the 20th century Rare word delexicalization + Multiple: How did the global economy grow so much faster than it did a couple of generations ago Input: The court wants to avoid "playing to the public" by lawyers and wants to focus the questioning and presentations on purely legal arguments.
Normal Finetuning: Why are Supreme Court Justices sequestered?
Rare word delexicalization + Multiple: Why do we have to ask questions in court if we already know what's going to happen?
Input: It's basically unheard of for yakuza to kill regular people. Like rarely they will kill other yakuza in some power struggle thing but event hat is real real rare compared to the really killing based nature of a lot of US gangs. They aren't even strictly illegal organizations and basically operate openly. Some of them do illegal stuff for sure but they aren't even strictly criminal organizations and a ton of the stuff they do is just sketchy or unseemly rather than law breaking.
Normal Finetuning: Why are yakuza such a big deal in Japan?
Rare word delexicalization + Multiple: How is the yakuza not considered a criminal organization?
Input: It was the cold war, and making sure the food supply was managed properly in an attack situation was important.
Normal Finetuning: Why did the U.S. declare war on Japan in the first place?
Rare word delexicalization + Multiple: What was the importance of having a well-stocked army?
Input: It depends. If the artist self publishes they get about 0.60 cents. However if its under a major label they get as little as 0.08 cents per sale. Normal Finetuning: How much does it cost an artist to release a song on iTunes?
Rare word delexicalization + Multiple: How do artists make money off of their music?
Table 17: Qualitative Examples.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Left blank.
A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Left blank. |
zhang-etal-2023-contrastive | Contrastive Learning of Sociopragmatic Meaning in Social Media | https://aclanthology.org/2023.findings-acl.152 | Recent progress in representation and contrastive learning in NLP has not widely considered the class of sociopragmatic meaning (i.e., meaning in interaction within different language communities). To bridge this gap, we propose a novel framework for learning task-agnostic representations transferable to a wide range of sociopragmatic tasks (e.g., emotion, hate speech, humor, sarcasm). Our framework outperforms other contrastive learning frameworks for both in-domain and out-of-domain data, across both the general and few-shot settings. For example, compared to two popular pre-trained language models, our model obtains an improvement of 11.66 average F1 on 16 datasets when fine-tuned on only 20 training samples per dataset. We also show that our framework improves uniformity and preserves the semantic structure of representations. Our code is available at: \url{https://github.com/UBC-NLP/infodcl} | # Contrastive Learning Of Sociopragmatic Meaning In Social Media
Chiyu Zhang1 Muhammad Abdul-Mageed1,2 **Ganesh Jawahar**1 1Deep Learning & Natural Language Processing Group, The University of British Columbia 2Department of Natural Language Processing & Department of Machine Learning, MBZUAI
[email protected], [email protected], [email protected]
## Abstract
Recent progress in representation and contrastive learning in NLP has not widely considered the class of *sociopragmatic meaning*
(i.e., meaning in interaction within different language communities). To bridge this gap, we propose a novel framework for learning taskagnostic representations transferable to a wide range of sociopragmatic tasks (e.g., emotion, hate speech, humor, sarcasm). Our framework outperforms other contrastive learning frameworks for both in-domain and out-of-domain data, across both the general and few-shot settings. For example, compared to two popular pre-trained language models, our model obtains an improvement of 11.66 average F1 on 16 datasets when fine-tuned on only 20 training samples per dataset. We also show that our framework improves uniformity and preserves the semantic structure of representations. Our code is available at: https:
//github.com/UBC-NLP/infodcl
## 1 Introduction
Meaning emerging through human interaction such as on social media is deeply contextualized. It extends beyond referential meaning of utterances to involve both information about language users and their identity (the domain of *sociolinguistics* (Tagliamonte, 2015)) and the communication goals of these users (the domain of *pragmatics* (Thomas, 2014)). From a sociolinguistics perspective, a message can be expressed in various linguistic forms, depending on user background. For example, someone might say 'let's watch the soccer game', but they can also call the game 'football'. In real world, the game is the same thing.
While the two expressions are different ways of saying the same thing (Labov, 1972), they do carry information about the user such as their region (i.e.,
where they could be coming from). From a pragmatics perspective, the meaning of an utterance depends on its interactive context. For example,
![0_image_0.png](0_image_0.png)
while the utterance 'it's really hot here' (said in a physical meeting) could be a polite way of asking someone to open the window, it could mean 'it's not a good idea for you to visit at this time' (said in a phone conversation discussing travel plans). We refer to the meaning communicated through this type of socially embedded interaction as *sociopragmatic meaning* (SM).
While SM is an established concept in linguistics (Leech, 1983), NLP work still lags behind.
This issue is starting to be acknowledged in the NLP community (Nguyen et al., 2021), and there has been calls to include social aspects in *representation learning* of language (Bisk et al., 2020). Arguably, pre-trained language models (PLMs) such as BERT (Devlin et al., 2019) learn representations relevant to SM tasks. While this is true to some extent, PLMs are usually pre-trained on standard forms of language (e.g. BookCorpus) and hence miss (i) *variation in language use among different* language communities (social aspects of meaning)
in **(ii)** *interactive* settings (pragmatic aspects). In spite of recent efforts to rectify some of these limitations by PLMs such as BERTweet on casual language (Nguyen et al., 2020), it is not clear whether the masked language modeling (MLM) objective employed in PLMs is sufficient for capturing the 2405 rich representations needed for sociopragmatics.
Another common issue with PLMs is that their sequence-level embeddings suffer from the anisotropy problem (Ethayarajh, 2019; Li et al.,
2020). That is, these representations tend to occupy a narrow cone on the multidimensional space.
This makes it hard for effectively teasing apart sequences belonging to different classes without use of large amounts of labeled data. Work on *contrastive learning* (CL) has targeted this issue of anisotropy by attempting to bring semantic representations of instances of a given class (e.g., positive pairs of the same objects in images or same topics in text) closer and representations of negative class(es) instances farther away (Liu et al.,
2021a; Gao et al., 2021). A particularly effective type of CL is supervised CL (Khosla et al., 2020; Khondaker et al., 2022), but it (i) requires labeled data **(ii)** for each downstream task. Again, acquiring labeled data is expensive and resulting models are task-specific (i.e., cannot be generalized to all SM tasks).
In this work, our goal is to learn effective taskagnostic representations for SM from social data without a need for labels. To achieve this goal, we introduce a novel framework situated in CL that we call **InfoDCL**. InfoDCL leverages sociopragmatic signals such as emojis or hashtags naturally occurring in social media, treating these as distant/surrogate labels.1 Since surrogate labels are abundant (e.g., hashtags on images or videos), our framework can be extended beyond language. To illustrate the superiority of our proposed framework, we evaluate representations by our InfoDCL on 24 SM datasets (such as emotion recognition (Mohammad et al., 2018) and irony detection (Ptácek et al., 2014)) and compare against 11 competitive baselines. Our proposed framework outperforms all baselines on 14 (out of 16) in-domain datasets and seven (out of eight) out-of-domain datasets
(Sec. 4). Furthermore, our framework is *strikingly* successful in few-shot learning: it consistently outperforms baselines by a large margin for different sizes of training data (Sec. 4). Our framework is also *language-independent*, as demonstrated on several tasks from three languages other than English (Sec. E.3).
Our major contributions are as follows: (1) We introduce InfoDCL, a novel CL framework for learning sociopragmatics exploiting surrogate labels. To the best of our knowledge, this is the first work to utilize surrogate labels in language CL to improve PLMs. (2) We propose a new CL loss, Corpus-Aware Contrastive Loss (CCL), to preserve the semantic structure of representations exploiting corpus-level information (Sec. 3.3). (3) Our framework outperforms several competitive methods on a wide range of SM tasks (both *in-domain* and *out-of-domain*, across *general* and *few-shot* settings). (4) Our framework is language-independent, as demonstrated by its utility on various SM tasks in four languages. (5) We offer an extensive number of ablation studies that show the contribution of each component in our framework and qualitative analyses that demonstrate superiority of representation from our models (Sec. 5).
## 2 Related Work
Our work combines advances in representation learning and contrastive learning.
Representation Learning. PLMs encode discrete language symbols into a continuous representation space that can capture the syntactic and the semantic information underlying the text. Since BERT is pre-trained on standard text that is not ideal for social media, Nguyen et al. (2020) propose BERTweet, a model pre-trained on tweets with MLM objective and without intentionally learning SM from social media data. Previous studies (Felbo et al., 2017; Corazza et al., 2020) have also utilized distant supervision (e.g., use of emoji) to obtain better representations for a limited number of tasks.
Our work differs in that we make use of distant supervision *in the context of CL* to acquire rich representations *suited to the whole class of SM tasks*.
In addition, our methods excel not only in the full data setting but also for *few-shot learning* and diverse domains.
Contrastive Learning. There has been a flurry of recent CL frameworks introducing selfsupervised (Liu et al., 2021a; Gao et al., 2021; Cao et al., 2022), semi-supervised (Yu et al.,
2021), weakly-supervised (Zheng et al., 2021), and strongly supervised (Gunel et al., 2021; Suresh and Ong, 2021; Zhou et al., 2022) learning objectives.2 Although effective, existing supervised CL (SCL) frameworks (Gunel et al., 2021; Suresh and Ong, 2021; Pan et al., 2022) suffer from two 2These frameworks differ across a number of dimensions that we summarize in Table 6 in Sec. A in Appendix.
1We use distant label and surrogate label interchangeably.
major drawbacks. The **first drawback** is SCL's dependence on task-specific labeled data (which is required to identify positive samples in a batch).
Recently, Zheng et al. (2021) introduced a weaklysupervised CL (WCL) objective for computer vision, which generates a similarity-based 1-nearest neighbor graph in each batch and assigns weak labels for samples of the batch (thus clustering vertices in the graph). It is not clear, however, how much an WCL method with augmentations akin to language would fare for NLP. We propose a framework that does not require model-derived weak labels, which outperforms a clustering-based WCL
approach. The **second drawback** with SCL is related to how negative samples are treated. Khosla et al. (2020); Gunel et al. (2021) treat all the negatives equally, which is sub-optimal since hard negatives should be more informative (Robinson et al.,
2021). Suresh and Ong (2021) attempt to rectify this by introducing a *label-aware contrastive loss*
(LCL) where they feed the anchor sample to a taskspecific model and assign higher weights to confusable negatives based on this model's confidence on the class corresponding to the negative sample.
LCL, however, is both **narrow** and **costly**. It is narrow since it exploits *task-specific* labels. We fix this by employing surrogate labels generalizable to all SM tasks. In addition, LCL is costly since it requires an auxiliary task-specific model to be trained with the main model. Again, we fix this issue by introducing a *light* LCL framework
(**LCL-LiT**) where we use our main model, rather than an auxiliary model, to derive the weight vector wi from our main model through an additional loss (i.e., weighting is performed end-to-end in our main model). Also, LCL *only* **considers instancelevel information** to capture relationships between individual sample and classes. In comparison, we introduce a novel corpus-aware contrastive loss
(CCL) that overcomes this limitation (Sec. 3.3).
## 3 Proposed Framework
Our goal is to learn rich and diverse representations suited for a wide host of SM tasks. To this end, we introduce our novel **InfoDCL** framework.
InfoDCL is a *distantly supervised* CL (DCL) framework that exploits distant/surrogate label (e.g.,
emoji) as a proxy for supervision and incorporates corpus-level information to capture inter-class relationships.
## 3.1 Contrastive Losses
CL aims to learn efficient representations by pulling samples from the same class together and pushing samples from other classes apart (Hadsell et al., 2006). We formalize the framework now. Let C denote the set of class labels. Let D = {(xi, yi)}
m i=1 denote a randomly sampled batch of size m, where xi and yi ∈ C denote a sample and its label respectively. Many CL frameworks construct the similar (*a.k.a*., positive) sample (xm+i) for an anchor sample (xi) by applying a data augmentation technique (T ) such as back-translation (Fang and Xie, 2020), token masking (Liu et al., 2021a), and dropout masking (Gao et al., 2021) on the anchor sample (xi). Let B =
{(xi, yi)}
2m i=1 denote an augmented batch, where xm+i = T (xi) and ym+i = yi (i = {1*, . . . , m*}).
Self-supervised Contrastive Loss. We consider |C| = N, where N is the total number of training samples. Hence, the representation of the anchor sample xiis pulled closer to that of its augmented
(positive) sample xm+i and pushed away from the representations of other 2m − 2 (negative) samples in the batch. The semantic representation hi ∈ Rd for each sample xiis computed by an encoder, Φ,
where hi = Φ(xi). Chen et al. (2017) calculate the
contractive loss in a batch as follows: $$\mathcal{L}_{SSCL}=\sum_{i=1}^{2m}-\log\frac{e^{sim(h_{i},h_{p(i)})/\tau}}{\sum_{a=1}^{2m}\mathbb{1}_{[a\neq i]}e^{sim(h_{i},h_{a})/\tau}},\tag{1}$$ where $p(i)$ is the index of positive sample of $x_{i}$,
τ ∈ R+ is a scalar temperature parameter, and sim(hi, hj ) is the cosine similarity h⊤
i hj
∥hi∥·∥hj∥
.
Supervised Contrastive Loss. The CL loss in Eq. 1 is unable to handle the case of multiple samples belonging to the same class when utilizing a supervised dataset (|C| < N). Positive samples in SCL (Khosla et al., 2020) is a set composed of not only the augmented sample but also the samples belonging to the same class as xi. The positive samples of xi are denoted by Pi = {ρ ∈ B : yρ = yi ∧ ρ ̸= i}, and |Pi| is its cardinality. The SCL is formulated as:
$$\mathcal{L}_{SCL}=\sum_{i=1}^{2m}\frac{-1}{|P_{i}|}\sum_{p\in P_{i}}\log\frac{e^{\pi im(h_{i},h_{p})/\tau}}{\sum_{a=1}^{2m}\frac{1}{1[a\neq i]}e^{\pi im(h_{i},h_{a})/\tau}}.\tag{2}$$
In our novel framework, we make use of SCL
but employ surrogate labels instead of gold labels to construct the positive set.
3If i ≤ m, p(i) = i + m, otherwise p(i) = i − m.
2407
## 3.2 Label-Aware Contrastive Loss
Suresh and Ong (2021) extend the SCL to capture relations between negative samples. They hypothesize that not all negatives are equally difficult for an anchor and that the more confusable negatives should be emphasized in the loss. They propose LCL, which introduces a weight wi,ya to indicate the confusability of class label ya w.r.t. anchor xi:
$$\mathcal{L}_{LCL}=\sum_{i=1}^{2m}\frac{-1}{|P_{i}|}\sum_{p\in P_{i}}\log\frac{w_{i,y_{i}}\cdot e^{x\text{im}(h_{i},h_{p})/\tau}}{\sum_{a=1}^{2m}1_{[a\neq i]}w_{i,y_{a}}\cdot e^{x\text{im}(h_{i},h_{a})/\tau}}.\tag{3}$$ The weight vector $w_{i}\in\mathbb{R}^{|C|}$ comes from the
class-specific probabilities (or confidence score)
outputted by an auxiliary task-specific supervised model after consuming the anchor xi. LCL assumes that the highly confusable classes w.r.t anchor receive higher confidence scores, while the lesser confusable classes w.r.t anchor receive lower confidence scores. As stated earlier, limitations of LCL include (i) its dependence on gold annotations,
(ii) its inability to generalize to all SM tasks due to its use of task-specific labels, and **(iii)** its ignoring of corpus-level and inter-class information. As explained in Sec. 2, we fix all these issues.
## 3.3 Corpus-Aware Contrastive Loss
In spite of the utility of existing CL methods for text representation, a uniformity-tolerance dilemma has been identified in vision representation model by Wang and Liu (2021): pursuing excessive uniformity makes a model intolerant to semantically similar samples, thereby breaking its underlying semantic structure (and thus causing harm to downstream performance).4 Our learning objective is to obtain representations suited to all SM tasks, thus we hypothesize that preserving the semantic relationships between surrogate labels during pre-training can benefit many of downstream SM
tasks. Since we have a large number of fine-grained classes (i.e., surrogate labels), each class will not be equally distant from all other classes. For example, the class ' ' shares similar semantics with the class ' ', but is largely distant to the class '
'. The texts with ' ' and ' ' belong to same class of 'joy' in downstream emotion detection task.
We thus propose a new CL method that relies on distant supervision to learn general knowledge of all SM tasks and incorporates corpus-level information to capture inter-class relationships, while improving uniformity of PLM and preserving the 4For details see Sec. G in Appendix.
underlying semantic structure. Concretely, our proposed corpus-aware contrastive loss (CCL) exploits a simple yet effective corpus-level measure based on pointwise mutual information (PMI) (Bouma, 2009) to extract relations between surrogate labels
(e.g., emojis) from a large amount of unlabeled tweets.5 The PMI method is cheap to compute as it requires neither labeled data nor model training:
PMI is based only on the co-occurrence of emoji pairs. We hypothesize that PMI scores of emoji pairs could provide globally useful semantic relations between emojis. Our CCL based on PMI can be formulated as:
$$\mathcal{L}_{CCL}=\sum_{i=1}^{2m}\frac{-1}{|P_{i}|}\sum_{p\in P_{i}}\log\frac{e^{x\,im(h_{i},h_{p})/\tau}}{\sum_{a=1}^{2m}\mathbbm{1}_{[a\neq i]}w_{y_{i},y_{a}}\cdot e^{x\,im(h_{i},h_{a})/\tau}},\tag{4}$$ where the weight $w_{y_{i},y_{a}}=1-max(0,npmi(y_{i},y_{a}))$, and $npmi(y_{i},y_{a})\in\mathbb{R}$
max(0, npmi(yi, ya)), and npmi(yi, ya) ∈
[−1, 1] is normalized point-wise mutual information (Bouma, 2009) between ya and yi.
6
## 3.4 Overall Objective
To steer the encoder to learn representations that recognize corpus-level inter-class relations while distinguishing between classes, we combine our LCCL and LLCL.
7 The resulting loss, which we collectively refer to as distantly-supervised contrastive loss LDCL is given by:
$${\mathcal{L}}_{D C L}=\gamma{\mathcal{L}}_{L C L}+(1-\gamma){\mathcal{L}}_{C C L},$$
$$(5)$$
where γ ∈ [0, 1] is a hyper-parameter that controls the relative importance of each of the contrastive losses. Our results show that a model trained with LDCL can achieve sizeable improvements over baselines (Table 1). For a more enhanced representation, our proposed framework also exploits a surrogate label prediction (SLP) objective LSLP
where the encoder Φ is jointly optimized for the emoji prediction task using cross entropy loss. Our employment of an SLP objective now allows us to weight the negatives in LLCL using classification probabilities from our main model rather than training an additional weighting model, another divergence from Suresh and Ong (2021). This new LCL framework is our **LCL-LiT** (for *light* LCL),8 5We experiment with a relatively sophisticated approach that learns class embeddings to capture the inter-class relations in Sec. 5, but find it to be sub-optimal.
6Equation for NPMI is in Appendix B.1.
7Note that LLCL operates over surrogate labels rather than task-specific downstream labels as in (Suresh and Ong, 2021),
thereby allowing us to learn broad SM representations.
8The formula of LCL-LiT is the same as Eq. 3 (i.e., Loss of LCL).
giving us a lighter DCL loss that we call **DCL-LiT**:
$${\mathcal{L}}_{D C L-L i T}=\gamma{\mathcal{L}}_{L C L-L i T}+(1-\gamma){\mathcal{L}}_{C C L}.\,\,\,(6)$$
Our sharing strategy where a single model is trained end-to-end on an overall objective incorporating negative class weighting should also improve our model efficiency (e.g., training speed, energy efficiency). Our ablation study in Sec. 5 confirms that using the main model as the weighing network is effective for overall performance. To mitigate effect of any catastrophic forgetting of token-level knowledge, the proposed framework includes an MLM objective defined by LMLM.
9 The overall objective function of the proposed InfoDCL framework can be given by:
$${\mathcal{L}}_{I n f o D C L}=\lambda_{1}{\mathcal{L}}_{M L M}+\lambda_{2}{\mathcal{L}}_{S L P}\tag{7}$$ $$+(1-\lambda_{1}-\lambda_{2}){\mathcal{L}}_{D C L-L i T},$$
where λ1 and λ2 are the loss scaling factors. We also employ a mechanism for randomly re-pairing an anchor with a new positive sample at the beginning of each epoch. We describe this epoch-wise repairing in Appendix B.4.
## 3.5 Data For Representation Learning
We exploit emojis as surrogate labels using an English language dataset with 31M tweets and a total of 1, 067 unique emojis (TweetEmoji-EN).
In addition, we acquire representation learning data for (1) our experiments on three additional languages (i.e., Arabic, Italian, and Spanish) and to (2) investigate of the utility of hashtags as surrogate labels. More about how we develop TweetEmoji-EN and all our other representation learning data is in Appendix C.1.
## 3.6 Evaluation Data And Splits
In-Domain Data. We collect 16 *English language* Twitter datasets representing eight different SM tasks. These are (1) crisis awareness, (2)
emotion recognition, (3) hateful and offensive language detection, (4) humor identification, (5) irony and sarcasm detection, (6) irony type identification,
(7) sentiment analysis, and (8) stance detection.
We also evaluate our framework on nine Twitter datasets, three from each of *Arabic, Italian, and* Spanish. More information about our English and multilingual datasets is in Appendix C.2.
9The Equations of LSLP and LMLM are listed in Appendix B.2 and B.3, respectively.
Out-of-Domain Data. We also identify eight datasets of SM involving emotion, sarcasm, and sentiment derived from outside the Twitter domain (e.g., data created by psychologists, debate fora, YouTube comments, movie reviews). We provide more information about these datasets in Appendix C.2.
Data Splits. For datasets without Dev split, we use 10% of the respective training samples as Dev.
For datasets originally used in cross-validation, we randomly split into 80% Train, 10% Dev, and 10%
Test. Table 7 in Appendix C describes statistics of our evaluation datasets
## 3.7 Implementation And Baselines
For experiments on English, we initialize our model with the pre-trained English RoBERTaBase.
10 For multi-lingual experiments (reported in Appendix E.3), we use the pre-trained XLM-RoBERTaBase model (Conneau et al., 2020) as our initial checkpoint. More details about these two models are in Appendix D.1. We tune hyper-parameters of our InfoDCL framework based on performance on development sets of downstream tasks, finding our model to be resilient to changes in these as detailed in Appendix D.3. To evaluate on downstream tasks, we fine-tune trained models on each task for *five times* with different random seeds and report the averaged model performance. Our main metric is macro-averaged F1 score. To evaluate the overall ability of a model, we also report an aggregated metric that averages over the 16 in-domain datasets, eight out-of-domain tasks, and the nine multi-lingual Twitter datasets, respectively.
NPMI Weighting Matrix. We randomly sample 150M tweets from our original 350M Twitter dataset, each with at least two emojis. We extract all emojis in each tweet and count the frequencies of emojis as well as co-occurrences between emojis. To avoid noisy relatedness from low frequency pairs, we filter out emoji pairs (yi, ya) whose cooccurrences are less than 20 times. We employ Eq. 8 (Appendix B.1) to calculate NPMI for each emoji pair.
Baselines. We compare our methods to 11 baselines, as described in Appendix *D.2.*
10For short, we refer to the official released English RoBERTaBase as RoBERTa in the rest of the paper.
| Task | RB | MLM | E-MLM | SLP | Mir-B | Sim-S | Sim-D | SCL | LCL | WCL | DCL | InfoD-R | BTw | InfoD-B |
|-------------|-------|-------|---------|-------|---------|---------|---------|-------|-------|-------|-------|-----------|-------|-----------|
| CrisisOltea | 95.87 | 95.81 | 95.91 | 95.89 | 95.79 | 95.71 | 95.94 | 95.88 | 95.87 | 95.83 | 95.92 | 96.01 | 95.76 | 95.84 |
| EmoMoham | 78.76 | 79.68 | 80.79 | 81.25 | 78.27 | 77.00 | 81.05 | 78.79 | 77.66 | 77.65 | 80.54 | 81.34 | 80.23 | 81.96 |
| HateWas | 57.01 | 56.87 | 56.65 | 57.05 | 57.09 | 56.70 | 57.13 | 56.94 | 56.96 | 57.19 | 57.14 | 57.30 | 57.32 | 57.65 |
| HateDav | 76.04 | 77.55 | 77.79 | 75.70 | 75.88 | 74.40 | 77.15 | 77.20 | 75.90 | 76.87 | 76.79 | 77.29 | 76.93 | 77.94 |
| HateBas | 47.85 | 52.56 | 52.33 | 52.58 | 45.49 | 46.81 | 52.32 | 48.24 | 48.93 | 50.68 | 52.17 | 52.84 | 53.62 | 53.95 |
| HumorMea | 93.28 | 93.62 | 93.73 | 93.31 | 93.37 | 91.55 | 93.42 | 92.82 | 93.00 | 92.45 | 94.13 | 93.75 | 94.43 | 94.04 |
| IronyHee-A | 72.87 | 74.15 | 75.94 | 76.89 | 70.62 | 66.40 | 75.36 | 73.58 | 73.86 | 71.24 | 77.15 | 76.31 | 77.03 | 78.72 |
| IronyHee-B | 53.20 | 52.87 | 55.85 | 56.38 | 49.60 | 46.26 | 54.06 | 50.68 | 53.63 | 52.80 | 57.48 | 57.22 | 56.73 | 59.15 |
| OffenseZamp | 79.93 | 80.75 | 80.72 | 80.07 | 78.79 | 77.28 | 80.80 | 79.96 | 80.75 | 79.48 | 79.94 | 81.21 | 79.35 | 79.83 |
| SarcRiloff | 73.71 | 74.87 | 77.34 | 77.97 | 66.60 | 64.41 | 80.27 | 73.92 | 74.82 | 73.68 | 79.26 | 78.31 | 78.76 | 80.52 |
| SarcPtacek | 95.99 | 95.87 | 96.02 | 95.89 | 95.62 | 95.27 | 96.07 | 95.89 | 95.62 | 95.72 | 96.13 | 96.10 | 96.40 | 96.67 |
| SarcRajad | 85.21 | 86.19 | 86.38 | 86.89 | 84.31 | 84.06 | 87.20 | 85.18 | 84.74 | 85.89 | 87.45 | 87.00 | 87.13 | 87.20 |
| SarcBam | 79.79 | 80.48 | 80.66 | 81.08 | 79.02 | 77.58 | 81.40 | 79.32 | 79.62 | 79.53 | 81.31 | 81.49 | 81.76 | 83.20 |
| SentiRosen | 89.55 | 89.69 | 90.41 | 91.03 | 85.87 | 84.54 | 90.64 | 89.82 | 89.79 | 89.69 | 90.65 | 91.59 | 89.53 | 90.41 |
| SentiThel | 71.41 | 71.31 | 71.50 | 71.79 | 71.23 | 70.11 | 71.68 | 70.57 | 70.10 | 71.30 | 71.73 | 71.87 | 71.64 | 71.98 |
| StanceMoham | 69.44 | 69.47 | 70.50 | 69.54 | 66.23 | 64.96 | 70.48 | 69.14 | 69.55 | 70.33 | 69.74 | 71.13 | 68.33 | 68.22 |
| Average | 76.24 | 76.98 | 77.66 | 77.71 | 74.61 | 73.32 | 77.81 | 76.12 | 76.30 | 76.27 | 77.97 | 78.17 | 77.81 | 78.58 |
| EmotionWall | 66.51 | 66.02 | 67.89 | 67.28 | 62.33 | 59.59 | 67.68 | 66.56 | 67.55 | 63.99 | 68.36 | 68.41 | 64.48 | 65.61 |
| EmotionDem | 56.59 | 56.77 | 56.80 | 56.67 | 57.13 | 56.69 | 55.27 | 54.14 | 56.82 | 55.61 | 57.43 | 57.28 | 53.33 | 54.99 |
| SarcWalk | 67.50 | 66.16 | 67.42 | 68.78 | 63.95 | 59.39 | 65.04 | 66.98 | 66.93 | 65.46 | 67.39 | 68.45 | 67.27 | 67.30 |
| SarcOra | 76.92 | 76.34 | 77.10 | 77.25 | 75.57 | 74.68 | 77.12 | 76.94 | 75.99 | 76.95 | 77.76 | 77.41 | 77.33 | 76.88 |
| Senti-MR | 89.00 | 89.67 | 89.97 | 89.58 | 88.66 | 87.81 | 89.09 | 89.14 | 89.33 | 89.47 | 89.15 | 89.43 | 87.94 | 88.21 |
| Senti-YT | 90.22 | 91.33 | 91.22 | 91.98 | 88.63 | 85.27 | 92.23 | 90.29 | 89.82 | 91.07 | 92.26 | 91.98 | 92.25 | 92.41 |
| SST-5 | 54.96 | 55.83 | 56.15 | 55.94 | 54.18 | 52.84 | 55.09 | 55.33 | 54.28 | 55.30 | 56.00 | 56.37 | 55.74 | 55.93 |
| SST-2 | 94.57 | 94.33 | 94.39 | 94.51 | 93.97 | 91.49 | 94.29 | 94.50 | 94.24 | 94.61 | 94.64 | 94.98 | 93.32 | 93.73 |
| Average | 74.53 | 74.55 | 75.12 | 75.25 | 73.05 | 70.97 | 74.48 | 74.24 | 74.37 | 74.06 | 75.37 | 75.54 | 73.96 | 74.38 |
## 4 Main Results
Table 1 shows our main results. We refer to our models trained with LDCL (Eq. 5) and L*Inf oDCL*
(Eq. 7) in Table 1 as DCL and InfoDCL, respectively. We compare our models to 11 baselines on the 16 Twitter (in-domain) datasets and eight out-of-domain datasets.
In-Domain Results. InfoDCL outperforms Baseline (1), i.e., fine-tuning original RoBERTa, on each of the 16 in-domain datasets, with 1.93 average F1 improvement. InfoDCL also outperforms both the MLM and surrogate label prediction (SLP)
methods with 1.19 and 0.46 average F1 scores, respectively. Our proposed framework is thus able to learn more effective representations for SM. We observe that both Mirror-BERT and SimCSE-Self negatively impact downstream task performance, suggesting that while the excessive uniformity they result in is useful for semantic similarity tasks (Gao et al., 2021; Liu et al., 2021a), it hurts downstream SM tasks.11 We observe that our proposed variant of SimCSE, SimCSE-Distant, achieves sizable improvements over both Mirror-BERT and SimCSE-
11The analyses in Sections 5 and E.6 illustrate this behavior.
Self (3.20 and 4.49 average F1, respectively). This further demonstrates effectiveness of our distantly supervised objectives. SimCSE-Distant, however, cannot surpass our proposed InfoDCL framework on average F1 over all the tasks. We also note that InfoDCL outperforms SCL, LCL, and WCL
with 2.05, 1.87, and 1.90 average F1, respectively.
Although our simplified model, i.e., DCL, underperforms InfoDCL with 0.20 average F1, it outperforms all the baselines. Overall, our proposed models (DCL and InfoDCL) obtain best performance in 14 out of 16 tasks, and InfoDCL acquires the best average F1. We further investigate the relation between model performance and emoji presence, finding that our proposed approach not only improves tasks involving high amounts of emoji content (e.g., the test set of EmoMoham has 23.43%
tweets containing emojis) but also those without any emoji content (e.g., HateDav). 12 Compared to the original BERTweet, our InfoDCL-RoBERTa is still better (0.36 higher F1). This demonstrates not only effectiveness of our approach as compared to domain-specific models pre-trained simply with MLM, but also its data efficiency: BERTweet is pretrained with ∼ 27× more data (850M tweets vs.
only 31M for our model). Moreover, the BERTweet we continue training with our framework obtains an average improvement of 0.77 F1 (outperforms it on 14 individual tasks). The results demonstrate that our framework can enhance the domain-specific PLM as well.
Out-of-Domain Results. InfoDCL achieves an average improvement of 1.01 F1 (F1 = 75.54)
over the eight out-of-domain datasets compared to Baseline (1) as Table 1 shows. Our DCL and InfoDCL models also surpass all baselines on average, achieving highest on seven out of eight datasets. We notice the degradation of BERTweet when we evaluate on the out-of-domain data. Again, this shows generalizability of our proposed framework for leaning SM.
Significance Tests. We conduct two types of significance test on our results, i.e., the classical paired student's t-test (Fisher, 1936) and Almost Stochastic Order (ASO) (Dror et al., 2019). The t-test shows that our InfoDCL-RoBERTa significantly
(*p < .*05) outperforms 9 out of 11 baselines (exceptions are SimCSE-Distant and BERTweet) on the average scores over 16 in-domain datasets and 10 baselines (exception is SLP) on the average scores over eight out-of-domain datasets. ASO
concludes that InfoDCL-RoBERTa significantly
(*p < .*01) outperforms all 11 baselines on both average scores of in-domain and out-of-domain datasets. InfoDCL-BERTweet also significantly
(*p < .*05 by t-test, *p < .*01 by ASO) outperforms BERTweet on the average scores. We report standard deviations of our results and significance tests in Appendix E.1.
Additional Results. *Comparisons to Individual* SoTAs. We compare our models on each dataset to the task-specific SoTA model on that dataset, acquiring strong performance on the majority of these as we show in Table 12, Sec. E.2 in Appendix. *Beyond English.* We also demonstrate effectiveness and generalizability of our proposed framework on nine SM tasks in three additional languages in Sec. E.3. *Beyond Emojis.* To show the generalizability of our framework to surrogate labels other than emojis, we train DCL and InfoDCL with hashtags and observe comparable gains (Sec. E.4).
Beyond Sociopragmatics. Although the main objective of our proposed framework is to improve model representation for SM, we also evaluate our models on two topic classification datasets and a sentence evaluation benchmark, SentEval (Conneau and Kiela, 2018). This allows us to show both strengths of our framework (i.e., improvements beyond SM) and its limitations (i.e., on textual semantic similarity). More about SentEval is in Appendix C.2, and results are in Sections E.5 and E.6.
Few-Shot Learning Results. Since DCL and InfoDCL exploit an extensive set of cues, allowing them to capture a broad range of nuanced concepts of SM, we hypothesize they will be particularly effective in few-shot learning. We hence fine-tune our DCL, InfoDCL, strongest two baselines, and the original RoBERTa with varying amounts of downstream data.13 As Table 2 shows, for in-domain tasks, with only 20 and 100 training samples per task, our InfoDCLRoBERTa strikingly improves 11.66 and 17.52 points over the RoBERTa baseline, respectively.
Similarly, InfoDCL-RoBERTa is 13.88 and 17.39 over RoBERTa with 20 and 100 training samples for out-of-domain tasks. These gains also persist when we compare our framework to all other strong baselines, including as we increase data sample size. Clearly, our proposed framework remarkably alleviates the challenge of labelled data scarcity even under severely few-shot settings.14
![6_image_0.png](6_image_0.png)
![6_image_3.png](6_image_3.png)
RoBERTa 35.22 41.92 70.06 72.20 BERTweet 39.14 38.23 68.35 73.50 Ours (SimCSE-Distant) 44.99 54.06 71.56 73.39 Ours (DCL) 46.60 58.31 72.00 73.86 Ours (InfoDCL-RoBERTa) **46.88 59.44 72.72 74.47**
Ours (InfoDCL-BERTweet) 45.29 52.64 71.31 74.03 RoBERTa 27.07 41.12 69.26 71.42 BERTweet 30.89 39.40 62.52 68.22 Ours (SimCSE-Distant) 39.02 53.95 66.85 70.50 Ours (DCL) **42.19** 56.62 68.22 71.21 Ours (InfoDCL-RoBERTa) 40.96 **58.51 69.36 71.92**
Ours (InfoDCL-BERTweet) 38.72 48.87 65.64 69.25 N 20 100 500 1000
![6_image_1.png](6_image_1.png)
![6_image_2.png](6_image_2.png)
## 5 Ablation Studies And Analyses
Ablation Studies. We investigate effectiveness of each of the ingredients in our proposed framework through ablation studies exploiting TweetEmoji-EN for pre-training. We evaluate on the 16 in-domain SM datasets with the same hyper-parameters identified in Sec. D.3 and report 13Data splits for few-shot experiments are in Appendix C.2.
14We offer additional few-shot results in Appendix E.7.
InfoDCL 78.17 (±0.19) -
wo CCL 77.75†⋆(±0.18) -0.42
wo LCL 78.09†(±0.28) -0.08 wo CCL & LCL 77.98†(±0.19) -0.19
wo SLP 76.37†⋆(±0.35) -0.80
wo MLM 77.12 (±0.31) -0.05
wo SLP & MLM (Our DCL) 77.97†(±0.24) -0.20
wo EpW-RP 78.00†(±0.41) -0.17
w additional weighting model 78.16 (±0.21) -0.02
InfoDCL+Self-Aug 77.79†⋆(±0.27) -0.38
Model Avg F1 **Diff**
results over five runs. As Table 3 shows, InfoDCL
outperforms all other settings, demonstrating the utility of the various components in our model.
Results show the SLP objective is the most important ingredient in InfoDCL (with a drop of 0.80 average F1 when removed). However, when we drop both SLP and MLM objectives, DCL (our second best proposed model) only loses 0.20 F1 as compared to InfoDCL. Results also show that our proposed CCL is more effective than LCL: CCL
is second most important component and results in 0.42 F1 drop vs. only 0.08 F1 drop when ablating LCL. Interestingly, when we remove *both* CCL
and LCL, the model is relatively less affected (i.e.,
0.19 F1 drop) than when we remove CCL alone.
We hypothesize this is the case since CCL and LCL are two somewhat opposing objectives: LCL
tries to make individual samples distinguishable across confusable classes, while CCL tries to keep the semantic relations between confusable classes.
Overall, our results show the utility of distantly supervised contrastive loss. Although distant labels are intrinsically noisy, our InfoDCL is able to mitigate this noise by using CCL and LCL losses. Our epoch-wise re-pairing (EpW-RP) strategy is also valuable, as removing it results in a drop of 0.18 average F1. We believe EpW-RP helps regularize our model as we dynamically re-pair an anchor with a new positive pair for each training epoch.
We also train an additional network to produce the weight vector, wi, in LCL loss as Suresh and Ong
(2021) proposed instead of using our own main model to assign this weight vector end-to-end. We observe a slight drop of 0.02 average F1 with the additional model, showing the superiority of our end-to-end approach (which is less computational costly). We also adapt a simple self-augmentation method introduced by Liu et al. (2021a) to our distant supervision setting: given an anchor xi, we acquire a positive set {xi, xm+i, x2m+i, x3m+i}
where xm+iis a sample with the same emoji as the anchor, x2m+iis an augmented version (applying dropout and masking) of xi, and x3m+iis an augmented version of xm+i. As Table 3 shows, InfoDCL+Self-Aug underperforms InfoDCL (0.38 F1 drop). We investigate further issues as to how to handle inter-class relations in our models and answer the following questions:
Should we cluster or push apart the large number of fine-grained (correlated) classes? In previous works, contrastive learning is used to push apart samples from different classes. Suresh and Ong (2021) propose the LCL to penalize samples that is more confusable. In this paper, we hypothesize that we should also incorporate inter-class relations into learning objectives (our CCL). Hence, we introduce the PMI score into SCL to *scale down* the loss of a pair belonging to semantically related classes (emojis) as defined in Section 3.3 (which should help cluster our fine-grained classes). Here, we investigate an alternative strategy where we explore using the PMI scores as weights to **scale** up the loss of a pair with related labels (which should keep the fine-grained emoji classes separate). Hence, we set wyi,ya = 1 + Sim(yi, ya)
where Sim(yi, ya) = max(0, npmi(yi, ya)). We train RoBERTa on 5M random samples from the training set of TweetEmoji-EN with the overall loss function in Eq. 7, one time using this new weighting method and another time using the weighting method used in all our reported models so far: wyi,ya = 1 − Sim(yi, ya). Given these two ways to acquire wyi,ya in Eq. 4, we fine-tune the trained model on the 16 Twitter tasks. Our results in Table 4 show the penalizing strategy to perform lower than our original clustering strategy reported in all experiments in this paper. We also present their performance on each dataset in Table 5.
Table 4: Comparing different weighting strategies and methods of measuring inter-class similarity.
Can we use the emoji class embedding (ECEmb) for corpus-level weighting? We experiment with using the embedding of the emoji class
| wyi ,ya | Method | Average |
|-----------------|----------|-----------|
| 1 − Sim(yi, ya) | PMI | 77.70 |
| EC-Emb | 77.53 | |
| 1 + Sim(yi, ya) | PMI | 77.39 |
| EC-Emb | 77.36 | |
![8_image_0.png](8_image_0.png)
![8_image_1.png](8_image_1.png)
![8_image_2.png](8_image_2.png)
,ya1 − Sim(yi, ya) 1 + Sim(yi, ya)RB
Method PMI CLS-emb PMI CLS-emb CrisisOltea 95.93 95.93 95.88 95.95 95.87 EmoMoham 81.03 81.30 81.00 80.43 78.76 HateWas 57.26 57.16 57.35 57.26 57.01 HateDav 76.07 77.42 76.95 76.59 76.04 HateBas 51.86 50.47 52.04 51.68 47.85 HumorMea 93.77 93.66 93.65 93.53 93.28 IronyHee-A 75.39 73.95 74.09 74.32 72.87 IronyHee-B 57.02 55.50 56.99 55.10 53.20 OffenseZamp 80.29 80.89 81.08 80.81 79.93 SarcRiloff 76.73 75.90 72.45 74.64 73.71 SarcPtacek 96.01 95.98 95.99 95.73 95.99 SarcRajad 86.81 86.28 86.22 86.13 85.21 SarcBam 81.40 81.02 81.18 80.48 79.79 SentiRosen 91.30 91.64 91.45 91.95 89.55 SentiThel 71.72 71.71 72.02 71.65 71.44 StanceMoham 70.69 71.60 69.91 71.57 69.44 Average 77.70 77.53 77.39 77.36 76.24
(EC-Emb) as an alternative weighting method in place of PMI. Namely, we train RoBERTa on SLP
(using the training set of TweetEmoji-EN) for three epochs with a standard cross-entropy loss.
We then extract weights of the last classification layer and use these weights as class embeddings, E = {e1, e2*, . . . , e*C}, where ei = Rd, d is hidden dimension (i.e., 768), and |C| is the size of classes (i.e., 1, 067). The correlation of each pair of emojis is computed using cosine similarity, i.e.,
Sim(yi, ya) = e⊤
i ea
∥ei∥·∥ea∥
.
15 As Table 4 and 5 shows, using PMI scores performs slightly better than using class embeddings in both the clustering and penalizing strategies mentioned previously in the current section. For more intuition, we handpick three query emojis and manually compare the quality of similarity measures produced by both PMI and class embeddings for these. As Table 17 in Appendix shows, both PMI and EC-Emb are capable of capturing sensible correlations between emojis (although the embedding approach includes a few semantically distant emojis, such as the emoji
' ' being highly related to ' ').
Qualitative Analysis. To further illustrate the effectiveness of the representation learned by InfoDCL, we compare a t-SNE (Van der Maaten and Hinton, 2008) visualization of it to that of two strong baselines on two SM datasets.16 Fig. 2 shows that our model has clearly learned to cluster the samples with similar semantics and separate semantically different clusters before fine-tuning on the gold downstream samples, for both in-domain 15Self-similarity is set to 0.
16Note that we use our model representations *without* downstream fine-tuning.
and out-of-domain tasks. We provide more details
![8_image_3.png](8_image_3.png)
about how we obtain the t-SNE vitalization and provide another visualization study in Appendix F.2.
![8_image_4.png](8_image_4.png)
![8_image_5.png](8_image_5.png)
Uniformity-Tolerance Dilemma. Following Wang and Liu (2021), we investigate uniformity and tolerance of our models using Dev data of downstream tasks.17 As Fig. 3 shows, unlike other models, our proposed DCL and InfoDCL models make a balance between uniformity and tolerance
(which works best for SM).
## 6 Conclusion
We proposed InfoDCL, a novel framework for adapting PLMs to SM exploiting surrogate labels in contrastive learning. We demonstrated effectiveness of our framework on 16 in-domain and eight out-of-domain datasets as well as nine non-English datasets. Our model outperforms 11 strong baselines and exhibits strikingly powerful performance in few-shot learning.
17For details see Sec. G in Appendix.
## 7 Limitations
We identify the potential limitations of our work as follow: (1) Distant labels may not be available in every application domain (e.g., patient notes in clinical application), although domain adaptation can be applied in these scenarios. We also believe that distantly supervised contrastive learning can be exploited in tasks involving image and video where surrogate labels are abundant. (2) We also acknowledge that the offline NPMI matrix of our proposed CCL method depends on a dataset
(distantly) labeled with multiple classes in each sample. To alleviate this limitation, we explore an alternative method that uses learned class embeddings to calculate the inter-class relations in Section 5. This weighting approach achieves sizable improvement over RoBERTa on 16 in-domain datasets, though it underperforms our NPMI-based approach. (3) Our framework does not always work on tasks outside SM. For example, our model underperforms self-supervised CL models, i.e., SimCSESelf and Mirror-BERT, on semantic textual similarity task in Appendix E.6. As we showed, however, our framework exhibits promising performance on some other tasks. For example, our hashtag-based model acquires best performance on the topic classification task, as shown in Appendix E.5.
## Ethical Considerations
All our evaluation datasets are collected from publicly available sources. Following privacy protection policy, all the data we used for model pretraining and fine-tuning are anonymized. Some annotations in the downstream data (e.g., for hate speech tasks) can carry annotator bias. We will accompany our data and model release with model cards. We will also provide more detailed ethical considerations on a dedicated GitHub repository.
All our models will be distributed for research with a clear purpose justification.
## Acknowledgements
We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada (NSERC; RGPIN-2018-04267), the Social Sciences and Humanities Research Council of Canada (SSHRC; 435-2018-0576; 895-20201004; 895-2021-1008), Canadian Foundation for Innovation (CFI; 37771), Digital Research Alliance of Canada (the Alliance),18 and UBC ARCSockeye.19 Any opinions, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of NSERC, SSHRC, CFI, the Alliance, or UBC ARC-Sockeye.
## References
Muhammad Abdul-Mageed, Chiyu Zhang, Azadeh Hashemi, and El Moatez Billah Nagoudi. 2020.
AraNet: A deep learning toolkit for Arabic social media. In Proceedings of the 4th Workshop on OpenSource Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection, pages 16–23, Marseille, France. European Language Resource Association.
Eneko Agirre, Carmen Banea, Claire Cardie, Daniel M.
Cer, Mona T. Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Iñigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, German Rigau, Larraitz Uria, and Janyce Wiebe. 2015. Semeval-2015 task 2: Semantic textual similarity, english, spanish and pilot on interpretability. In *Proceedings of the 9th International Workshop on Semantic Evaluation, SemEval@NAACLHLT 2015, Denver, Colorado, USA, June 4-5, 2015*,
pages 252–263. The Association for Computer Linguistics.
Eneko Agirre, Carmen Banea, Claire Cardie, Daniel M.
Cer, Mona T. Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2014. Semeval-2014 task 10: Multilingual semantic textual similarity. In *Proceedings of the* 8th International Workshop on Semantic Evaluation, SemEval@COLING 2014, Dublin, Ireland, August 23-24, 2014, pages 81–91. The Association for Computer Linguistics.
Eneko Agirre, Daniel M. Cer, Mona T. Diab, and Aitor Gonzalez-Agirre. 2012. Semeval-2012 task 6: A
pilot on semantic textual similarity. In Proceedings of the 6th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2012, Montréal, Canada, June 7-8, 2012, pages 385–393. The Association for Computer Linguistics.
Eneko Agirre, Daniel M. Cer, Mona T. Diab, Aitor Gonzalez-Agirre, and Weiwei Guo. 2013. *sem 2013 shared task: Semantic textual similarity. In Proceedings of the Second Joint Conference on Lexical and Computational Semantics, *SEM 2013, June 13-14, 2013, Atlanta, Georgia, USA, pages 32–43. Association for Computational Linguistics.
Eneko Agirre, Aitor Gonzalez-Agirre, Iñigo LopezGazpio, Montse Maritxalar, German Rigau, and Larraitz Uria. 2016. Semeval-2016 task 2: Interpretable semantic textual similarity. In *Proceedings of the* 18https://www.computecanada.ca 19https://arc.ubc.ca/ubc-arc-sockeye
10th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2016, San Diego, CA, USA,
June 16-17, 2016, pages 512–524. The Association for Computer Linguistics.
David Bamman and Noah A. Smith. 2015. Contextualized sarcasm detection on twitter. In Proceedings of the Ninth International Conference on Web and Social Media, ICWSM 2015, University of Oxford, Oxford, UK, May 26-29, 2015, pages 574–577. AAAI
Press.
Francesco Barbieri, José Camacho-Collados, Luis Espinosa Anke, and Leonardo Neves. 2020. TweetEval:
Unified benchmark and comparative evaluation for tweet classification. In *Findings of the Association* for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of *Findings of ACL*, pages 1644–1650. Association for Computational Linguistics.
Francesco Barbieri, José Camacho-Collados, Francesco Ronzano, Luis Espinosa Anke, Miguel Ballesteros, Valerio Basile, Viviana Patti, and Horacio Saggion. 2018. Semeval 2018 task 2: Multilingual emoji prediction. In *Proceedings of The* 12th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2018, New Orleans, Louisiana, USA, June 5-6, 2018, pages 24–33. Association for Computational Linguistics.
Valerio Basile, Cristina Bosco, Elisabetta Fersini, Debora Nozza, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, and Manuela Sanguinetti. 2019. SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter. In *Proceedings of the* 13th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2019, Minneapolis, MN, USA, June 6-7, 2019, pages 54–63. Association for Computational Linguistics.
Federico Bianchi, Debora Nozza, and Dirk Hovy.
2021. FEEL-IT: emotion and sentiment classification for the italian language. In *Proceedings of the* Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, WASSA@EACL 2021, Online, April 19, 2021, pages 76–83. Association for Computational Linguistics.
Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, Nicolas Pinto, and Joseph Turian. 2020.
Experience grounds language. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8718–8735, Online. Association for Computational Linguistics.
Cristina Bosco, Felice Dell'Orletta, Fabio Poletto, Manuela Sanguinetti, and Maurizio Tesconi. 2018.
Overview of the EVALITA 2018 hate speech detection task. In *Proceedings of the Sixth Evaluation Campaign of Natural Language Processing and* Speech Tools for Italian. Final Workshop (EVALITA
2018) co-located with the Fifth Italian Conference on Computational Linguistics (CLiC-it 2018), Turin, Italy, December 12-13, 2018, volume 2263 of CEUR
Workshop Proceedings. CEUR-WS.org.
Gerlof Bouma. 2009. Normalized (pointwise) mutual information in collocation extraction. Proceedings of GSCL, 30:31–40.
Rui Cao, Yihao Wang, Yuxin Liang, Ling Gao, Jie Zheng, Jie Ren, and Zheng Wang. 2022. Exploring the impact of negative samples of contrastive learning: A case study of sentence embedding. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3138–3152. Association for Computational Linguistics.
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo LopezGazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1–14, Vancouver, Canada. Association for Computational Linguistics.
Ting Chen, Yizhou Sun, Yue Shi, and Liangjie Hong.
2017. On sampling strategies for neural networkbased collaborative filtering. In *Proceedings of the* 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, August 13 - 17, 2017, pages 767–776. ACM.
Alessandra Teresa Cignarella, Simona Frenda, Valerio Basile, Cristina Bosco, Viviana Patti, and Paolo Rosso. 2018. Overview of the EVALITA 2018 task on irony detection in italian tweets (ironita). In Proceedings of the Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2018) co-located with the Fifth Italian Conference on Computational Linguistics (CLiC-it 2018), Turin, Italy, December 12-13, 2018, volume 2263 of *CEUR Workshop Proceedings*.
CEUR-WS.org.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020*, pages 8440–8451. Association for Computational Linguistics.
Alexis Conneau and Douwe Kiela. 2018. Senteval: An evaluation toolkit for universal sentence representations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation, LREC 2018, Miyazaki, Japan, May 7-12, 2018. European Language Resources Association (ELRA).
Michele Corazza, Stefano Menini, Elena Cabrio, Sara Tonelli, and Serena Villata. 2020. Hybrid emojibased masked language models for zero-shot abusive language detection. In *Findings of the Association*
for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of *Findings of ACL*, pages 943–949. Association for Computational Linguistics.
Gianna M. Del Corso, Antonio Gulli, and Francesco Romani. 2005. Ranking a stream of news. In Proceedings of the 14th international conference on World Wide Web, WWW 2005, Chiba, Japan, May 10-14, 2005, pages 97–106. ACM.
Kheir Eddine Daouadi, Rim Zghal Rebaï, and Ikram Amous. 2021. Optimizing semantic deep forest for tweet topic classification. *Inf. Syst.*, 101:101801.
Thomas Davidson, Dana Warmsley, Michael W. Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the Eleventh International Conference on Web and Social Media, ICWSM 2017, Montréal, Québec, Canada, May 15-18, 2017, pages 512–515.
AAAI Press.
Dorottya Demszky, Dana Movshovitz-Attias, Jeongwoo Ko, Alan S. Cowen, Gaurav Nemade, and Sujith Ravi.
2020. Goemotions: A dataset of fine-grained emotions. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL
2020, Online, July 5-10, 2020, pages 4040–4054.
Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA,
June 2-7, 2019, Volume 1 (Long and Short Papers),
pages 4171–4186. Association for Computational Linguistics.
William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases.
In Proceedings of the Third International Workshop on Paraphrasing, IWP@IJCNLP 2005, Jeju Island, Korea, October 2005, 2005. Asian Federation of Natural Language Processing.
Rotem Dror, Segev Shlomov, and Roi Reichart. 2019.
Deep dominance - how to properly compare deep neural models. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2773–2785, Florence, Italy. Association for Computational Linguistics.
Kawin Ethayarajh. 2019. How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 55–65, Hong Kong, China. Association for Computational Linguistics.
Hongchao Fang and Pengtao Xie. 2020. CERT: contrastive self-supervised learning for language understanding. *CoRR*, abs/2005.12766.
Bjarke Felbo, Alan Mislove, Anders Søgaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm.
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP
2017, Copenhagen, Denmark, September 9-11, 2017, pages 1615–1625. Association for Computational Linguistics.
Ronald Aylmer Fisher. 1936. Design of experiments.
British Medical Journal, 1(3923):554.
Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021.
Simcse: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 6894–
6910. Association for Computational Linguistics.
Bilal Ghanem, Jihen Karoui, Farah Benamara, Véronique Moriceau, and Paolo Rosso. 2019. IDAT
at FIRE2019: overview of the track on irony detection in arabic tweets. In *FIRE '19: Forum for* Information Retrieval Evaluation, Kolkata, India, December, 2019, pages 10–13. ACM.
John M. Giorgi, Osvald Nitski, Bo Wang, and Gary D.
Bader. 2021. Declutr: Deep contrastive learning for unsupervised textual representations. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 879–895. Association for Computational Linguistics.
Beliz Gunel, Jingfei Du, Alexis Conneau, and Veselin Stoyanov. 2021. Supervised contrastive learning for pre-trained language model fine-tuning. In *9th International Conference on Learning Representations,*
ICLR 2021, Virtual Event, Austria, May 3-7, 2021.
OpenReview.net.
Raia Hadsell, Sumit Chopra, and Yann LeCun. 2006.
Dimensionality reduction by learning an invariant mapping. In *2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition*
(CVPR 2006), 17-22 June 2006, New York, NY, USA,
pages 1735–1742. IEEE Computer Society.
Cynthia Van Hee, Els Lefever, and Véronique Hoste. 2018. Semeval-2018 task 3: Irony detection in english tweets. In Proceedings of The 12th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2018, New Orleans, Louisiana, USA, June 5-6, 2018, pages 39–50. Association for Computational Linguistics.
Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In *Proceedings of the Tenth* ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Seattle, Washington, USA, August 22-25, 2004, pages 168–177. ACM.
Pei Ke, Haozhe Ji, Siyang Liu, Xiaoyan Zhu, and Minlie Huang. 2020. SentiLARE: Sentiment-aware language representation learning with linguistic knowledge. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 6975–6988. Association for Computational Linguistics.
Md Tawkat Islam Khondaker, El Moatez Billah Nagoudi, AbdelRahim Elmadany, Muhammad Abdul-Mageed, and Laks Lakshmanan, V.S. 2022.
A benchmark study of contrastive learning for Arabic social meaning. In *Proceedings of the The Seventh Arabic Natural Language Processing Workshop*
(WANLP), pages 63–75, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. 2020. Supervised contrastive learning. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
William Labov. 1972. *Sociolinguistic patterns*. 4. University of Pennsylvania press.
Geoffrey N Leech. 1983. *Principles of pragmatics*. London: Longman.
Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, and Lei Li. 2020. On the sentence embeddings from pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 9119–9130, Online. Association for Computational Linguistics.
Fangyu Liu, Ivan Vulic, Anna Korhonen, and Nigel Collier. 2021a. Fast, effective, and self-supervised:
Transforming masked language models into universal lexical and sentence encoders. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 1442–1459. Association for Computational Linguistics.
Junhua Liu, Trisha Singhal, Luciënne T. M. Blessing, Kristin L. Wood, and Kwan Hui Lim. 2021b. CrisisBERT: A robust transformer for crisis classification and contextual crisis embedding. In HT '21: 32nd ACM Conference on Hypertext and Social Media, Virtual Event, Ireland, 30 August 2021 - 2 September 2021, pages 133–141. ACM.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692.
Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In Proceedings of the Ninth International Conference on Language Resources and Evaluation, LREC 2014, Reykjavik, Iceland, May 26-31, 2014, pages 216–223.
European Language Resources Association (ELRA).
J. A. Meaney, Steven R. Wilson, Luis Chiruzzo, Adam Lopez, and Walid Magdy. 2021. Semeval 2021 task 7: Hahackathon, detecting and rating humor and offense. In *Proceedings of the 15th International Workshop on* Semantic Evaluation, SemEval@ACL/IJCNLP 2021, Virtual Event / Bangkok, Thailand, August 5-6, 2021, pages 105–119. Association for Computational Linguistics.
Yu Meng, Chenyan Xiong, Payal Bajaj, saurabh tiwary, Paul Bennett, Jiawei Han, and XIA SONG. 2021.
Coco-lm: Correcting and contrasting text sequences for language model pretraining. In Advances in Neural Information Processing Systems, volume 34, pages 23102–23114. Curran Associates, Inc.
George A. Miller. 1995. Wordnet: A lexical database for english. *Commun. ACM*, 38(11):39–41.
Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. Semeval2018 task 1: Affect in tweets. In *Proceedings of* The 12th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2018, New Orleans, Louisiana, USA, June 5-6, 2018, pages 1–17. Association for Computational Linguistics.
Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiao-Dan Zhu, and Colin Cherry. 2016.
Semeval-2016 task 6: Detecting stance in tweets. In Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2016, San Diego, CA, USA, June 16-17, 2016, pages 31–41.
The Association for Computer Linguistics.
Hamdy Mubarak, Kareem Darwish, Walid Magdy, Tamer Elsayed, and Hend Al-Khalifa. 2020.
Overview of OSACT4 Arabic offensive language detection shared task. In *Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language* Detection, pages 48–52, Marseille, France. European Language Resource Association.
Dat Quoc Nguyen, Thanh Vu, and Anh Tuan Nguyen.
2020. BERTweet: A pre-trained language model for English tweets. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing: System Demonstrations, EMNLP 2020 -
Demos, Online, November 16-20, 2020, pages 9–14.
Association for Computational Linguistics.
Dong Nguyen, Laura Rosseel, and Jack Grieve. 2021.
On learning and representing social meaning in NLP:
a sociolinguistic perspective. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 603–612. Association for Computational Linguistics.
Alexandra Olteanu, Carlos Castillo, Fernando Diaz, and Sarah Vieweg. 2014. Crisislex: A lexicon for collecting and filtering microblogged communications in crises. In Proceedings of the Eighth International Conference on Weblogs and Social Media, ICWSM
2014, Ann Arbor, Michigan, USA, June 1-4, 2014.
The AAAI Press.
Shereen Oraby, Vrindavan Harrison, Lena Reed, Ernesto Hernandez, Ellen Riloff, and Marilyn A.
Walker. 2016. Creating and characterizing a diverse corpus of sarcasm in dialogue. In Proceedings of the SIGDIAL 2016 Conference, The 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 13-15 September 2016, Los Angeles, CA, USA, pages 31–41. The Association for Computer Linguistics.
Reynier Ortega-Bueno, Francisco Rangel, D Hernández Farıas, Paolo Rosso, Manuel Montes-y Gómez, and José E Medina Pagola. 2019. Overview of the task on irony detection in spanish variants. In *Proceedings of the Iberian Languages Evaluation Forum co-located with 35th Conference of the Spanish Society for Natural Language Processing, IberLEF@SEPLN 2019, Bilbao, Spain, September 24th,*
2019, volume 2421 of *CEUR Workshop Proceedings*,
pages 229–256. CEUR-WS.org.
Lin Pan, Chung-Wei Hang, Avirup Sil, and Saloni Potdar. 2022. Improved text classification via contrastive adversarial training. In Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelveth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022 Virtual Event, February 22 - March 1, 2022, pages 11130–11138. AAAI Press.
Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, 21-26 July, 2004, Barcelona, Spain, pages 271–278. ACL.
Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In *ACL 2005, 43rd Annual* Meeting of the Association for Computational Linguistics, Proceedings of the Conference, 25-30 June 2005, University of Michigan, USA, pages 115–124.
The Association for Computer Linguistics.
Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake VanderPlas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Edouard Duchesnay. 2011. Scikit-learn:
Machine learning in python. *J. Mach. Learn. Res.*,
12:2825–2830.
Tomás Ptácek, Ivan Habernal, and Jun Hong. 2014. Sarcasm detection on czech and english twitter. In *COLING 2014, 25th International Conference on Computational Linguistics, Proceedings of the Conference:*
Technical Papers, August 23-29, 2014, Dublin, Ireland, pages 213–223. ACL.
Ashwin Rajadesingan, Reza Zafarani, and Huan Liu.
2015. Sarcasm detection on twitter: A behavioral modeling approach. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, WSDM 2015, Shanghai, China, February 2-6, 2015, pages 97–106. ACM.
Ellen Riloff, Ashequl Qadir, Prafulla Surve, Lalindra De Silva, Nathan Gilbert, and Ruihong Huang. 2013.
Sarcasm as contrast between a positive sentiment and negative situation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, 18-21 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA,
A meeting of SIGDAT, a Special Interest Group of the ACL, pages 704–714. ACL.
Joshua David Robinson, Ching-Yao Chuang, Suvrit Sra, and Stefanie Jegelka. 2021. Contrastive learning with hard negative samples. In *9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021*. OpenReview.net.
Sara Rosenthal, Noura Farra, and Preslav Nakov. 2017.
Semeval-2017 task 4: Sentiment analysis in twitter. In *Proceedings of the 11th International Workshop* on Semantic Evaluation, SemEval@ACL 2017, Vancouver, Canada, August 3-4, 2017, pages 502–518.
Association for Computational Linguistics.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In *Proceedings of the 2013 Conference on* Empirical Methods in Natural Language Processing, EMNLP 2013, 18-21 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1631–1642. ACL.
Varsha Suresh and Desmond C. Ong. 2021. Not all negatives are equal: Label-aware contrastive loss for fine-grained text classification. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event
/ Punta Cana, Dominican Republic, 7-11 November, 2021, pages 4381–4394. Association for Computational Linguistics.
Sali A Tagliamonte. 2015. *Making waves: The story of* variationist sociolinguistics. John Wiley & Sons.
Mike Thelwall, Kevan Buckley, and Georgios Paltoglou.
2012. Sentiment strength detection for the social web. *J. Assoc. Inf. Sci. Technol.*, 63(1):163–173.
Jenny A Thomas. 2014. Meaning in interaction: An introduction to pragmatics. Routledge.
Hao Tian, Can Gao, Xinyan Xiao, Hao Liu, Bolei He, Hua Wu, Haifeng Wang, and Feng Wu. 2020. SKEP:
sentiment knowledge enhanced pre-training for sentiment analysis. In *Proceedings of the 58th Annual* Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 4067–4076. Association for Computational Linguistics.
Laurens Van der Maaten and Geoffrey Hinton. 2008.
Visualizing data using t-sne. Journal of machine learning research, 9(11).
Ellen M. Voorhees and Dawn M. Tice. 2000. Building a question answering test collection. In SIGIR 2000:
Proceedings of the 23rd Annual International ACM
SIGIR Conference on Research and Development in Information Retrieval, July 24-28, 2000, Athens, Greece, pages 200–207. ACM.
Marilyn A. Walker, Jean E. Fox Tree, Pranav Anand, Rob Abbott, and Joseph King. 2012. A corpus for research on deliberation and debate. In *Proceedings of the Eighth International Conference on Language Resources and Evaluation, LREC 2012, Istanbul, Turkey, May 23-25, 2012*, pages 812–817.
European Language Resources Association (ELRA).
Harald G Wallbott and Klaus R Scherer. 1986. How universal and specific is emotional experience? evidence from 27 countries on five continents. Social science information, 25(4):763–795.
Dong Wang, Ning Ding, Piji Li, and Haitao Zheng.
2021. CLINE: contrastive learning with semantic negative examples for natural language understanding. In *Proceedings of the 59th Annual Meeting of* the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 2332–2342. Association for Computational Linguistics.
Feng Wang and Huaping Liu. 2021. Understanding the behaviour of contrastive loss. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR
2021, virtual, June 19-25, 2021, pages 2495–2504.
Computer Vision Foundation / IEEE.
Tongzhou Wang and Phillip Isola. 2020. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of *Proceedings of Machine Learning* Research, pages 9929–9939. PMLR.
Zeerak Waseem and Dirk Hovy. 2016. Hateful symbols or hateful people? predictive features for hate speech detection on twitter. In *Proceedings of the Student* Research Workshop, SRW@HLT-NAACL 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016, pages 88–93. The Association for Computational Linguistics.
Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005.
Annotating expressions of opinions and emotions in language. *Lang. Resour. Evaluation*, 39(2-3):165–
210.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers:
State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2020 - Demos, Online, November 16-20, 2020, pages 38–45. Association for Computational Linguistics.
Yue Yu, Simiao Zuo, Haoming Jiang, Wendi Ren, Tuo Zhao, and Chao Zhang. 2021. Fine-tuning pretrained language model with weak supervision: A
contrastive-regularized self-training approach. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 1063–1077. Association for Computational Linguistics.
Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019a. Predicting the type and target of offensive posts in social media. In *Proceedings of the 2019* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 1415–1420. Association for Computational Linguistics.
Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar.
2019b. SemEval-2019 task 6: Identifying and categorizing offensive language in social media (OffensEval). In *Proceedings of the 13th International Workshop on Semantic Evaluation, SemEval@NAACLHLT 2019, Minneapolis, MN, USA, June 6-7, 2019*,
pages 75–86. Association for Computational Linguistics.
Chiyu Zhang and Muhammad Abdul-Mageed. 2022.
Improving social meaning detection with pragmatic masking and surrogate fine-tuning. In *Proceedings* of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis, pages 141–156, Dublin, Ireland. Association for Computational Linguistics.
Dejiao Zhang, Shang-Wen Li, Wei Xiao, Henghui Zhu, Ramesh Nallapati, Andrew O. Arnold, and Bing Xiang. 2021a. Pairwise supervised contrastive learning of sentence representations. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event
/ Punta Cana, Dominican Republic, 7-11 November, 2021, pages 5786–5798. Association for Computational Linguistics.
Jianguo Zhang, Trung Bui, Seunghyun Yoon, Xiang Chen, Zhiwei Liu, Congying Xia, Quan Hung Tran, Walter Chang, and Philip S. Yu. 2021b. Few-shot intent detection via contrastive pre-training and finetuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 1906–
1912. Association for Computational Linguistics.
Mingkai Zheng, Fei Wang, Shan You, Chen Qian, Changshui Zhang, Xiaogang Wang, and Chang Xu. 2021. Weakly supervised contrastive learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 10042–
10051.
Kun Zhou, Beichen Zhang, Xin Zhao, and Ji-Rong Wen.
2022. Debiased contrastive learning of unsupervised sentence representations. In *Proceedings of the 60th* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6120–
6130. Association for Computational Linguistics.
## Appendices A Survey Of Contrastive Learning Frameworks.
There has been a flurry of recent contrastive learning frameworks introducing self-supervised, semisupervised, weakly-supervised, and strongly supervised learning objectives. These frameworks differ across a number of key dimensions: (i) *type of* the object (e.g., image, sentence, document), **(ii)**
positive example creation method (e.g., same class as anchor, anchor with few words replaced with synonyms), **(iii)** *negative example* creation method
(e.g., random sample, anchor with few words replaced with antonyms), **(iv)** *supervision* level (e.g.,
self, semi, weakly, hybrid, strong), and (v) *weighing of negative samples* (e.g., equal, confidencebased). Table 6 provides a summary of previous frameworks, comparing them with our proposed framework.
## B Method B.1 Normalized Point-Wise Mutual Information
The normalized point-wise mutual information
(NPMI) (Bouma, 2009) between ya and yi.
npmi(yi, ya) ∈ [−1, 1] is formulated as:
$$npmi(y_{i},y_{a})=\left(\log\frac{p(y_{i},y_{a})}{p(y_{i})p(y_{a})}\right)/-\log p(y_{i},y_{a}).\tag{8}$$
When npmi(yi, ya) = 1, ya and yi only occur together and are expected to express highly similar semantic meanings. When npmi(yi, ya) = 0, ya and yi never occur together and are expected to express highly dissimilar (i.e., different) semantic meanings. We only utilize NPMI scores of related class pairs, i.e., npmi(yi, ya) > 0. As the NPMI
score of ya and yiis higher, the weight wyi,ya is lower. As a result of incorporating NPMI scores into the negative comparison in the SCL, we anticipate that the representation model would learn better inter-class correlations and cluster the related fine-grained classes.
## B.2 Surrogate Label Predication
Our proposed framework also exploits a surrogate label prediction (SLP) objective, where the encoder Φ is optimized for the surrogate label prediction task using cross entropy. Specifically, we pass the hidden representation hithrough two feed-forward
| Reference | Object Type | Positive Sample | Neg. Sample | Supervision | Neg. Weighting |
|-----------------------|---------------|---------------------------------------------------------------------|--------------------------------------------------|---------------|---------------------|
| Khosla et al. (2020) | Image | Same class as anchor | Random sample | Strong | Equal |
| Giorgi et al. (2021) | Textual span | Span that overlaps with, adjacent to, or subsumed by anchor span | Random span | Self | Equal |
| Gunel et al. (2021) | Document | Same class as anchor | Random sample | Strong | Equal |
| Zhang et al. (2021b) | Utterance | Few tokens masked from anchor / Same class as anchor | Random sample | Self / Strong | Equal |
| Gao et al. (2021) | Sentence | Anchor with different hidden dropout / Sentence entails with anchor | Random sample / Sentence contradicts with anchor | Self / Strong | Equal |
| Wang et al. (2021) | Sentence | Anchor with few words replaced with synonyms, hypernyms and morphological changes | Anchor with few words replaced with antonyms and | Self | Equal |
| random words | | | | | |
| Yu et al. (2021) | Sentence | Same class as anchor | Different class as anchor | Semi- | Equal |
| Zheng et al. (2021) | Image | Same class as anchor | Different class as anchor | Weak | Equal |
| Zhang et al. (2021a) | Sentence | Sentence entails with anchor | Sentence contradicts with anchor & Random sample | Strong | Similarity |
| Suresh and Ong (2021) | Sentence | Anchor with few words replaced with synonyms / Same class as anchor | Random sample | Self / Strong | Confidence |
| Meng et al. (2021) | Textual span | Randomly cropped contiguous span | Random sample | Self | Equal |
| Zhou et al. (2022) | Sentence | Anchor with different hidden dropout | Random samples and Gaussian noise based samples | Self / Strong | Semantic similarity |
| Cao et al. (2022) | Sentence | Anchor with different hidden dropout and fast gradient sign method | Random sample | Self | Equal |
| Ours | Sentence | Same class as anchor | Random sample | Distant | Confidence & PMI |
layers with *T anh* non-linearity in between and obtain the prediction yˆi. Then, the surrogate classification loss based on cross entropy can be formalized as:
$${\mathcal{L}}_{S L P}=-{\frac{1}{2m}}\sum_{i=1}^{2m}\sum_{c=1}^{C}y_{i,c}\cdot\log{\hat{y}}_{i,c},\quad(9)$$
where yˆi,c is the predicted probability of sample xi w.r.t class c.
## B.3 Masked Language Modeling Objective
Our proposed framework also exploits a MLM objective to mitigate the effect of catastrophic forgetting of the token-level knowledge. Following Liu et al. (2019), we randomly corrupt an input sentence by replacing 15% of its tokens with
'[MASK]' tokens. Given the corrupted input sequence, we then train our model to predict original tokens at masked positions. Formally, given an input sequence, xi = {t1*, . . . , t*n}, the loss function of MLM is formulated as:
$$\mathcal{L}_{MLM}=-\frac{1}{2m}\sum_{i=1}^{2m}\sum_{t_{j}\in mk(x_{i})}\log(p(t_{j}|t_{cor(x_{i})})),\tag{10}$$
where mk(xi) indicates the set of masked tokens of the input sequence xi and cor(xi) denotes the corrupted input sequence xi.
## B.4 Epoch-Wise Re-Pairing
Rather than augmenting a batch D with using some data augmentation technique, in our framework, the positive sample xm+i of the anchor xiis a sample that uses the same emoji. To alleviate any potential noise in our distant labels, we introduce an epochwise re-pairing (EpW-RP) mechanism where the pairing of a positive sample with a given anchor is not fixed for epochs: at the beginning of each epoch, we flexibly re-pair the anchor with a new positive pair xm+i randomly re-sampled from the whole training dataset using the same emoji as xi.
This ensures that each anchor in a given batch will have at least one positive sample.20
## C Data C.1 Representation Learning Data And Pre-Processing.
Emoji Pre-Training Dataset. We normalize tweets by converting user mentions and hyperlinks to 'USER' and 'URL', respectively. We keep all the tweets, retweets, and replies but remove the
'RT USER:' string in front of retweets. We filter out short tweets (< 5 actual English word without counting the special tokens such as hashtag, emoji, USER, URL, and RT) to ensure each tweet contains sufficient context. Following previous works (Felbo et al., 2017; Barbieri et al., 2018; Bamman and Smith, 2015), we only keep the tweet that contains only a unique type of emoji (regardless of the number of emojis) and that uses a emoji at the end of the tweet. We then extract the emoji as a label of the tweet and remove the emoji from the tweet. We exclude emojis occurring less than 200 times, which gives us a set of 1, 067 emojis in 32M tweets. Moreover, we remove few tweets overlapped with Dev and Test sets of our evalua-20Note that each sample in the training dataset is used only once at each epoch, either as the anchor or as a positive sample of the anchor.
tion tasks by Twitter ID and string matching. We refer to this dataset as TweetEmoji-EN and split it into a training (31M) and validation (1M) set.
Hashtag Pre-Training Dataset. We also explore using hashtags as surrogate labels for InfoDCL
training. Following our data pre-processing procedure on TweetEmoji-EN, we randomly extract 300M English tweets each with at least one hashtags from a larger in-house dataset collected between 2014 and 2020. We only keep tweets that contain a single hashtag used at the end. We then extract the hashtag as a distant label and remove it from the tweet. We exclude hashtags occurring less than 200 times, which gives us a set of 12, 602 hashtags in 13M tweets. We refer to this dataset as TweetHashtag-EN and split the tweets into a training set (12M) and a validation (1M) set.
## Multilingual Emoji Pre-Training Dataset. We
collect a multilingual dataset to train multilingual models with our proposed framework. We apply the same data pre-processing and filtering conditions used on English data, and only include tweets that use the 1, 067 emojis in TweetEmojiEN. We obtain 1M tweets from our in-house dataset for three languages, i.e., Arabic, Italian, and Spanish.21 We refer to these datasets as TweetEmoji-AR, TweetEmoji-IT, and TweetEmoji-ES. We also randomly extract 1M
English tweets from our TweetEmoji-EN and refer to is as TweetEmoji-EN-1M. We then combine these four datasets and call the combined dataset TweetEmoji-Multi.
## C.2 Evaluation Data
In-Domain Datasets. We collect 16 English witter datasets representing eight different SM tasks to evaluate our models, including (1) crisis awareness task (Olteanu et al., 2014), (2) emotion recognition (Mohammad et al., 2018), (3) hateful and offensive language detection (Waseem and Hovy, 2016; Davidson et al., 2017; Basile et al.,
2019; Zampieri et al., 2019a), (4) humor identification (Meaney et al., 2021), (5) irony and sarcasm detection (Hee et al., 2018; Riloff et al., 2013; Ptácek et al., 2014; Rajadesingan et al., 2015; Bamman and Smith, 2015), (6) irony type identification (Hee et al., 2018) (7) sentiment analysis (Thelwall et al.,
2012; Rosenthal et al., 2017), and (8) stance detection (Mohammad et al., 2016). We present the 21However, we were only able to obtain 500K Italian tweets satisfying our conditions.
distribution, the number of labels, and the short name of each dataset in Table 7.
Out-of-Domain Datasets. We evaluate our model on downstream SM tasks from diverse social media platforms and domains. For emotion recognition task, we utilize (1) PsychExp (Wallbott and Scherer, 1986), a seven-way classification dataset of self-described emotional experiences created by psychologists, and (2) GoEmotion (Demszky et al.,
2020), a dataset of Reddit posts annotated with 27 emotions (we exclude neutral samples). For sarcasm detection task, we use two datasets from the Internet Argument Corpora (Walker et al., 2012; Oraby et al., 2016) that posts from debate forums.
For sentiment analysis, we utilize (1) five-class and binary classification versions of the Stanford Sentiment Treebank (Socher et al., 2013) (SST-5 and SST-2) that include annotated movie reviews with sentiment tags, (2) movie review (MR) for binary sentiment classification (Pang and Lee, 2005),
and (3) SentiStrength for YouTube comments (SSYouTube) (Thelwall et al., 2012).
Multilingual Datasets. As explained, to evaluate the effectiveness of our framework on different languages, we collect nine Twitter tasks in three languages: Arabic, Italian, and Spanish. For each language, we include three emotion-related tasks, (1)
emotion recognition(Abdul-Mageed et al., 2020; Bianchi et al., 2021; Mohammad et al., 2018),
(2) irony identification (Ghanem et al., 2019; Cignarella et al., 2018; Ortega-Bueno et al., 2019),
and (3) offensive language/hate speech detection (Mubarak et al., 2020; Bosco et al., 2018; Basile et al., 2019).
Few-Shot Data. We conduct our few-shot experiments only on our English language downstream data. We use different sizes from the set {20, 100, 500, 1, 000} sampled randomly from the respective Train splits of our data. For each of these sizes, we randomly sample five times with replacement (as we report the average of five runs in our experiments). We also run few-shot experiments with varying percentages of the Train set of each task
(i.e., 1%, 5%, 10%, 20% . . . 90%). We randomly sample **five** different training sets for each percentage, evaluate each model on the original Dev and Test sets, and average the performance over five runs.
| Task | Study | Cls | Domain | Lang | Data Split | |
|------------------|-------------------------------------|-------|----------------|--------|----------------------|-------------------|
| (Train/Dev/Test) | % of Emoji Samples (Train/Dev/Test) | | | | | |
| CrisisOltea | Olteanu et al. (2014) | 2 | Twitter | EN | 48,065/6,008/6,009 | 0.01/0.02/0.00 |
| EmoMoham | Mohammad et al. (2018) | 4 | Twitter | EN | 3,257/374/1,422 | 11.39/27.81/23.43 |
| HateWas | Waseem and Hovy (2016) | 3 | Twitter | EN | 8,683/1,086/1,086 | 2.23/2.03/2.76 |
| HateDav | Davidson et al. (2017) | 3 | Twitter | EN | 19,826/2,478/2,479 | 0.00/0.00/0.00 |
| HateBas | Basile et al. (2019) | 2 | Twitter | EN | 9,000/1,000/3,000 | 6.50/1.50/11.57 |
| HumorMea | Meaney et al. (2021) | 2 | Twitter | EN | 8,000/1,000/1,000 | 0.55/0.00/1.00 |
| IronyHee-A | Hee et al. (2018) | 2 | Twitter | EN | 3,450/384/784 | 10.58/10.94/11.22 |
| IronyHee-B | Hee et al. (2018) | 4 | Twitter | EN | 3,450/384/784 | 10.58/10.94/11.22 |
| OffenseZamp | Zampieri et al. (2019a) | 2 | Twitter | EN | 11,916/1,324/860 | 11.43/10.88/13.37 |
| SarcRiloff | Riloff et al. (2013) | 2 | Twitter | EN | 1,413/177/177 | 5.38/3.39/4.52 |
| SarcPtacek | Ptácek et al. (2014) | 2 | Twitter | EN | 71,433/8,929/8,930 | 4.34/4.36/4.92 |
| SarcRajad | Rajadesingan et al. (2015) | 2 | Twitter | EN | 41,261/5,158/5,158 | 16.94/18.01/17.10 |
| SarcBam | Bamman and Smith (2015) | 2 | Twitter | EN | 11,864/1,483/1,484 | 8.47/8.29/9.64 |
| SentiRosen | Rosenthal et al. (2017) | 3 | Twitter | EN | 42,756/4,752/12,284 | 0.00/0.00/6.59 |
| SentiThel | Thelwall et al. (2012) | 2 | Twitter | EN | 900/100/1,113 | 0.00/0.00/0.00 |
| StanceMoham | Mohammad et al. (2016) | 3 | Twitter | EN | 2,622/292/1,249 | 0.00/0.00/0.00 |
| EmoWall | Wallbott and Scherer (1986) | 7 | Questionnaire | EN | 900/100/6,481 | 0.00/0.00/0.00 |
| EmoDem | Demszky et al. (2020) | 27 | Reddit | EN | 23,486/2,957/2,985 | 0.00/0.00/0.00 |
| SarcWalk | Walker et al. (2012) | 2 | Debate Forums | EN | 900/100/995 | 0.00/0.00/0.00 |
| SarcOra | Oraby et al. (2016) | 2 | Debate Forums | EN | 900/100/2,260 | 0.00/0.00/0.10 |
| Senti-MR | Pang and Lee (2005) | 2 | Moview reviews | EN | 8,529/1,066/1,067 | 2.01/1.76/1.84 |
| Senti-YT | Thelwall et al. (2012) | 2 | Video comments | EN | 900/100/1,142 | 0.00/0.00/0.00 |
| SST-5 | Socher et al. (2013) | 5 | Moview reviews | EN | 8,544/1,100/2,209 | 0.00/0.00/0.00 |
| SST-2 | Socher et al. (2013) | 2 | Moview reviews | EN | 6,919/871/1,820 | 0.00/0.00/0.00 |
| EmoMag | Abdul-Mageed et al. (2020) | 8 | Twitter | AR | 189,902/910/941 | 16.58/25.27/25.40 |
| EmoBian | Bianchi et al. (2021) | 4 | Twitter | IT | 1,629/204/204 | 27.62/28.43/32.84 |
| Emo-esMoham | Mohammad et al. (2018) | 4 | Twitter | ES | 4,541/793/2,616 | 23.67/21.94/22.71 |
| HateBos | Bosco et al. (2018) | 2 | Twitter | IT | 2,700/300/1,000 | 1.93/1.67/1.50 |
| Hate-esBas | Basile et al. (2019) | 2 | Twitter | ES | 4,500/500/1,600 | 11.07/10.00/7.63 |
| IronyGhan | Ghanem et al. (2019) | 2 | Twitter | AR | 3,621/403/805 | 8.62/9.68/7.95 |
| IronyCig | Cignarella et al. (2018) | 2 | Twitter | IT | 3,579/398/872 | 1.68/2.01/5.50 |
| IronyOrt | Ortega-Bueno et al. (2019) | 2 | Twitter | ES | 2,160/240/600 | 11.94/15.00/10.00 |
| OffenseMub | Mubarak et al. (2020) | 2 | Twitter | AR | 6,839/1,000/2,000 | 38.79/36.50/38.75 |
| AGNews | Corso et al. (2005) | 4 | News | EN | 108,000/12,000/7,600 | 0.00/0.00/0.00 |
| TopicDao | Daouadi et al. (2021) | 2 | Twitter | EN | 11,943/1,328/5,734 | 0.00/0.00/0.00 |
Topic Classification Datasets. To investigate the generalizability of our models, we evaluate our models on two topic classifcation datasets: AGNews (Corso et al., 2005) and TopicDao (Daouadi et al., 2021). Given a news title and a short description, AGNews classifies the input text into four categories, including world, sports, business, and Sci/Tech. TopicDao identifies if a given tweet is related to politics or not. The data distribution is presented in Table 7.
SentEval. We utilize SentEval benchmark (Conneau and Kiela, 2018)
22, a toolkit for evaluating the quality of sentence representations, to evaluate on seven semantic textual similarity (STS)
datasets and eight transfer learning datasets. Seven STS datasets include STS 2012-2016 (Agirre et al., 2012, 2013, 2014, 2015, 2016), SICKRelatedness (Marelli et al., 2014), and STS Benchmark (Cer et al., 2017). Eight transferring classification datasets consist of four sentiment analysis
(i.e., movie review (MR) (Pang and Lee, 2005),
product review (CR) (Hu and Liu, 2004), SST2, and SST5 (Socher et al., 2013)), subjectivity detection (SUBJ) (Pang and Lee, 2004), opinion polarity
(MPQA) (Wiebe et al., 2005), question-type classification (TREC) (Voorhees and Tice, 2000), and paraphrase detection (MRPC) (Dolan and Brockett, 2005). The data distribution and evaluation metrics are presented in Table 8. The STS datasets only have test set since they do not train any model.
Tasks of MR, CR, SUBJ and MPQA are evaluated by nested 10-fold cross-validation, TREC and MRPC use cross-validation, and two SST datasets have standard development and test sets.
## D Experiment D.1 Implementation
For experiments on English language datasets, we initialize our model with a pre-trained English RoBERTaBase (Liu et al., 2019) model from Hug-
| Task | Train | Dev | Test | Metric |
|--------|---------|-------|--------|----------|
| STS12 | - | - | 3.1K | spearman |
| STS13 | - | - | 1.5K | spearman |
| STS14 | - | - | 3.7K | spearman |
| STS15 | - | - | 8.5K | spearman |
| STS16 | - | - | 9.2K | spearman |
| SICK-R | - | - | 1.4K | spearman |
| STS-B | - | - | 4.9K | spearman |
| MR | 10.6K | - | 10.6K | accuracy |
| CR | 3.7K | - | 3.7K | accuracy |
| SUBJ | 10.0K | - | 10.0K | accuracy |
| MPQA | 10.6K | - | 10.6K | accuracy |
| SST2 | 67.3K | 872 | 1.8K | accuracy |
| SST5 | 8.5K | 1.1K | 2.2K | accuracy |
| TREC | 5.5K | - | 500 | accuracy |
| MRPC | 4.1K | - | 1.7K | accuracy |
gingface's Transformers (Wolf et al., 2020) library.
RoBERTaBase consists of 12 Transformer Encoder layers, 768 hidden units each, 12 attention heads, and contains 110M parameters in entire model. RoBERTa uses a byte-pair-encoding vocabulary with a size of 50, 265 tokens. RoBERTa was pretrained on large English corpora (e.g., Bookcorpus) with the MLM objective. In accordance with convention (Liu et al., 2019; Gao et al., 2021),
we pass the hidden state corresponding to the
'[CLS]' token from the last layer through a feedforward layer with hidden size of 768 and a hyperbolic tangent function and, then, use the output as the sentence-level embedding, hi. For the classification objective, we feed hiinto a feedforward layer with hidden size of 1, 067 23, a softmax function and a dropout of 0.1. For multilingual experiments, we utilize the pre-trained XLM-RoBERTaBase model24 (Conneau et al., 2020)
as our initial checkpoint. XLM-RBase has the same architecture as RoBERTa. XLM-R includes a vocabulary of 250, 002 BPE tokens for 100 languages and is pre-trained on 2.5TB of filtered CommonCrawl.
We fine-tune pre-trained models on each downstream task for five times with different random seeds and report the averaged model performance.
Our main metric is macro-averaged F1 score. To evaluate the overall ability of a model, we also report an aggregated metric that averages over the 16 Twitter datasets, eight out-of-domain tasks, and the 23The number of Emoji classes is 1, 067.
24For short, we refer to the official released XLMRoBERTaBase as XLM-R in the rest of the paper.
nine multi-lingual Twitter datasets, respectively.
NPMI weighting matrix. We randomly sample 150M tweets from the 350M tweets with at least one emoji each. We extract all emojis in each tweet and count the frequencies of emojis as well as cooccurrences between emojis. To avoid noisy relatedness from low frequency pairs, we filter out emoji pairs, (yi, ya), whose co-occurrences are less than 20 times or 0.02× frequency of yi. We employ Eq. 8 to calculate NPMI for each emoji pair.
Similarly, we calculate the NPMI weighting matrix using 150M with at least one hashtag each and filtering out low frequency pairs.
## D.2 Baselines
We compare our proposed framework against 11 strong baselines, which we describe here. (1)
RoBERTa: The original pre-trained RoBERTa, fine-tuned on downstream tasks with standard cross-entropy loss. **(2) MLM:** We continue pretraining RoBERTa on our pre-training dataset
(TweetEmoji-EN for emoji-based experiment and TweetHashtag-EN for hashtag-based experiment) with solely the MLM objective in Eq. 10
(Appendix B.3), then fine-tune on downstream tasks. **(3) Emoji-Based MLM (E-MLM):** Following Corazza et al. (2020), we mask emojis in tweets and task the model to predict them, then finetune on downstream tasks.25 **(4) SLP.** A RoBERTa model fine-tuned on the *surrogate label prediction* task (e.g., emoji prediction) (Zhang and AbdulMageed, 2022) with cross-entropy loss, then finetuned on downstream tasks. **Supervised Contrastive Learning:** We also compare to state-ofthe-art supervised contrastive fine-tuning frameworks. We take the original pre-trained RoBERTa and fine-tune it on each task with **(5) SCL** (Gunel et al., 2021) and **(6) LCL** (Suresh and Ong, 2021),
respectively. Both works combine supervised contrastive loss with standard cross-entropy as well as augmentation of the training data to construct positive pairs. We follow the augmentation technique used in Suresh and Ong (2021), which replaces 30% of words in the input sample with their synonyms in WordNet dictionary (Miller, 1995). **SelfSupervised Contrastive Learning.** We further train RoBERTa on different recently proposed selfsupervised contrastive learning frameworks. (7)
25For hashtag-based experiment, we adapt this method to masking hashtags in tweets and refer to it as Hashtag-based MLM (H-MLM).
SimCSE-Self. Gao et al. (2021) introduce SimCSE
where they produce a positive pair by applying different dropout masks on input text twice. We similarly acquire a positive pair using the same droput method. **(8) SimCSE-Distant.** Gao et al.
(2021) also propose a supervised SimCSE that utilizes gold NLI data to create positive pairs where an anchor is a premise and a positive sample is an entailment. Hence, we adapt the supervised SimCSE framework to our distantly supervised data and construct positive pairs applying our epochwise re-pairing strategy. Specifically, each anchor has one positive sample that employs the same emoji as the anchor in a batch. **(9) MirrorBERT.** (Liu et al., 2021a) construct positive samples in Mirror-BERT by random span masking as well as different dropout masks. After contrastive learning, sentence-encoder models are finetuned on downstream tasks with the cross-entropy loss. **(10) Weakly-supervised Contrastive Learning.** We simplify and adapt the WCL framework of Zheng et al. (2021) to language: We first encode unlabelled tweets to sequence-level representation vectors using the hidden state of the '[CLS]'
token from the last layer of RoBERTa. All unlabelled tweets are clustered by applying k-means to their representation vectors. We then use the cluster IDs as weak labels to perform an SCL to pull the tweets assigned to the same cluster closer.
Following Zheng et al. (2021), we also include an SSCL loss by augmenting the positive sample of an anchor using random span as well as dropout masking. We jointly optimize the SCL and SSCL losses in our implementation. **(11) DomainSpecific PLM (BTw):** We compare to the SoTA
domain-specific PLM, BERTweet (Nguyen et al.,
2020). BERTweet was pre-trained on 850M tweets with RoBERTaBase architecture. We download the pre-trained BERTweet checkpoint from Huggingface's Transformers (Wolf et al., 2020) library and fine-tune it on each downstream task with crossentropy loss. More details about hyper-parameters of these baselines are in Appendix D.3.
## D.3 Hyper-Parameters
InfoDCL Training Hyper-Parameters. For hyper-parameter tuning of our proposed InfoDCL
framework, we randomly sample 5M tweets from the training set of our TweetEmoji-EN. We continue training the pre-trained RoBERTa for three epochs with Adam optimizer with a weight decay of 0.01 and a peak learning rate of 2e − 5.
The batch size is 128, and the total number of input samples is 256 after constructing positive pairs. As Gao et al. (2021) find contrastive learning is not sensitive to the learning rate nor batch size when further training a PLM, we do not finetune these (i.e., the learning rate and batch size)
in this paper. Following (Liu et al., 2019), we mask 15% of tokens for our MLM objective. We fine-tune the loss scaling weights λ1 in a set of
{0.1, 0.3, 0.4}, λ2 in a set of {0.1, 0.3, 0.5}, and γ in a set of {0.1, 0.3, 0.5, 0.7, 0.9}. To reduce search space, we use the same temperature value for the τ in Eq. 3 and Eq. 4 and fine-tune in a set of
{0.1, 0.3, 0.5, 0.7, 0.9}. We use grid search to find the best hyper-parameter set and evaluate performance on the Dev set of the 15 English language Twitter datasets (excluding SentiThel). 26 We select the best hyper-parameter set that achieves the best macro-F1 averaged over the 15 downstream tasks. Our best hyper-parameter set is λ1 = 0.3, λ2 = 0.1, γ = 0.5, and τ = 0.3. As Figure 4 shows, our model is not sensitive to changes of these hyper-parameters, and we observe that all the differences are less than 0.45 comparing to the best hyper-parameter set. Finally, we continue training RoBERTa/BERTweet on the full training set of TweetEmoji-EN with InfoDCL framework and best hyper-parameters. We train InfoDCL model for three epochs and utilize 4 Nvidia A100 GPU
(40GB each) and 24 CPU cores. Each epoch takes around 7 hours.
Downstream Task Fine-Tuning HyperParameters. Furthermore, we take the model trained with the best hyper-parameters and search the best hyper-parameter set of downstream task fine-tuning. We search the batch size in a set of {8, 16, 32, 64} and the peak learning rate in a set of {2e − 5, 1e − 5, 5e − 6}. We identify the best fine-tuning hyper-parameters based on the macro-F1 27 on Dev sets averaged over the 16 English language Twitter datasets. Our best hyper-parameters for fine-tuning is a learning rate of 1e − 5 and a batch size of 32. For all the downstream task fine-tuning experiments in this paper, we train a model on the task for 20 epochs with early stop (*patience* = 5 epochs). We use the 26We fine-tune the learned model on each downstream task with an arbitrary learning rate of 5e − 6, a batch size of 16, and a training epoch of 20. The performance is macro-F1 over three runs with random seeds.
27We run three times and use the mean of them.
![21_image_0.png](21_image_0.png)
same hyper-parameters identified in this full data setting for our few-shot learning. For each dataset, we fine-tune for five times with a different random seed every time, and report the mean macro-F1 of the five runs. Each downstream fine-tuning experiment use a single Nvidia A100 GPU (40GB)
and 4 CPU cores.
Baseline Hyper-Parameters. Our **Baseline (1)**
is directly fine-tuning RoBERTa on downstream tasks. We fine-tune Baseline (1) hyper-parameters as follows: The batch size is chosen from a set of
{8, 16, 32, 64} and the peak learning rate in a set of
{2e−5, 1e−5, 5e−6}. The best hyper-parameters for RoBERTa fine-tuning is a learning rate of 2e−5 and a batch size of 64.
For **Baseline (2-3)**, we further pre-train the RoBERTa model for three epochs (same as our InfoDCL) with the MLM objective with an arbitrary learning rate of 5e − 5 and a batch size of 4, 096.
We mask 15% of tokens in each input tweet. For Baseline (3), we give priority to masking emojis in a tweet: if the emoji tokens are less than 15%,
we then randomly select regular tokens to complete the percentage of masking to the 15%. **Baseline**
(4) is about surrogate label prediction (with emojis). We also train Baseline (4) for three epochs with a learning rate of 2e − 5 and a batch size of 4, 096. After training, models are fine-tuned on downstream tasks using the same hyper-parameters as our proposed model.
Baselines (5-7). SimCSE (Gao et al., 2021)
was trained in two setups, i.e., self-supervised and supervised by label data. We also train RoBERTa on both settings. For *self-supervised SimCSE*, we train RoBERTa on our pre-training dataset for three epochs with a learning rate of 2e − 5, a batch size of 256, and τ of 0.05. For the *distantly-supervised* SimCSE, we construct positive pairs as described in Section B.4. Similar to self-supervised SimCSE,
we train RoBERTa for three epochs with a learning rate of 2e − 5 but with a batch size of 128.
28 The pre-training of **Mirror-BERT** is similar to the pre-training of self-supervised SimCSE. We set the span masking rate of k = 3, a temperature of 0.04, a learning rate of 2e − 5, and a batch size of 256. Trained models, then, are fine-tuned on downstream tasks. For downstream task finetuning with baselines 2-7, we use the same hyperparameters identified with InfoDCL downstream task fine-tuning.
Baselines (8-9). SCL (Gunel et al., 2021) and LCL (Suresh and Ong, 2021) directly fine-tune on downstream tasks with cross-entropy loss. We reproduce these two methods on our evaluation tasks. For SCL, we follow Gunel et al. (2021) and fine-tune each task with a temperature of τ = 0.3, a SCL scaling weighting of 0.9, and a learning rate of 2e − 5. For LCL, we fine-tune each task with a temperature τ of 0.3, a LCL scaling weighting of 0.5, and a learning rate of 2e − 5.
Baselines (10). We implement WCL (Zheng et al., 2021) to continue train RoBERTa with our emoji dataset. We remove all emojis in the 31M
tweets and encode tweets using the hidden state of '[CLS]' token from the last layer of RoBERTa.
The tweets are then clustered by k-means clustering algorithm.29 For hyper-parmeter tuning of WCL, we randomly sample 5M tweets from the training set of TweetEmoji-EN and train a model for three epochs with different hyperparmeter sets. We search the number of clusters in a set of {200, 500, 1067, 2000} and temperature τ in a set of {0.1, 0.3}. To reduce the search space, we use the same temperature value for SSCL
and SCL losses. We evaluate performance on the Dev set of the 16 English language Twitter datasets 30 and find the best hyper-parameter set is k = 1067 and τ = 0.1. We then train WCL
on the TweetEmoji-EN dataset for three epochs with our best hyper-parameters and fine tune the model on 24 downstream tasks with the same hyperparameters identified for InfoDCL downstream fine-tuning.31 Baseline (11). We fine-tune BERTweet with hyperparameters utilized in (Nguyen et al., 2020)
that are a fixed learning of 1e − 5 and a batch size of 32.
InfoDCL PT (emoji) 0.3 0.1 0.5 0.3 2e − 5 128
![22_image_2.png](22_image_2.png)
InfoDCL PT (hashtag) 0.4 0.1 0.1 0.1 2e − 5 128 DCL PT (emoji) - - 0.5 0.3 2e − 5 128 DCL PT (hashtag) - - 0.1 0.1 2e − 5 128 Downstream FT - - - - 1e − 5 32 RoBERTa FT - - - - 2e − 5 64 MLM - - - - 5e − 5 4,096 E-MLM - - - - 5e − 5 4,096 SLP - - - - 2e − 5 4,096 SimCSE-Self - - - 0.05 2e − 5 256 SimCSE-Distant - - - 0.05 2e − 5 128 Mirror-BERT - - - 0.04 2e − 5 256 SCL - - - 0.30 2e − 5 32 LCL - - - 0.30 2e − 5 32 WCL - - - 0.10 2e − 5 256 BERTweet FT - - - - 1e − 5 32 λ1 λ2 γ τ **lr batch**
![22_image_0.png](22_image_0.png)
![22_image_1.png](22_image_1.png)
Multi-Lingual Experiment Hyper-Parameters.
For multi-lingual experiments, we utilize the pretrained XLM-RoBERTaBase model (Conneau et al.,
2020) as our initial checkpoint. We continue training XLM-R on multi-lingual tweets with our framework and the best hyperparameters identified for English. For the downstream fine-tuning, we use as same as the best hyperparameters identified for English tasks.
## Hahstag Experiment Hyper-Parameters. For
the hashtag-based experiments presented in Section E.4, we use the same hyper-parameter optimization set up to find the best hyper-parameter set for hashtag-based models. The best hyperparameter set for hashtag-based models is λ1 =
0.4, λ2 = 0.1, γ = 0.1, and τ = 0.1. We then use the same downstream fine-tuning hyper-parameters identified with emoji-based InfoDCL for downstream task.
## E Results E.1 **Standard Deviation And Significance Tests**
Table 10 shows the standard deviations of our emoji-based InfoDCl models and all baselines over five runs. We conduct two significance tests on our results, i.e., the classical paired student's ttest (Fisher, 1936) and Almost Stochastic Order
(ASO) (Dror et al., 2019) (better adapts to results of neural networks). As we pointed out earlier, we run each experiment five times with different random seeds. Hence, we conduct these two significance tests by inputting the obtained five evaluation scores on the Test set. Table 11 presents p-values for t-test and minimal distance ϵ at significance level of 0.01 for ASO test. We also conduct significance tests on the results of individual tasks, finding that our InfoDCL-RoBERTa significantly (p < 0.05) improves the original RoBERTa on 13 (out of 24) and 24 (out of 24) datasets based on t-test and ASO, respectively. InfoDCLRoBERTa also significantly (p < 0.05) outperforms BERTweet (the strongest baseline) on 10
(out of 24) and 15 (out of 24) tasks based on t-test and ASO, respectively.
## E.2 Comparisons To Individual Sotas.
Although the focus of our work is on producing effective representations suited to the whole class of SM tasks, rather than to one or another of these tasks, we also compare our models on each dataset to other reported task-specific SoTA
models on that particular dataset in Table 12. We compare our methods on each dataset to other reported task-specific SoTA models on that particular dataset as shown. Due to diverse metrics used in previous studies, we compare models of each task reporting the corresponding metric of the SoTA method. Some SoTA models are trained on different data splits or use different evaluation approaches (e.g., Olteanu et al. (2014) is evaluated by cross-validation). To provide meaningful comparisons, we thus fine-tune BERTweet on our splits and report against our models. Our InfoDCL-RoBERTa outperform SoTA on 11 out of 16 in-domain datasets and four out of eight out-ofdomain datasets. We achieve the best average score over 16 in-domain datasets applying our model 30We fine-tune the trained WCL model with a learning rate of 1e − 5 and a batch size of 32.
31For hashtag-based experiment, we use the same hyperparameters.
| Task | RB | MLM | E-MLM | SLP | Mir-B | Sim-Self | Sim-D | SCL | LCL | WCL | DCL | InfoDCL-R | BTw | InfoDCL-B | |
|---------------|------------|-------|---------|-------|---------|------------|---------|-------|-------|-------|-------|-------------|-------|-------------|------|
| CrisisOltea | 0.15 | 0.15 | 0.23 | 0.17 | 0.24 | 0.30 | 0.25 | 0.23 | 0.13 | 0.29 | 0.25 | 0.15 | 0.26 | 0.07 | |
| EmoMoham | 1.60 | 0.85 | 0.72 | 1.05 | 0.50 | 0.85 | 0.70 | 0.56 | 0.37 | 0.53 | 0.93 | 0.79 | 0.66 | 0.70 | |
| HateWas | 0.21 | 0.63 | 0.79 | 0.55 | 0.21 | 0.19 | 0.40 | 0.21 | 0.25 | 0.24 | 0.67 | 0.41 | 0.63 | 0.57 | |
| HateDav | 1.31 | 0.85 | 0.58 | 0.36 | 1.71 | 1.39 | 1.04 | 0.43 | 1.24 | 0.93 | 0.81 | 0.61 | 0.78 | 0.76 | |
| HateBas | 1.96 | 2.20 | 1.86 | 1.64 | 0.82 | 1.62 | 2.65 | 3.52 | 1.20 | 2.21 | 0.47 | 1.00 | 3.50 | 1.88 | |
| HumorMea | 0.47 | 0.38 | 0.65 | 0.38 | 0.38 | 0.87 | 0.59 | 0.65 | 0.66 | 0.73 | 0.19 | 0.62 | 0.15 | 0.48 | |
| IronyHee-A | 1.30 | 1.06 | 0.85 | 1.02 | 1.11 | 0.87 | 1.35 | 1.13 | 0.95 | 1.46 | 1.38 | 1.51 | 1.38 | 0.85 | |
| IronyHee-B | 1.60 | 0.63 | 2.43 | 2.38 | 0.56 | 0.84 | 2.70 | 2.03 | 1.44 | 0.89 | 1.05 | 0.53 | 2.06 | 3.19 | |
| OffenseZamp | 1.41 | 0.37 | 0.78 | 0.50 | 1.32 | 1.67 | 0.60 | 0.83 | 0.15 | 0.42 | 0.85 | 1.51 | 1.96 | 0.92 | |
| SarcRiloff | 1.47 | 1.34 | 2.58 | 1.26 | 4.32 | 2.06 | 1.86 | 2.79 | 2.03 | 1.15 | 0.85 | 1.09 | 1.69 | 1.60 | |
| SarcPtacek | 0.30 | 0.10 | 0.10 | 0.22 | 0.18 | 0.28 | 0.21 | 0.23 | 0.14 | 0.17 | 0.12 | 0.07 | 0.23 | 0.10 | |
| SarcRajad | 0.51 | 0.30 | 0.30 | 0.71 | 0.57 | 0.27 | 0.22 | 0.55 | 0.55 | 0.58 | 0.47 | 0.49 | 0.73 | 0.64 | |
| SarcBam | 0.54 | 0.61 | 0.87 | 0.38 | 0.69 | 1.18 | 0.60 | 0.83 | 0.78 | 0.36 | 0.48 | 0.39 | 0.31 | 0.71 | |
| SentiRosen | 0.93 | 1.64 | 0.35 | 0.91 | 1.06 | 0.57 | 0.67 | 1.14 | 0.40 | 0.73 | 0.76 | 0.52 | 0.40 | 0.43 | |
| SentiThel | 0.61 | 1.01 | 0.69 | 0.33 | 0.65 | 0.50 | 0.56 | 1.29 | 0.85 | 0.54 | 0.78 | 0.62 | 0.63 | 0.66 | |
| StanceMoham | 0.87 | 1.55 | 0.80 | 1.07 | 1.40 | 1.94 | 1.67 | 1.01 | 1.66 | 1.11 | 1.25 | 1.33 | 1.35 | 1.37 | |
| Average | 0.24 | 0.24 | 0.20 | 0.26 | 0.23 | 0.17 | 0.31 | 0.35 | 0.42 | 0.23 | 0.24 | 0.19 | 0.33 | 0.20 | |
| EmotionWall | 0.41 | 0.78 | 0.69 | 1.01 | 1.14 | 0.40 | 0.33 | 0.73 | 0.36 | 0.73 | 1.13 | 0.26 | 1.50 | 0.85 | |
| In-Domain | EmotionDem | 0.58 | 0.60 | 0.42 | 0.80 | 0.71 | 0.88 | 0.74 | 0.52 | 1.05 | 0.86 | 1.28 | 0.61 | 1.20 | 1.73 |
| SarcWalk | 1.29 | 1.14 | 0.99 | 0.98 | 1.25 | 4.09 | 1.01 | 0.88 | 1.19 | 0.59 | 1.66 | 1.11 | 0.69 | 0.72 | |
| SarcOra | 1.20 | 1.41 | 0.99 | 0.24 | 1.56 | 1.85 | 0.32 | 1.33 | 1.70 | 1.21 | 0.68 | 0.77 | 1.05 | 1.00 | |
| Senti-MR | 0.56 | 0.29 | 0.70 | 0.50 | 0.32 | 0.27 | 0.27 | 0.46 | 0.41 | 0.61 | 0.30 | 0.39 | 0.57 | 0.43 | |
| Senti-YT | 0.52 | 0.59 | 0.43 | 0.36 | 1.00 | 0.95 | 0.37 | 0.37 | 0.62 | 0.29 | 0.53 | 0.26 | 0.25 | 0.52 | |
| SST-5 | 0.35 | 0.56 | 0.64 | 1.18 | 0.72 | 0.55 | 0.57 | 1.06 | 0.78 | 0.79 | 0.97 | 0.64 | 0.90 | 0.53 | |
| SST-2 | 0.39 | 0.41 | 0.40 | 0.22 | 0.38 | 0.35 | 0.50 | 0.34 | 0.30 | 0.35 | 0.32 | 0.24 | 0.32 | 0.22 | |
| Average | 0.31 | 0.15 | 0.27 | 0.41 | 0.19 | 0.42 | 0.21 | 0.26 | 0.17 | 0.14 | 0.54 | 0.27 | 0.28 | 0.12 | |
| Out-of-Domain | | | | | | | | | | | | | | | |
on BERTweet. Further training RoBERTa with our framework obtains the best average score across the eight out-of-domain datasets. We note that some SoTA models adopt task-specific approaches and/or require task-specific resources. For example, Ke et al. (2020) utilize SentiWordNet to identify the sentiment polarity of each word. In this work, our focus on producing effective representations suited for the whole class of SM tasks, rather than one or another of these tasks. Otherwise, we hypothesize that task-specific approaches can be combined with our InfoDCL framework to yield even better performance on individual tasks.
| p-value (t-test) | Minimal Distance ϵ (ASO) | | | |
|----------------------|----------------------------|-----------|---------------|--------|
| In-Domain | Out-of-Domain | In-Domain | Out-of-Domain | |
| InfoDCL-RoBERTa vs. | | | | |
| RoBERTa | 0.0000 | 0.0075 | 0.0000 | 0.0000 |
| MLM | 0.0002 | 0.0020 | 0.0000 | 0.0000 |
| E-MLM | 0.0100 | 0.0410 | 0.0000 | 0.0000 |
| SLP | 0.0213 | 0.0843 | 0.0000 | 0.0011 |
| Mirror-B | 0.0000 | 0.0001 | 0.0000 | 0.0000 |
| SimSCE-self | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| SimCSE-D | 0.0818 | 0.0005 | 0.0000 | 0.0000 |
| SCL | 0.0003 | 0.0014 | 0.0000 | 0.0000 |
| LCL | 0.0003 | 0.0001 | 0.0000 | 0.0000 |
| WCL | 0.0001 | 0.0001 | 0.0000 | 0.0000 |
| BERTweet | 0.0960 | 0.0000 | 0.0000 | 0.0000 |
| InfoDCL-BERTweet vs. | | | | |
| BERTweet | 0.0076 | 0.0377 | 0.0321 | 0.0000 |
## E.3 Multilingual Tasks
We also investigate the effectiveness of our proposed model on multilingual tasks. Table 13 shows the performance on nine downstream tasks in three different languages. Here, we continue training XLM-R with our proposed objectives. We experiment with three settings: (1) English only: training on the TweetEmoji-1M and evaluating on the nine multilingual datasets, (2) Target mono-lingual: training on each 1M mono-lingual tweets in the target language independently (i.e., TweetEmoji-AR for Arabic, TweetEmoji-IT for Italian, and TweetEmoji-ES for Spanish) and evaluating on the respective dataset corresponding to the same language as training data, and (3) Multilingual:
training on the TweetEmoji-Multi dataset and evaluating on the nine multilingual datasets. We still use the NPMI weighting matrix generated from English tweets in these experiments. 32 Table 13 shows that our models outperform the original XLM-R on all the datasets and obtains improvements of 1.44 and 0.85 average F1 across the nine datasets under the multilingual and target mono-lingual settings, respectively. Training on English mono-lingual data helps four datasets, but cannot benefit all the nine non-English datasets on average. Compared to previous SoTA models, our proposed methods outperform these on six out of
![24_image_3.png](24_image_3.png)
Task Metric SoTA BTw **InfoDCL**
RB**InfoDCL**
BTw
CrisisOltea M-F1 95.60⋆ 95.76 **96.01** 95.84
EmoMoham M-F1 78.50♠ 80.23 81.34 **81.96**
HateWas W-F1 73.62⋆⋆ 88.95 88.73 **89.12**
HateDav W-F1 90.00†91.26 91.12 **91.27**
HateBas M-F1 **65.10**♡ 53.62 52.84 53.95
HumorMea M-F1 **98.54**= 94.43 93.75 94.04
(i)
170.50†† 73.99 72.10 **74.81**
IronyHee-B M-F1 50.70†† 56.73 57.22 **59.15**
OffenseZamp M-F1 **82.90**‡79.35 81.21 79.83
(s)
151.00‡‡ 66.59 65.90 **69.28**
SarcPtacek M-F1 92.37§96.40 96.10 **96.67**
SarcRajad Acc 92.94§§ 95.30 95.20 **95.32**
SarcBam Acc **85.10**∥ 81.79 81.51 83.22 SentiRosen M-Rec 72.60♠ **72.91** 72.77 72.46
SentiThel Acc 88.00♦ 89.81 **91.81** 90.67
StanceMoham Avg(a,f) 71.00♣ 71.26 **73.31** 72.09
Average - 78.65 80.52 80.68 **81.23**
EmotionWall M-F1 57.00♦ 64.48 **68.41** 65.61
| Out-of-Domain |
|-----------------|
EmotionDem W-F1 64.80⊥ 64.53 **65.16** 64.80
SarcWalk M-F1 **69.00**♦ 67.27 68.45 67.30
SarcOra M-F1 75.00♦ 77.33 **77.41** 76.88
Senti-MR Acc **90.82**♭87.95 89.43 88.21
Senti-YT Acc 93.00♦ 93.24 93.12 **93.47**
SST-5 Acc **58.59**♭56.32 57.74 57.23
SST-2 Acc **96.70**♮93.32 94.98 93.73
Average - 75.61 75.55 **76.84** 75.90
nine datasets. 33 These results demonstrate that our methods are not only task-agnostic within the realm of SM tasks, but also language-independent.
## E.4 Using Hashtag As Distant Supervision
As Table 14 presents, our proposed framework also can enhance the representation quality using hashtags as distantly supervised labels. InfoDCLRoBERTa, the model further training RoBERTa on the training set of TweetHashtag-EN with our framework, obtains average F1 of 77.36 and 33For Emo-esMoham, we use fine-tuning XLM-R as SoTA
model because we convert the intensity regression task to a emotion classification and there is no SoTA model.
75.43 across the 16 in-domain and eight out-ofdomain datasets, respectively. Comapred to baselines, our DCL obtains the best performance average F1 score across 16 in-domain datasets (
F1 = 77.64). InfoDCL-BERTweet, the further pre-trained BERTweet on the training set of TweetHashtag-EN with our framework, obtains average F1 of 78.29 and 74.44 across the 16 in-domain and eight out-of-domain datasets, respectively.
## E.5 Topic Classification
We fine-tune baselines and our models on two topic classification datasets and report macro F1 scores in Table 15. We find that our hashtagbased InfoDCL model acquires best performance on both datasets, for AGNews F1 = 97.42, and for TopicDao F1 = 94.80. These results indicate that our framework can also effectively improve topic classification when we use hashtags as distant labels.
| L | Task | XLM | InfoDCL-XLMR | SoTA | |
|-------------|--------|-------|----------------|--------|--------|
| EN | Mono | Mult | | | |
| EmoMag | 72.23 | 72.08 | 72.59 | 72.56 | 60.32⋆ |
| IronyGhan | 81.15 | 78.75 | 81.85 | 82.23 | 84.40† |
| OffenseMub | 84.87 | 85.08 | 85.61 | 87.10 | 90.50‡ |
| EmoBian | 70.37 | 73.51 | 73.58 | 74.36 | 71.00§ |
| IronyCig | 73.22 | 73.52 | 74.07 | 73.42 | 73.10♠ |
| HateBos | 78.63 | 78.06 | 79.44 | 79.77 | 79.93♦ |
| Emo-esMoham | 76.61 | 76.59 | 77.29 | 77.66 | - |
| IronyOrt | 72.88 | 73.11 | 72.98 | 74.91 | 71.67♣ |
| Hate-esBas | 76.07 | 75.33 | 76.33 | 77.03 | 73.00♡ |
| Average | 76.23 | 76.23 | 77.08 | 77.67 | - |
![24_image_0.png](24_image_0.png)
![24_image_1.png](24_image_1.png)
![24_image_2.png](24_image_2.png)
## E.6 Senteval
Each STS dataset includes pairs of sentences each with a gold semantic similarity score ranging from 0 to 5. We encode each sentence by the hidden state of '[CLS]' token from the last Transformer encoder layer. We then calculate the Spearman's correlation between cosine similarity of sentence embeddings
Task RB MLM H-MLM SLP Mir-B Sim-S Sim-D WCL DCL InfoD-R BTw InfoD-B
CrisisOltea 95.87 95.75 95.74 95.96 96.12 95.88 **95.94** 95.84 95.92 **95.94** 95.76 **95.84** EmoMoham 78.76 79.17 79.70 78.85 78.67 77.58 80.55 77.33 80.36 **80.58 80.23** 80.22 HateWas 57.01 57.70 57.22 57.55 56.78 56.40 56.40 57.59 57.17 56.64 **57.32** 57.11 HateDav 76.04 76.81 **77.59** 77.40 76.71 75.81 76.75 76.82 77.44 77.17 76.93 **78.31** HateBas 47.85 50.28 **50.96** 49.11 46.26 45.90 50.22 48.04 48.93 49.99 53.62 **53.75** HumorMea 93.28 93.30 93.46 93.55 92.21 91.81 94.07 92.51 **94.64** 93.88 **94.43** 94.25
![25_image_0.png](25_image_0.png) IronyHee-A 72.87 73.05 73.68 73.87 71.64 69.76 **77.41** 72.88 76.41 75.94 77.03 **79.51** IronyHee-B 53.20 51.12 54.75 54.76 50.70 48.68 55.38 51.84 **57.36** 55.74 56.73 **58.78** OffenseZamp 79.93 79.81 79.20 **80.74** 79.73 79.74 80.56 79.53 80.55 80.65 79.35 **79.36** SarcRiloff 73.71 70.04 72.44 74.12 68.73 67.92 75.22 70.51 **75.90** 74.51 78.76 **78.83** SarcPtacek 95.99 95.99 96.15 95.99 95.57 95.20 96.07 95.68 **96.19** 95.98 96.40 **96.66** SarcRajad 85.21 85.97 85.79 85.72 84.60 83.93 86.71 85.61 86.76 **86.77** 87.13 **87.43** SarcBam 79.79 80.32 80.84 80.09 78.95 78.31 **81.45** 79.79 81.24 80.33 81.76 **83.87** SentiRosen 89.55 89.59 90.20 89.05 87.33 85.58 90.35 88.34 90.76 **90.93** 89.53 **89.59**
SentiThel 71.41 **72.19** 71.72 71.81 71.12 70.66 **72.19** 71.63 71.71 71.93 71.64 **71.82**
StanceMoham 69.44 69.95 70.34 69.77 65.47 64.76 70.16 68.80 **70.87** 70.73 **68.33** 67.30
Average 76.24 76.31 76.86 76.77 75.04 74.25 77.46 75.80 **77.64** 77.36 77.81 **78.29**
EmotionWall 66.51 66.41 67.34 65.27 63.92 62.19 **68.37** 63.45 67.78 67.74 64.48 **64.64** EmotionDem 56.59 56.19 56.50 56.00 56.15 56.20 **56.68** 55.78 56.24 55.76 53.33 **55.61** SarcWalk 67.50 67.90 **68.66** 65.06 63.65 66.15 67.48 66.87 66.53 68.44 67.27 **67.86**
SarcOra 76.92 77.41 76.06 76.85 75.37 76.34 76.82 76.44 77.38 **77.77 77.33** 77.04
Senti-MR 89.00 89.90 89.48 88.96 88.86 88.73 **90.29** 88.94 90.14 90.12 87.94 **88.06** Senti-YT 90.22 90.65 90.40 90.19 89.59 87.74 91.81 90.44 91.68 **92.16** 92.25 **92.65**
SST-5 54.96 55.92 55.52 55.69 55.00 54.35 56.26 54.18 55.40 **56.33** 55.74 **55.97**
SST-2 94.57 94.69 94.34 94.39 93.76 93.07 94.14 94.12 94.42 **95.15** 93.32 **93.72** Average 74.53 74.88 74.79 74.05 73.29 73.10 75.23 73.78 74.94 **75.43** 73.96 **74.44**
In-Domain Out-of-Domain
Emoji-based Hashtag-based
Model AGN Topic Ave Model AGN Topic Ave
RB 96.97 **94.75** 95.86 - - - -
MLM 97.00 94.58 95.79 MLM 97.01 94.78 95.89 E-MLM 96.97 94.73 95.85 E-MLM 97.13 94.66 95.90
SLP 97.12 94.54 95.83 SLP 97.04 94.63 95.84
Mir-B 96.86 94.72 95.79 Mir-B 97.13 94.66 95.90 Sim-S 96.88 94.73 95.81 Sim-S 96.90 94.65 95.78
Sim-D 97.08 94.70 **95.89** Sim-D 97.30 94.79 96.04
WCL **97.13** 94.65 **95.89** WCL 97.09 94.56 95.83
DCL 97.08 94.59 95.84 DCL 97.23 94.64 95.93
InfoD-R 97.01 94.48 95.74 InfoD-R **97.42 94.80 96.11**
BTw 97.00 94.43 95.72 - - - -
InfoD-B **97.05 94.47 95.76** InfoD-B **97.26 94.49 95.87**
and the gold similarity score of each pair. Same as Mirror-BERT (Liu et al., 2021a) and SimCSE (Gao et al., 2021), we report the overall Spearman's correlation. For transfer learning tasks, we follow the evaluation protocal of SentEval, where a trainable logistic regression classifier is added on top of a frozen encoder that is an PLM. We report classification accuracy of eight transfer learning datasets in Tables 16. Although our InfoDCL underperforms Mirror-BERT on all STS datasets, but it still outperforms than Baseline 1, 2, and 3. Our InfoDCL
is not designed to improve STS task but it does not hurt performance compared to Baseline 2. Moreover, our InfoDCL achieves the best average performance on eight transferring datasets. We note that four datasets are SM tasks. Only regarding the other four non-SM tasks, our InfoDCL model still outperforms most baselines and achieves the second best performance on average, which is only 0.40 F1 points lower than Mirror-BERT.
![25_image_1.png](25_image_1.png)
Task RB MLM E-MLM SLP Mir-B Sim-S Sim-D WCL DCL InfoD-R BTw InfoD-B
STS12 15.88 37.71 34.55 50.07 **59.07** 54.18 46.13 34.81 46.46 48.13 29.20 **42.54**
STS13 38.11 55.72 53.90 53.87 **69.89** 65.06 45.99 37.56 47.24 51.44 36.26 **44.40** STS14 28.58 40.16 40.86 44.88 **63.82** 59.18 43.20 24.51 42.76 46.79 33.76 **38.95**
STS15 40.22 59.49 56.35 61.83 **73.78** 70.30 52.76 50.36 49.11 58.04 49.19 **54.67**
STS16 50.12 62.13 65.12 58.41 **74.20** 70.45 51.17 36.33 45.39 57.09 46.99 **49.42** SICK-R 62.54 64.42 63.48 64.21 **64.29** 63.53 57.14 47.22 56.93 62.81 48.76 **59.15** STS-B 46.63 56.00 58.50 59.93 **68.75** 64.49 53.00 42.24 50.64 56.65 38.24 **52.46**
Average 40.30 53.66 53.25 56.17 **67.69** 63.88 49.91 39.00 48.36 54.42 40.34 **48.80**
MR 75.92 76.85 80.62 86.79 76.72 73.77 86.04 78.96 **86.83** 86.66 79.58 **86.12**
CR 69.59 77.35 84.79 89.69 81.48 80.19 89.48 83.74 **90.36** 89.75 80.82 **89.62**
SUBJ 91.50 90.63 91.01 92.24 91.57 90.29 91.24 92.91 92.61 **93.71** 93.03 **93.53** MPQA 73.75 80.40 78.54 **87.93** 85.39 83.92 87.18 85.30 87.51 87.12 71.78 **86.21** SST2 82.81 85.50 88.14 92.53 81.05 78.69 91.87 85.28 91.43 **92.59** 86.66 **91.10**
SST5 38.46 41.81 46.65 52.31 44.48 41.45 48.60 43.48 50.77 **53.08** 43.71 **52.13**
TREC 61.40 73.20 72.20 78.60 **87.00** 86.00 74.60 84.20 75.80 83.00 80.80 **83.40** MRPC 71.42 73.04 74.09 74.61 **74.67** 74.49 71.59 71.88 71.54 73.22 **72.35** 72.00
Average 70.61 74.85 77.01 81.84 77.80 76.10 80.08 78.22 80.86 **82.39** 76.09 **81.76**
## E.7 Few Shot Learning
Since InfoDCL exploits an extensive set of cues in the data that capture a broad range of finegrained SM concepts, we hypothesize it will be also effective in few-shot learning. Hence, we test this hypothesis for both in-domain and outof-domain tasks. Figure 5 and Table 19 compare our models to three strong baselines when they are trained with different percentages of training samples. Results show that our proposed InfoDCL model always outperforms all baselines on average F1 scores across both in-domain and out-of-domain tasks. For 16 in-domain tasks, our InfoDCL-RoBERTa remarkably surpasses the RoBERTa baseline with a sizable 12.82 average F1 scores when we only provide 1% training data from downstream tasks. Compared to other strong baselines, fine-tuning BERTweet and SimCSEDistant (also our method), InfoDCL-RoBERTa outperforms these with 12.91 and 3.55 average F1 scores, respectively, when we use 1% training data for downstream fine-tuning. With only 5%
of gold data, InfoDCL-RoBERTa improves 5.76 points over the RoBERTa baseline. For eight outof-domain tasks, InfoDCL-RoBERTa outperforms the RoBERTa, BERTweet, and SimCSE-Distant baselines with 16.23, 15.52, and 2.89 average F1 scores, respectively, when the models are only finetuned on 1% training data of downstream tasks. As Figure 5b and Table 19 show, InfoDCL-RoBERTa consistently outperforms all the baselines given any percentage of training data. Tables 20, 21, 22, 23, 24, and 25, respectively, present the performance of RoBERTa, BERTweet, SimCSE-Distant, DCL, InfoDCL-RoBERTa, InfoDCL-BERTweet on all
## F Analyses F.1 Model Analysis
Table 17 shows that both PMI and EC-Emb are capable of capturing sensible correlations between emojis (although the embedding approach includes a few semantically distant emojis, such as the emoji
' ' being highly related to ' ').
## F.2 Qualitative Analysis
We provide a qualitative visualization analysis of our model representation. For this purpose, we use our InfoDCL-RoBERTa to obtain representations of samples in the TweetEmoji-EN's validation set ('[CLS]' token from the last encoder layer) then average the representations of all tweets with the same surrogate label (emoji). We then project these emoji embeddings into a two-dimensional space using t-SNE. As Fig. 6 shows, we can observe a number of distinguishable clusters. For instance, a cluster of love and marriage is grouped in the left region, unhappy and angry faces are in the right side, and food at the bottom. We can also observe sensible relations between clusters. For instance, the cluster of love and marriage is close to the cluster of smiling faces but is far away from the cluster of unhappy faces. In addition, the cluster of aquatic animals (middle bottom) is close to terrestrial animals while each of these is still visually distinguishable. We also note that emojis which contain the same emoji character but differ in skin tone are clustered together. An example of these is emojis of Santa Claus (left bottom). This indicates
Q Method 1 2 3 4 5 6 7 8 9 10
PMI .11 .11 .10 .10 .10 .10 .10 .09 .09 .07
E-em .34 .32 .31 .28 .28 .28 .28 .27 .27 .26 PMI .67 .67 .66 .66 .62 .62 .61 .55 .54 .46
E-em .36 .36 .36 .36 .36 .35 .35 .34 .34 .33
PMI .65 .53 .53 .52 .52 .50 .49 .45 .45 .43
E-em .36 .34 .34 .34 .34 .32 .32 .32 .32 .32
that our InfoDCL model has meticulously captured the relations between the emoji surrogate labels.
## G Uniformity And Tolerance
Wang and Liu (2021) investigate representation quality measuring the uniformity of an embedding distribution and the tolerance to semantically similar samples. Given a dataset D and an encoder Φ, the uniformity is based on a gaussian potential kernel introduced by Wang and Isola (2020) and is formulated as:
$$\mathcal{L}_{uniformity}=log\underset{x_{i},x_{j}\in D}{\mathbb{E}}\left[e^{-t||\Phi(x_{i})-\Phi(x_{j})||_{2}^{2}}\right],\tag{13}$$
(11)
where t = 2. Wang and Liu (2021) use
−L*unif ormity* as the uniformity metric, thus a higher uniformity score indicates that the embedding distribution is closer to a uniform distribution.
The tolerance metric measures the mean of similarities of samples belonging to the same class, which defined as:
T olerance = E
xi,xj∈D
[(Φ(xi)
T Φ(xj ))·Il(xi)=l(xj )], (12)
where l(xi) is the supervised label of sample xi. Il(xi)=l(xj )is an indicator function, giving the value of 1 for l(xi) = l(xj ) and the value of 0 for l(xi) ̸= l(xj ). In our experiments, we use gold development samples from our downstream SM
datasets.
![28_image_0.png](28_image_0.png)
Task InfoDCL A B C D E F G H I
CrisisOltea 96.01 95.91 95.88 95.91 95.83 95.96 95.92 95.75 95.96 95.79
![28_image_1.png](28_image_1.png)
![28_image_2.png](28_image_2.png)
EmoMoham 81.34 82.31 82.03 80.98 80.06 81.28 80.54 81.27 82.11 81.49
HateWas 57.30 57.13 57.09 57.03 57.30 57.24 57.14 56.89 57.08 57.12 HateDav 77.29 76.82 77.88 77.59 76.74 76.11 76.79 77.69 77.40 77.15
HateBas 52.84 51.77 52.39 51.90 52.79 51.26 52.17 51.67 53.63 50.97
HumorMea 93.75 93.08 93.62 93.17 94.23 93.64 94.13 93.26 93.87 93.78
IronyHee-A 76.31 76.41 77.14 77.11 74.99 78.19 77.15 76.95 76.55 76.18
IronyHee-B 57.22 55.88 57.60 56.01 53.98 58.69 57.48 56.51 57.62 56.00 OffenseZamp 81.21 80.49 81.13 80.97 80.45 79.01 79.94 81.05 80.40 81.61
SarcRiloff 78.31 76.26 76.78 77.44 74.81 78.09 79.26 77.76 78.22 76.14
SarcPtacek 96.10 95.96 95.85 96.18 95.84 96.45 96.13 95.94 96.10 96.20 SarcRajad 87.00 86.54 86.63 86.69 86.79 87.61 87.45 86.85 86.66 86.63
SarcBam 81.49 81.35 81.74 81.34 80.82 83.02 81.31 81.69 81.80 81.46
SentiRosen 91.59 91.51 91.62 91.91 91.51 91.44 90.65 91.97 91.28 91.85 SentiThel 71.87 71.65 71.60 71.67 72.09 71.19 71.73 72.01 71.50 71.80
StanceMoham 71.13 71.03 70.51 71.84 69.75 70.80 69.74 70.66 70.35 70.45
Average 78.17 77.75 78.09 77.98 77.37 78.12 77.97 78.00 78.16 77.79
| Percentage | 1 | 5 | 10 | 20 | 30 | 40 | 50 | 60 | 70 | 80 | 90 | 100 |
|---------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| In-Domain | | | | | | | | | | | | |
| RoBERTa | 46.96 | 62.70 | 66.41 | 71.96 | 73.54 | 74.34 | 75.09 | 74.99 | 75.37 | 75.95 | 76.27 | 76.24 |
| BERTweet | 46.87 | 60.46 | 64.75 | 69.08 | 74.96 | 75.88 | 76.35 | 76.70 | 77.12 | 77.39 | 77.92 | 77.81 |
| Sim-D | 56.23 | 65.43 | 70.19 | 73.70 | 75.24 | 75.45 | 76.08 | 76.32 | 76.79 | 77.01 | 77.35 | 77.81 |
| DCL | 59.05 | 67.12 | 71.81 | 74.33 | 75.45 | 75.85 | 76.47 | 76.80 | 77.17 | 77.29 | 77.54 | 77.97 |
| InfoDCL-RB | 59.78 | 68.45 | 73.19 | 74.85 | 75.82 | 75.98 | 76.81 | 76.93 | 77.37 | 77.35 | 77.67 | 78.17 |
| InfoDCL-BTw | 56.06 | 65.54 | 70.24 | 74.54 | 75.84 | 76.10 | 76.68 | 76.99 | 77.42 | 77.77 | 78.11 | 78.58 |
| Out-of-Domain | | | | | | | | | | | | |
| RoBERTa | 32.62 | 50.10 | 52.38 | 67.80 | 71.41 | 72.64 | 73.44 | 73.89 | 74.16 | 74.13 | 74.53 | 74.53 |
| BERTweet | 33.33 | 48.69 | 52.01 | 58.68 | 62.52 | 69.81 | 70.67 | 71.74 | 72.32 | 73.08 | 73.48 | 73.96 |
| Sim-D | 45.96 | 55.74 | 61.32 | 69.05 | 70.74 | 72.01 | 72.80 | 73.03 | 73.94 | 74.22 | 74.36 | 74.48 |
| DCL | 49.72 | 59.60 | 65.35 | 69.64 | 71.76 | 72.79 | 73.44 | 73.59 | 74.26 | 74.36 | 74.71 | 75.37 |
| InfoDCL-RB | 48.85 | 62.06 | 67.10 | 70.75 | 72.28 | 73.45 | 74.17 | 74.44 | 74.95 | 75.22 | 75.28 | 75.54 |
| InfoDCL-BTw | 45.59 | 54.15 | 59.42 | 67.43 | 70.61 | 71.50 | 72.33 | 72.50 | 73.12 | 73.63 | 74.15 | 74.32 |
Table 19: Few-shot learning on downstream tasks where we use varying percentages of Train sets. We report the averaged Test macro-F1 score across 16 in-domain tasks and eight out-of-domain tasks, respectively. **Sim-D:**
SimCSE-Distant, RB: RoBERTa, **BTw:** BERTweet.
1 5 10 20 30 40 50 60 70 80 90 100 20 100 500 1000
| Percentage | # of Training Samples |
|--------------|-------------------------|
CrisisOltea 94.88 95.18 95.59 95.67 95.73 95.65 95.88 95.77 95.72 95.83 95.92 95.87 37.20 70.27 95.09 95.20 EmoMoham 13.39 51.63 70.83 74.20 75.45 76.42 76.59 76.70 78.00 77.85 77.40 78.76 14.21 14.68 73.85 75.49
HateWas 28.23 52.72 54.66 55.30 56.65 58.78 56.80 56.77 56.64 57.26 59.98 57.01 26.59 32.94 52.98 54.53
HateDav 38.02 71.66 73.50 74.74 76.08 76.55 76.06 77.31 77.62 76.58 77.65 76.04 30.64 30.47 67.68 71.24
HateBas 44.61 51.48 48.71 48.77 48.29 45.60 48.60 46.46 47.72 50.35 46.78 47.85 41.43 42.54 49.49 46.99
HumorMea 38.08 88.33 90.07 91.33 91.33 92.08 92.00 91.92 92.34 92.75 92.17 93.28 42.28 58.71 90.08 91.20
IronyHee-A 41.78 56.76 64.98 68.11 68.82 69.62 70.68 71.67 70.66 72.92 73.44 72.87 44.79 55.90 65.82 68.05 IronyHee-B 20.49 34.16 41.95 46.54 48.62 48.10 51.49 51.29 51.20 52.25 53.22 53.20 20.29 21.98 44.58 47.52 OffenseZamp 42.70 75.61 77.99 77.70 79.24 79.04 79.60 79.81 78.83 80.73 80.45 79.93 34.63 41.89 76.09 76.90 SarcRiloff 45.76 44.48 43.99 53.03 65.37 71.90 73.46 70.35 71.81 73.72 74.29 73.71 45.65 43.99 70.53 74.78
SarcPtacek 81.99 85.98 87.24 88.72 89.99 91.15 92.01 92.73 93.51 94.16 95.07 95.99 45.05 39.78 81.35 83.21 SarcRajad 69.83 76.95 79.45 81.02 82.07 82.34 83.48 83.36 84.29 84.19 85.21 85.21 47.42 47.01 64.09 73.27
SarcBam 62.09 73.41 75.41 76.39 77.15 77.46 78.50 78.92 79.39 78.79 79.59 79.79 43.90 61.87 73.11 75.10 SentiRosen 40.91 43.05 36.98 86.94 87.53 88.73 88.49 88.95 89.61 88.82 89.66 89.55 45.27 57.00 88.78 89.55
SentiThel 65.13 68.73 69.87 69.56 70.02 71.06 70.69 70.96 70.22 70.83 70.76 71.41 19.46 24.10 65.52 67.15 StanceMoham 23.45 33.07 51.42 63.28 64.34 65.02 67.21 66.87 68.36 68.21 68.73 69.44 24.70 27.57 61.95 65.05
Average 46.96 62.70 66.41 71.96 73.54 74.34 75.09 74.99 75.37 75.95 76.27 76.24 35.22 41.92 70.06 72.20
EmotionWall 5.54 7.10 10.44 41.46 57.69 61.02 62.59 64.16 65.74 64.83 65.76 66.51 4.19 21.06 63.93 66.50
EmotionDem 12.73 42.06 46.31 51.58 52.65 53.90 54.89 54.58 55.67 55.49 56.28 56.59 0.51 2.47 30.70 41.68
SarcWalk 40.08 34.73 43.92 62.89 63.02 66.13 66.64 67.67 67.43 67.69 68.96 67.50 35.22 51.67 67.02 67.39
SarcOra 45.66 53.56 48.87 74.78 75.47 75.19 76.55 77.27 77.02 77.40 77.07 76.92 45.92 63.66 77.69 75.42
Senti-MR 44.08 85.93 87.02 87.98 88.52 88.30 89.13 88.84 89.29 89.31 89.05 89.00 40.69 67.17 86.02 87.17 Senti-YT 40.90 40.48 40.49 78.67 88.28 90.19 89.47 90.29 89.42 89.59 89.86 90.22 45.05 43.40 89.55 90.24
SST-5 8.87 45.89 50.01 52.26 52.57 53.37 54.00 54.51 54.94 54.81 54.79 54.96 10.91 11.70 47.76 50.42
SST-2 63.12 91.09 91.99 92.75 93.11 93.05 94.28 93.78 93.74 93.89 94.49 94.57 34.08 67.80 91.44 92.50
Average 32.62 50.10 52.38 67.80 71.41 72.64 73.44 73.89 74.16 74.13 74.53 74.53 27.07 41.12 69.26 71.42
Table 20: Full results of few-shot learning on Baseline (1), fine-tuning RoBERTa.
| Percentage | # of Training Samples | | | | | | | | | | | | | | | |
|--------------|-------------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| 1 | 5 | 10 | 20 | 30 | 40 | 50 | 60 | 70 | 80 | 90 | 100 | 20 | 100 | 500 | 1000 | |
| CrisisOltea | 93.47 | 95.07 | 95.42 | 95.40 | 95.53 | 95.55 | 95.59 | 95.79 | 95.63 | 95.76 | 95.68 | 95.76 | 50.00 | 46.96 | 94.65 | 95.02 |
| EmoMoham | 20.49 | 18.15 | 53.35 | 73.16 | 76.84 | 76.95 | 78.00 | 78.55 | 78.88 | 79.54 | 79.94 | 80.23 | 20.02 | 16.86 | 70.70 | 75.59 |
| HateWas | 28.22 | 51.43 | 53.03 | 54.95 | 55.62 | 55.54 | 56.26 | 56.46 | 56.39 | 56.26 | 56.91 | 57.32 | 29.25 | 28.22 | 51.29 | 53.60 |
| HateDav | 28.86 | 68.38 | 73.29 | 75.37 | 76.60 | 76.12 | 77.32 | 76.39 | 76.77 | 77.07 | 76.90 | 76.93 | 31.20 | 30.34 | 57.32 | 67.51 |
| HateBas | 50.93 | 54.01 | 52.50 | 53.49 | 53.56 | 53.77 | 52.72 | 53.69 | 54.42 | 54.98 | 53.51 | 53.62 | 45.40 | 46.97 | 51.86 | 54.08 |
| HumorMea | 42.89 | 90.08 | 92.22 | 92.98 | 93.13 | 93.56 | 93.57 | 93.82 | 94.00 | 93.90 | 94.33 | 94.43 | 45.11 | 44.00 | 90.90 | 92.15 |
| IronyHee-A | 46.60 | 56.60 | 67.13 | 72.41 | 74.13 | 74.43 | 76.25 | 76.26 | 76.39 | 77.06 | 78.15 | 77.03 | 47.26 | 55.23 | 71.80 | 75.48 |
| IronyHee-B | 19.99 | 21.82 | 30.42 | 39.89 | 46.99 | 47.97 | 49.80 | 51.11 | 53.21 | 54.25 | 56.66 | 56.73 | 17.08 | 21.35 | 33.09 | 45.62 |
| OffenseZamp | 44.58 | 73.92 | 76.19 | 78.03 | 79.25 | 79.58 | 79.10 | 79.65 | 79.40 | 79.60 | 80.32 | 79.35 | 45.86 | 45.30 | 74.47 | 75.95 |
| SarcRiloff | 44.49 | 44.19 | 45.48 | 43.99 | 78.47 | 78.96 | 78.14 | 78.29 | 79.28 | 78.93 | 79.67 | 78.76 | 45.77 | 44.92 | 77.83 | 78.66 |
| SarcPtacek | 85.44 | 88.13 | 89.21 | 90.71 | 91.61 | 92.47 | 93.34 | 93.77 | 94.39 | 95.03 | 95.76 | 96.40 | 53.31 | 43.61 | 83.95 | 86.01 |
| SarcRajad | 47.01 | 82.25 | 82.89 | 84.70 | 85.09 | 85.70 | 85.87 | 86.52 | 86.32 | 86.87 | 86.65 | 87.13 | 47.90 | 47.01 | 47.01 | 80.09 |
| SarcBam | 62.12 | 76.58 | 78.45 | 79.24 | 80.48 | 81.26 | 81.32 | 81.61 | 81.64 | 81.93 | 82.05 | 81.76 | 45.48 | 42.21 | 76.34 | 77.86 |
| SentiRosen | 45.50 | 50.05 | 41.24 | 42.27 | 78.93 | 87.63 | 88.36 | 88.70 | 89.20 | 89.35 | 89.76 | 89.53 | 51.81 | 52.98 | 88.86 | 89.82 |
| SentiThel | 61.79 | 68.00 | 70.32 | 70.70 | 71.35 | 71.85 | 71.77 | 71.64 | 71.51 | 71.95 | 72.44 | 71.64 | 24.59 | 19.20 | 63.77 | 66.84 |
| StanceMoham | 27.59 | 28.67 | 34.79 | 58.06 | 61.74 | 62.71 | 64.13 | 64.86 | 66.43 | 65.74 | 68.01 | 68.33 | 26.19 | 26.47 | 59.82 | 61.65 |
| Average | 46.87 | 60.46 | 64.75 | 69.08 | 74.96 | 75.88 | 76.35 | 76.70 | 77.12 | 77.39 | 77.92 | 77.81 | 39.14 | 38.23 | 68.35 | 73.50 |
| EmotionWall | 8.44 | 8.78 | 7.76 | 17.85 | 31.73 | 45.72 | 51.85 | 56.03 | 58.17 | 61.24 | 62.31 | 64.48 | 6.25 | 7.86 | 55.09 | 62.94 |
| EmotionDem | 1.74 | 22.10 | 33.95 | 43.88 | 46.79 | 47.76 | 49.06 | 49.61 | 51.02 | 51.24 | 52.89 | 53.33 | 1.27 | 1.48 | 4.41 | 20.92 |
| SarcWalk | 44.46 | 49.15 | 52.05 | 60.70 | 64.68 | 65.06 | 65.05 | 66.16 | 66.17 | 67.48 | 67.57 | 67.27 | 49.57 | 53.74 | 65.94 | 69.24 |
| SarcOra | 48.93 | 59.61 | 57.33 | 75.14 | 75.32 | 76.06 | 75.06 | 76.70 | 76.04 | 77.04 | 76.73 | 77.33 | 40.55 | 64.86 | 76.03 | 76.76 |
| Senti-MR | 48.58 | 84.79 | 86.21 | 86.57 | 87.36 | 87.98 | 87.77 | 87.25 | 88.02 | 88.05 | 88.13 | 87.94 | 43.58 | 59.23 | 85.45 | 86.68 |
| Senti-YT | 48.07 | 46.96 | 42.87 | 43.43 | 50.78 | 90.74 | 91.20 | 91.77 | 91.94 | 92.05 | 91.93 | 92.25 | 45.43 | 44.56 | 91.75 | 91.91 |
| SST-5 | 14.15 | 28.93 | 45.48 | 50.25 | 51.58 | 52.82 | 52.79 | 53.32 | 54.24 | 54.26 | 54.87 | 55.74 | 14.34 | 12.86 | 32.28 | 46.22 |
| SST-2 | 52.28 | 89.19 | 90.43 | 91.63 | 91.96 | 92.31 | 92.55 | 93.04 | 92.91 | 93.28 | 93.44 | 93.32 | 46.12 | 70.62 | 89.23 | 91.08 |
| Average | 33.33 | 48.69 | 52.01 | 58.68 | 62.52 | 69.81 | 70.67 | 71.74 | 72.32 | 73.08 | 73.48 | 73.96 | 30.89 | 39.40 | 62.52 | 68.22 |
Percentage # of Training Samples
1 5 10 20 30 40 50 60 70 80 90 100 20 100 500 1000
CrisisOltea 94.21 94.94 95.28 95.53 95.69 95.72 95.76 95.81 95.89 95.96 95.86 95.94 61.80 90.88 94.31 94.63
EmoMoham 24.31 53.06 75.65 77.15 78.46 78.53 78.77 79.68 80.17 79.75 81.00 81.05 23.47 41.68 76.72 78.35
HateWas 32.05 51.26 53.38 54.94 55.47 56.18 55.99 56.46 56.80 57.06 57.29 57.13 34.03 32.73 51.66 53.03 HateDav 38.33 71.56 73.83 74.42 76.12 75.36 76.50 76.98 76.96 75.93 77.81 77.15 34.57 34.33 66.42 70.04
HateBas 52.43 49.63 48.54 49.62 50.11 48.63 50.51 49.55 50.98 52.33 51.20 52.32 48.50 48.69 47.91 47.77
HumorMea 87.85 91.21 92.17 92.34 92.86 92.70 92.98 92.72 93.39 93.83 93.45 93.42 61.12 89.40 92.11 92.33
IronyHee-A 55.34 65.12 69.03 70.36 71.15 72.07 72.34 72.80 74.06 73.86 75.32 75.36 47.78 62.27 69.17 70.81
IronyHee-B 24.70 29.93 38.35 46.56 48.07 49.36 51.92 52.88 53.28 53.24 53.02 54.06 22.69 28.97 43.97 47.44
OffenseZamp 56.44 75.83 76.51 78.26 79.01 79.86 80.08 79.38 80.17 79.91 80.31 80.80 50.05 47.84 74.67 77.01 SarcRiloff 49.67 50.08 50.87 69.15 76.39 75.52 76.36 76.03 76.45 77.53 78.14 80.27 49.37 48.90 74.22 77.77 SarcPtacek 84.26 87.25 88.17 89.49 90.47 91.68 92.41 93.16 93.89 94.56 95.35 96.07 62.61 64.88 83.56 84.73
SarcRajad 80.89 83.20 83.92 85.12 85.78 85.21 86.01 86.18 86.14 86.19 86.24 87.20 48.68 48.28 80.20 82.51
SarcBam 70.06 75.35 77.85 78.05 79.21 79.65 79.83 80.64 80.60 81.69 81.23 81.40 53.37 65.46 74.84 76.49 SentiRosen 50.91 60.45 76.82 87.28 89.19 89.62 89.81 89.84 90.01 90.34 90.13 90.64 62.69 85.07 90.69 90.31
SentiThel 63.40 68.90 70.07 70.03 70.96 71.30 71.15 71.13 71.17 71.17 71.56 71.68 26.96 35.60 64.63 66.59
StanceMoham 34.85 39.11 52.68 60.98 64.86 65.87 66.81 67.96 68.71 68.80 69.71 70.48 32.19 39.92 59.86 64.41
Average 56.23 65.43 70.19 73.70 75.24 75.45 76.08 76.32 76.79 77.01 77.35 77.81 44.99 54.06 71.56 73.39
EmotionWall 11.47 23.74 33.53 47.89 56.53 61.85 63.77 64.81 66.67 66.60 67.51 67.68 13.27 37.34 64.42 67.28
EmotionDem 6.54 32.45 43.01 47.14 48.98 50.07 52.00 52.55 54.19 55.42 55.41 55.27 1.41 5.04 16.61 30.48
SarcWalk 49.94 51.42 54.93 61.15 60.60 62.37 62.92 62.95 63.91 64.63 64.12 65.04 51.43 53.00 63.73 65.99 SarcOra 53.84 63.83 65.38 73.30 75.02 75.14 76.09 75.99 77.31 77.61 77.44 77.12 47.54 69.24 73.89 77.37
Senti-MR 83.37 86.80 87.12 87.29 87.68 87.76 88.23 87.98 88.45 88.66 88.45 89.09 58.39 84.19 86.24 87.12 Senti-YT 52.25 53.93 63.56 90.35 90.83 91.66 91.65 91.61 92.11 92.17 92.24 92.23 55.59 74.67 91.96 92.03 SST-5 22.17 42.07 51.01 52.34 53.02 53.87 54.06 54.38 54.98 54.85 55.46 55.09 16.13 17.87 46.35 50.77
SST-2 88.13 91.65 92.01 92.97 93.27 93.36 93.69 93.92 93.91 93.84 94.22 94.29 68.44 90.28 91.63 92.95
Average 45.96 55.74 61.32 69.05 70.74 72.01 72.80 73.03 73.94 74.22 74.36 74.48 39.02 53.95 66.85 70.50
Table 22: Full results of few-shot learning on SimCSE-Distant.
| Percentage | # of Training Samples |
|--------------|-------------------------|
1 5 10 20 30 40 50 60 70 80 90 100 20 100 500 1000
CrisisOltea 94.25 94.97 95.33 95.49 95.55 95.66 95.75 95.81 95.85 95.92 95.82 95.92 54.77 90.26 94.09 94.89
EmoMoham 40.74 64.88 74.52 75.24 78.39 77.92 77.96 79.74 79.67 79.42 80.54 80.54 33.70 52.43 77.18 77.97
HateWas 32.38 51.72 53.62 54.54 55.74 56.05 56.38 56.78 56.92 57.00 56.95 57.14 32.73 37.08 51.77 52.93 HateDav 51.88 70.75 72.86 76.27 76.30 75.80 76.30 76.45 77.00 75.89 76.79 76.79 32.33 34.89 67.12 69.86 HateBas 47.58 48.71 46.41 50.88 48.70 48.72 49.00 48.70 49.28 50.14 50.15 52.17 49.36 51.14 49.74 50.93
HumorMea 89.39 91.94 92.07 92.95 93.53 93.06 93.52 93.29 93.64 93.99 94.05 94.13 66.98 90.18 91.98 92.32
IronyHee-A 58.60 63.36 69.51 71.60 73.16 73.97 75.39 76.02 76.41 76.56 76.56 77.15 56.24 63.66 70.44 73.55
IronyHee-B 30.15 35.38 39.40 47.69 49.89 51.10 53.27 53.96 55.58 54.95 56.19 57.48 24.25 30.94 44.08 49.57 OffenseZamp 58.21 76.41 76.68 78.07 78.99 79.24 79.38 80.28 79.95 79.82 79.67 79.94 53.99 47.43 74.20 76.37
SarcRiloff 48.09 53.79 73.04 75.10 77.06 78.67 79.46 78.18 78.00 78.63 79.12 79.26 51.01 66.24 77.01 79.09
SarcPtacek 84.03 86.98 88.38 89.79 90.68 91.65 92.24 93.01 93.93 94.72 95.45 96.13 61.84 77.55 83.80 85.23
SarcRajad 81.12 83.42 84.50 85.62 85.75 86.10 86.24 86.16 86.77 86.99 86.90 87.45 49.20 56.02 80.90 82.63
SarcBam 69.96 75.07 77.42 78.85 79.13 80.33 80.60 80.79 81.25 81.37 80.68 81.31 52.21 66.83 75.82 76.61
SentiRosen 63.33 65.42 85.20 87.69 88.39 89.09 89.49 90.43 90.16 90.71 90.49 90.65 60.24 84.45 90.35 90.59
SentiThel 62.19 68.26 69.31 70.54 71.70 71.25 71.29 71.17 71.82 71.56 71.07 71.73 35.47 44.11 63.57 65.76
StanceMoham 32.91 42.83 50.68 58.91 64.28 65.01 67.25 68.06 68.51 68.99 70.19 69.74 31.31 39.80 59.99 63.38
Average 59.05 67.12 71.81 74.33 75.45 75.85 76.47 76.80 77.17 77.29 77.54 77.97 46.60 58.31 72.00 73.86
EmotionWall 13.32 24.15 35.91 51.67 60.54 64.25 65.18 65.55 67.36 66.92 68.68 68.36 15.03 37.66 66.24 68.38 EmotionDem 9.07 34.76 44.44 48.15 49.17 51.96 53.83 53.82 55.36 55.50 54.97 57.43 2.49 6.95 18.31 31.65
SarcWalk 50.36 53.15 58.43 61.57 62.78 63.28 64.74 64.01 65.57 64.88 66.28 67.39 49.08 54.10 64.36 67.36
SarcOra 54.61 64.78 66.60 71.23 74.62 75.61 76.67 76.85 76.08 77.58 76.78 77.76 49.37 69.83 76.78 78.04
Senti-MR 84.79 86.30 86.80 87.80 87.57 87.55 87.93 87.61 88.60 88.58 88.92 89.15 61.15 85.41 86.48 86.93 Senti-YT 65.50 78.44 85.95 90.51 91.81 91.83 91.42 91.62 91.67 92.00 92.30 92.26 66.60 82.08 92.01 92.47 SST-5 29.58 43.42 51.77 52.75 53.85 54.16 53.71 54.99 55.06 54.82 55.29 56.00 23.75 25.60 48.93 51.38
SST-2 90.50 91.78 92.87 93.42 93.71 93.72 94.05 94.27 94.34 94.57 94.46 94.64 70.04 91.35 92.65 93.51
Average 49.72 59.60 65.35 69.64 71.76 72.79 73.44 73.59 74.26 74.36 74.71 75.37 42.19 56.62 68.22 71.21
Table 23: Full results of few-shot learning on DCL.
Percentage # of Training Samples
1 5 10 20 30 40 50 60 70 80 90 100 20 100 500 1000
CrisisOltea 94.88 95.26 95.61 95.59 95.65 95.75 95.82 95.72 95.85 95.88 96.04 95.94 67.01 93.24 94.87 95.10
EmoMoham 30.07 66.09 76.41 77.93 79.08 78.51 79.80 80.42 80.69 79.35 80.96 81.05 23.74 55.31 77.21 79.26
HateWas 33.12 53.06 54.15 54.85 55.84 56.30 56.65 56.73 56.90 57.10 57.30 57.13 33.88 38.36 52.63 54.14 HateDav 62.43 72.62 74.77 74.38 75.47 76.11 77.32 77.53 77.19 77.08 77.96 77.15 33.47 42.19 68.00 70.83
HateBas 48.02 48.66 48.78 52.54 51.48 50.25 53.48 52.29 52.31 52.70 53.59 52.32 52.09 50.49 48.85 52.44
HumorMea 88.09 90.52 91.37 92.07 92.55 92.20 92.34 92.02 92.25 92.61 92.06 93.42 58.63 89.43 91.06 91.55
IronyHee-A 62.51 67.18 70.63 72.21 72.78 73.84 74.06 74.57 76.09 77.13 76.26 75.36 53.15 65.05 70.91 73.73
IronyHee-B 28.46 35.86 43.12 48.50 50.67 51.71 52.75 54.00 54.99 54.55 55.01 54.06 28.56 32.75 46.88 50.10
OffenseZamp 66.53 76.15 78.21 79.30 79.49 80.23 80.56 80.20 80.93 80.39 80.55 80.80 51.41 51.08 75.55 77.78 SarcRiloff 53.31 54.58 74.38 73.01 75.32 74.26 76.59 75.62 76.76 76.63 77.33 80.27 52.42 64.76 76.90 76.52 SarcPtacek 84.69 87.39 88.36 89.73 90.54 91.25 92.40 93.07 93.88 94.57 95.29 96.07 66.07 77.98 83.83 85.59
SarcRajad 79.77 82.55 83.75 84.78 85.64 85.50 85.80 85.68 86.11 86.10 86.35 87.20 48.87 52.95 79.76 81.44
SarcBam 71.06 75.57 77.61 78.89 79.21 80.11 80.42 80.33 80.92 80.51 81.02 81.40 54.57 68.07 75.05 76.81 SentiRosen 54.63 73.04 86.34 89.20 90.11 90.67 90.36 91.14 91.24 91.23 91.36 90.64 69.13 88.21 91.31 91.38
SentiThel 65.10 69.63 70.46 70.68 71.74 71.83 72.02 71.77 71.85 72.10 71.57 71.68 25.34 39.56 65.87 67.34
StanceMoham 33.86 47.08 57.05 63.91 67.62 67.18 68.54 69.82 70.00 69.65 70.11 70.48 31.78 41.55 64.78 67.55
Average 59.78 68.45 73.19 74.85 75.82 75.98 76.81 76.93 77.37 77.35 77.67 77.81 46.88 59.44 72.72 74.47
EmotionWall 14.07 23.50 37.91 54.60 61.89 65.34 65.90 67.00 67.37 67.15 67.84 67.68 11.85 44.18 67.15 68.85
EmotionDem 13.43 39.26 45.52 48.55 50.46 51.77 54.37 54.93 55.58 56.87 57.31 55.27 2.66 6.23 24.99 38.03
SarcWalk 47.28 52.12 60.47 63.49 64.28 65.45 66.92 66.19 68.03 67.72 68.04 65.04 48.78 52.23 66.80 67.28 SarcOra 54.88 70.84 72.21 74.24 74.46 76.80 76.64 77.36 77.47 77.93 77.43 77.12 54.03 70.98 76.68 76.26
Senti-MR 84.62 85.83 87.25 87.74 88.52 88.47 89.11 88.82 89.63 89.60 89.30 89.09 50.89 85.00 85.77 86.91 Senti-YT 54.82 84.10 87.81 90.23 90.51 91.20 91.38 91.77 91.69 92.03 91.83 92.23 72.37 89.37 91.53 91.81 SST-5 30.24 48.09 52.28 53.25 53.90 54.18 54.64 55.08 55.11 55.44 55.65 55.09 17.17 27.91 49.03 52.71
SST-2 91.45 92.79 93.31 93.86 94.20 94.39 94.40 94.34 94.69 95.07 94.88 94.29 69.91 92.15 92.90 93.54
Average 48.85 62.06 67.10 70.75 72.28 73.45 74.17 74.44 74.95 75.22 75.28 74.48 40.96 58.51 69.36 71.92
Table 24: Full results of few-shot learning on InfoDCL-RoBERTa.
| Percentage | # of Training Samples | | | | | | | | | | | | | | | |
|--------------|-------------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| 1 | 5 | 10 | 20 | 30 | 40 | 50 | 60 | 70 | 80 | 90 | 100 | 20 | 100 | 500 | 1000 | |
| CrisisOltea | 94.09 | 95.07 | 95.29 | 95.55 | 95.70 | 95.60 | 95.83 | 95.79 | 95.86 | 95.84 | 95.84 | 95.84 | 57.68 | 89.00 | 94.13 | 94.79 |
| EmoMoham | 29.53 | 34.42 | 67.80 | 74.42 | 77.04 | 77.55 | 77.83 | 79.56 | 80.06 | 80.66 | 80.04 | 81.96 | 25.21 | 30.90 | 73.64 | 76.25 |
| HateWas | 31.12 | 52.01 | 53.92 | 54.92 | 55.82 | 55.86 | 56.38 | 56.95 | 56.48 | 57.11 | 56.94 | 57.65 | 33.14 | 31.69 | 52.52 | 53.62 |
| HateDav | 32.42 | 69.28 | 74.02 | 75.12 | 76.59 | 76.15 | 76.77 | 77.05 | 77.23 | 77.40 | 77.77 | 77.94 | 32.84 | 31.47 | 60.75 | 68.86 |
| HateBas | 51.79 | 51.63 | 49.39 | 52.39 | 53.50 | 52.64 | 53.08 | 52.50 | 53.38 | 54.20 | 55.84 | 53.95 | 49.49 | 49.46 | 51.08 | 52.60 |
| HumorMea | 78.62 | 91.25 | 92.61 | 92.83 | 93.25 | 93.03 | 93.09 | 93.23 | 93.43 | 93.87 | 93.72 | 94.04 | 52.07 | 88.45 | 91.22 | 92.71 |
| IronyHee-A | 58.84 | 67.69 | 71.74 | 72.94 | 73.57 | 75.46 | 77.06 | 76.00 | 76.59 | 77.90 | 77.87 | 78.72 | 54.94 | 63.05 | 72.41 | 74.13 |
| IronyHee-B | 21.92 | 32.05 | 36.96 | 46.94 | 50.06 | 50.79 | 52.74 | 53.28 | 56.22 | 55.36 | 58.12 | 59.15 | 23.50 | 30.29 | 39.78 | 49.35 |
| OffenseZamp | 55.61 | 74.56 | 77.48 | 78.14 | 79.31 | 79.64 | 79.68 | 80.47 | 79.96 | 80.91 | 80.26 | 79.83 | 53.79 | 52.02 | 73.74 | 76.39 |
| SarcRiloff | 56.77 | 54.25 | 53.80 | 77.93 | 79.83 | 79.47 | 78.91 | 78.66 | 79.29 | 78.81 | 79.14 | 80.52 | 55.84 | 52.23 | 78.41 | 79.21 |
| SarcPtacek | 85.54 | 87.98 | 89.01 | 90.47 | 91.32 | 92.31 | 93.00 | 93.77 | 94.37 | 95.14 | 95.77 | 96.67 | 62.96 | 66.66 | 84.86 | 85.91 |
| SarcRajad | 80.56 | 82.99 | 83.82 | 84.98 | 86.12 | 86.07 | 86.12 | 86.34 | 86.10 | 86.78 | 86.42 | 87.20 | 48.97 | 48.55 | 80.24 | 81.42 |
| SarcBam | 71.96 | 78.74 | 79.64 | 81.03 | 80.94 | 81.84 | 82.25 | 81.96 | 82.42 | 82.88 | 83.11 | 83.20 | 54.47 | 67.95 | 77.88 | 79.86 |
| SentiRosen | 51.13 | 67.15 | 80.51 | 87.87 | 88.24 | 88.69 | 88.92 | 89.22 | 89.49 | 89.95 | 89.63 | 90.41 | 62.97 | 78.22 | 89.73 | 90.35 |
| SentiThel | 65.32 | 69.46 | 69.76 | 70.62 | 71.07 | 71.31 | 71.22 | 71.65 | 71.71 | 71.45 | 72.09 | 71.98 | 26.79 | 28.37 | 64.63 | 67.71 |
| StanceMoham | 31.67 | 40.06 | 48.05 | 56.54 | 61.17 | 61.10 | 64.04 | 65.38 | 66.12 | 66.08 | 67.19 | 68.22 | 29.90 | 33.94 | 55.87 | 61.30 |
| Average | 56.06 | 65.54 | 70.24 | 74.54 | 75.84 | 76.10 | 76.68 | 76.99 | 77.42 | 77.77 | 78.11 | 78.58 | 45.29 | 52.64 | 71.31 | 74.03 |
| EmotionWall | 12.31 | 14.81 | 27.45 | 44.30 | 54.18 | 57.67 | 60.11 | 59.24 | 62.41 | 64.31 | 65.20 | 65.61 | 13.00 | 29.74 | 61.28 | 65.57 |
| EmotionDem | 4.39 | 26.17 | 36.93 | 45.15 | 48.75 | 50.02 | 50.85 | 51.32 | 52.58 | 53.59 | 53.77 | 54.99 | 3.30 | 3.01 | 13.11 | 23.36 |
| SarcWalk | 47.12 | 50.30 | 54.64 | 56.70 | 62.89 | 62.29 | 64.76 | 65.53 | 65.84 | 65.57 | 67.73 | 67.30 | 46.91 | 51.50 | 65.01 | 67.89 |
| SarcOra | 49.18 | 66.42 | 68.51 | 70.98 | 74.05 | 74.78 | 75.17 | 75.85 | 75.40 | 76.33 | 77.27 | 76.88 | 49.69 | 67.78 | 76.81 | 76.70 |
| Senti-MR | 82.95 | 86.37 | 87.16 | 87.16 | 88.30 | 88.30 | 88.37 | 88.11 | 88.19 | 88.58 | 88.32 | 88.21 | 55.86 | 83.00 | 85.77 | 86.90 |
| Senti-YT | 56.44 | 59.69 | 59.46 | 90.81 | 91.02 | 92.04 | 92.13 | 92.06 | 92.35 | 92.23 | 92.36 | 92.41 | 64.98 | 44.15 | 92.22 | 92.07 |
| SST-5 | 23.14 | 38.42 | 49.67 | 52.45 | 52.98 | 54.06 | 54.09 | 54.45 | 54.84 | 55.01 | 55.13 | 55.93 | 17.84 | 21.24 | 40.02 | 49.84 |
| SST-2 | 89.22 | 91.04 | 91.52 | 91.85 | 92.72 | 92.84 | 93.16 | 93.45 | 93.33 | 93.44 | 93.42 | 93.73 | 58.17 | 90.51 | 90.88 | 91.69 |
| Average | 45.59 | 54.15 | 59.42 | 67.43 | 70.61 | 71.50 | 72.33 | 72.50 | 73.12 | 73.63 | 74.15 | 74.38 | 38.72 | 48.87 | 65.64 | 69.25 |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Sec. 7
✓ A2. Did you discuss any potential risks of your work?
Section of Ethical Considerations
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Sec. 1
✓ A4. Have you used AI writing assistants when working on this paper?
We use Grammarly to assist with grammar correction.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section C.2
✓ B1. Did you cite the creators of artifacts you used?
Section C.2
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section of Ethical Considerations
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section C.2
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Section 3.6 and C.2
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 3.5, 3.6, C1, and C.2
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 3.5, 3.6, C1, and C.2
## C ✓ **Did You Run Computational Experiments?** Section 3.7, D.2, And D.3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 3.7, D.2, and D.3 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3.7, D.2, and D.3
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4, 5, and E
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section D and E
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
wang-etal-2023-noisy | Noisy Positive-Unlabeled Learning with Self-Training for Speculative Knowledge Graph Reasoning | https://aclanthology.org/2023.findings-acl.153 | This paper studies speculative reasoning task on real-world knowledge graphs (KG) that contain both false negative issue (i.e., potential true facts being excluded) and false positive issue (i.e., unreliable or outdated facts being included). State-of-the-art methods fall short in the speculative reasoning ability, as they assume the correctness of a fact is solely determined by its presence in KG, making them vulnerable to false negative/positive issues. The new reasoning task is formulated as a noisy Positive-Unlabeled learning problem. We propose a variational framework, namely nPUGraph, that jointly estimates the correctness of both collected and uncollected facts (which we call label posterior) and updates model parameters during training. The label posterior estimation facilitates speculative reasoning from two perspectives. First, it improves the robustness of a label posterior-aware graph encoder against false positive links. Second, it identifies missing facts to provide high-quality grounds of reasoning. They are unified in a simple yet effective self-training procedure. Empirically, extensive experiments on three benchmark KG and one Twitter dataset with various degrees of false negative/positive cases demonstrate the effectiveness of nPUGraph. | # Noisy Positive-Unlabeled Learning With Self-Training For Speculative Knowledge Graph Reasoning
Ruijie Wang, Baoyu Li, Yichen Lu, Dachun Sun, Jinning Li, Yuchen Yan, Shengzhong Liu, Hanghang Tong, **Tarek F. Abdelzaher**
University of Illinois Urbana-Champaign, IL, USA
{ruijiew2,baoyul2,yichen14,dsun18,jinning4,yucheny5,sl29,htong,zaher}@illinois.edu
## Abstract
This paper studies speculative reasoning task on real-world knowledge graphs (KG) that contain both *false negative issue* (i.e., potential true facts being excluded) and false positive issue (i.e., unreliable or outdated facts being included). State-of-the-art methods fall short in the speculative reasoning ability, as they assume the correctness of a fact is solely determined by its presence in KG, making them vulnerable to false negative/positive issues. The new reasoning task is formulated as a noisy Positive-Unlabeled learning problem. We propose a variational framework, namely nPUGraph, that jointly estimates the correctness of both collected and uncollected facts (which we call *label posterior*) and updates model parameters during training. The label posterior estimation facilitates speculative reasoning from two perspectives. First, it improves the robustness of a label posterior-aware graph encoder against false positive links. Second, it identifies missing facts to provide high-quality grounds of reasoning. They are unified in a simple yet effective self-training procedure. Empirically, extensive experiments on three benchmark KG
and one Twitter dataset with various degrees of false negative/positive cases demonstrate the effectiveness of nPUGraph.
## 1 Introduction
Knowledge graphs (KG), which store real-world facts in triples (head entity, relation, *tail entity*),
have facilitated a wide spectrum of knowledgeintensive applications (Wang et al., 2018b; Saxena et al., 2021; Qian et al., 2019; Wang et al., 2018a, 2022a). Automatically reasoning facts based on observed ones, a.k.a. Knowledge Graph Reasoning
(KGR) (Bordes et al., 2013), becomes increasingly vital since it allows for expansion of the existing KG at a low cost.
Numerous efforts have been devoted to KGR
task (Bordes et al., 2013; Lin et al., 2015; Trouillon et al., 2017; Sun et al., 2019; Li et al., 2021),
![0_image_0.png](0_image_0.png)
which assume the correctness of a fact is solely determined by its presence in KG. They ideally view facts included in KG as positive samples and excluded facts as negative samples. However, most real-world reasoning has to be performed based on sparse and unreliable observations, where there may be true facts excluded or false facts included.
Reasoning facts based on sparse and unreliable observations (which we call *speculative KG reasoning*) are still underexplored.
In this paper, we aim to enable the speculative reasoning ability on real-world KG. The fulfillment of the goal needs to address two commonly existing issues, as shown in Figure 1: 1) **The false negative**
issue (i.e., sparse observation): Due to the graph incompleteness, facts excluded from the KG can be used as implicit grounds of reasoning. This is particularly applicable to non-obvious facts. For example, personal information such as the birthplace of politicians may be missing when constructing a political KG, as they are not explicitly stated in the political corpus (Tang et al., 2022). However, it can be critical while reasoning personal facts like nationality. 2) **The false positive issue** (i.e., noisy observation): Facts included in the KG may be unreliable and should not be directly grounded without inspection. It can happen when relations between entities are incorrectly collected or when facts are extracted from outdated or unreliable sources. For example, Mary Elizabeth is no longer the Prime Minister of the United Kingdom, which may affect the reasoning accuracy of her current workplace. These issues generally affect both one-hop reasoning (Bordes et al., 2013) and multi-hop reasoning (Saxena et al., 2021). The main focus of this paper is investigating the one-hop speculative reasoning task as it lays the basis for complicated multi-hop reasoning capability.
Speculative KG reasoning differs from conventional KG reasoning in that the correctness of each collected/uncollected fact needs to be dynamically estimated as part of the learning process, such that the grounds of reasoning can be accordingly calibrated. Unfortunately, most existing work, if not all, lacks such inspection capability. Knowledge graph embedding methods (Bordes et al., 2013; Lin et al., 2015; Yang et al., 2014; Trouillon et al., 2017; Sun et al., 2019) and graph neural network (GNN) methods (Schlichtkrull et al., 2017; Dettmers et al.,
2018; Nguyen et al., 2018; Vashishth et al., 2020; Li et al., 2021) can easily overfit the false negative/positive cases because of their training objective that ranks the collected facts higher than other uncollected facts in terms of plausibility. Recent attempts on uncertain KG (Chen et al., 2019; Kertkeidkachorn et al., 2019) measure the uncertainty scores for facts, which can be utilized to detect false negative/positive samples. However, they explicitly require the ground truth uncertainty scores as supervision for reasoning model training, which are usually unavailable in practice.
Motivated by these observations, we formulate the speculative KG reasoning task as a noisy Positive-Unlabeled learning problem. The facts contained in the KG are seen as noisy positive samples with a certain level of label noise, and the facts excluded from the KG are treated as unlabeled samples, which include both negative ones and possible factual ones. Instead of determining the correctness of facts before training the reasoning model without inspection, we learn the two perspectives in an end-to-end training process. To this end, we propose nPUGraph, a novel variational framework that regards the underlying correctness of collected/uncollected facts in the KG
as latent variables for the reasoning process. We jointly update model parameters and estimate the posterior likelihood of the correctness of each collected/uncollected fact (referred to as *label posterior*), through maximizing a theoretical lower bound of the log-likelihood of each fact being collected or uncollected.
The estimated label posterior further facilitates the speculative KG reasoning from two aspects:
1) It removes false positive facts contained in KG
and improves the representation quality. We accordingly propose a label posterior-aware encoder to incorporate information only from entity neighbors induced by facts with a high posterior probability, under the assumption that the true positive facts from the collected facts provide more reliable information for reasoning. 2) It complements the grounds of reasoning by selecting missing but possibly plausible facts with high label posterior, which are iteratively added to acquire more informative samples for model training. These two procedures are ultimately unified in a simple yet effective self-training strategy that alternates between the *data sampling based on latest label posteriors* and the *model training based on latest data samples*. Empirically, nPUGraph outperforms eleven state-of-the-art baselines on three benchmark KG
data and one Twitter data we collected by large margins. Additionally, its robustness is demonstrated in speculative reasoning on data with multiple ratios of false negative/positive cases.
Our contributions are summarized as follows:
(1) We open up a practical but underexplored problem space of speculative KG reasoning, and formulate it as a noisy Positive-Unlabeled learning task; (2) We take the first step in tackling this problem by proposing a variational framework nPUGraph to jointly optimize reasoning model parameters and estimate fact label posteriors; (3) We propose a simple yet effective self-training strategy for nPUGraph to simultaneously deal with false negative/positive issues; (4) We perform extensive evaluations to verify the effectiveness of nPUGraph on both benchmark KG and Twitter interaction data with a wide range of data perturbations.
## 2 Preliminaries 2.1 Speculative Knowledge Graph Reasoning
A knowledge graph (KG) is denoted as G =
{(eh, r, et)*} ⊆ S*, where S = *E × R × E* denotes triple space, E denotes the entity set, R denotes the relation set. Each triple s = (eh*, r, e*t) refers to that a head entity eh ∈ E has a relation r ∈ R with a tail entity et ∈ E. Typically, a score function ψ(s; Θ),
parameterized by Θ, is designed to measure the plausibility of each potential triple s = (eh*, r, e*t),
and to rank the most plausible missing ones to complete KG during inference (Bordes et al., 2013; Sun et al., 2019). The goal of speculative KG reasoning is to infer the most plausible triple for each incomplete triple (eh, r, e?) or (e?*, r, e*t) given by sparse and unreliable observations in G. In addition, it requires correctness estimation for each potential fact collected or uncollected by G.
## 2.2 Noisy Positive-Unlabeled Learning
Positive-Unlabeled (PU) learning is a learning paradigm for training a model when only positive and unlabeled data is available (Plessis et al., 2015).
We formulate the speculative KG reasoning task as a noisy Positive-Unlabeled learning problem, where the positive set contains potentially label noise from false facts (Jain et al., 2016).
PU Triple Distribution. For the speculative KG
reasoning task, we aim to learn a binary classifier that maps a triple space S to a label space Y = {0, 1}. Data are split as labeled (collected)1 triples s l ∈ SL and unlabeled (uncollected) triples s u ∈ SU . The labeled triples are considered noisy positive samples with a certain level of label noise.
The distribution of labeled triples can be represented as follows:
## S L ∼ Βϕl1(S L) + (1 − Β)Φ L 0(S L), (1)
where ϕ
ly denotes the probability of being collected
over triple space S for the positive class (y = 1) and negative class (y = 0), and β ∈ [0, 1) denotes
the proportion of true positive samples in labeled
data. Unlabeled triples include both negative samples and possible factual samples. The distribution
of unlabeled samples can be represented as follows:
s
u ∼ αϕu
1 (s
u) + (1 − α)ϕ
u
$$s^{a})+(1-\alpha)\phi_{0}^{a}(s^{a}),$$
u), (2)
where ϕ u y = 1−ϕ ly denotes the probability of being uncollected, α ∈ [0, 1) denotes the positive class prior, i.e., the proportion of positive samples in unlabeled data.
PU Triple Construction. We then discuss the construction of S
L and S
U based on the collected KG
G. Triples in G naturally serve as labeled samples with a ratio of noise, i.e., S
L = G. For unlabeled set S
U , However, directly using *S \ G* as 1In this paper, we interchangeably use the term *labeled/unlabeled* and *collected/uncollected* with no distinction.
![2_image_0.png](2_image_0.png)
the unlabeled set S
U would result in too many unlabeled samples for training due to the large number of possible triples in triple space S. Following (Tang et al., 2022), we construct S
U as follows: For each labeled triple s l i = (eh*, r, e*t),
we construct K unlabeled triples s u ik by replacing the head and tail respectively with other entities:
s u ik = (eh*, r, e*−
k
) or (e
− k
, r, et), where e
−
kis the selected entity that ensures s u ik ∈ S /
L. Initially, the construction can be randomized. During the training process, it is further improved by selecting unlabeled samples with high label posterior in a self-training scheme, so as to cover positive samples in the unlabeled set to the greatest extent.
## 3 Methodology 3.1 Overview
Our approach views underlying triple labels (positive/negative) as latent variables, influencing the collection probability. Unlike the common objective of reasoning training that ranks the plausibility of the collected triples higher than uncollected ones, we instead maximize the data likelihood of each potential triple being collected or not. To this end, as shown in Figure 2, we propose nPUGraph framework to jointly optimize parameters and infer the label posterior. During the training process, the latest label posterior estimation can be utilized by a label posterior-aware encoder, which improves the quality of representation learning by only integrating information from the entity neighbors induced by true facts. Finally, a simple yet effective self-training strategy based on label posterior is proposed, which can dynamically update neighbor sets for the encoder and sample unlabeled triplets to cover positive samples in the unlabeled set to the greatest extent for model training.
The remaining of this section is structured as follows: Section 3.2 first formalizes the learning objective and the variational framework for likelihood maximization. Section 3.3 details the label posterior-aware encoder for representation learning, followed by Section 3.4 that introduces the self-training strategy.
## 3.2 Noisy Pu Learning On Kg
Due to the false negative/positive issues, the correctness of a fact (y) is not solely determined by its presence in a knowledge graph. nPUGraph addresses the issue by treating the underlying label as a latent variable that influences the probability of being collected or not. We, therefore, set maximizing the data collection likelihood as our objective.
In such a learning paradigm, the assumptions that collected triples are correct p(y = 1|s l) = 1 and uncollected triples are incorrect p(y = 0|s u) = 1 are removed. We aim to train a model on labeled triples S
L and unlabeled ones S
U , and infer the label posterior p(y|s u) and p(y|s l) at the same time by data likelihood maximization. The latest label posterior can help to detect false negative/positive cases during model training.
We first derive our training objective. To be more formal, the log-likelihood of each potential fact being collected or not is lower bounded by Eq. (3), which is given by Theorem 1.
Theorem 1. The log-likelihood of the complete data log p(S) *is lower bounded as follows:*
log p(S) ≥ E q(Y) [log p(S|Y)] − KL(q(Y)∥p(Y)) = E sl∈SL hw llog[ϕ l 1 (s l)] + (1 − w l) log[ϕ l 0 (s l)]i + E su∈SU -(w ulog[ϕ u 1 (s u)] + (1 − w u) log[ϕ u 0 (s u)] − KL(WU ∥W˜ U ) − KL(WL∥W˜ L) − ∥WL∥1 |SL|− ∥WU ∥1 |SU |, (3)
where S *denotes all labeled/unlabeled triples,* Y
is the corresponding latent variable indicating the positive/negative labels for triples, WU = {w u i}
denotes the point-wise probability for the uncollected triples being positive, WL = {w l i} denotes the probability for the collected triples being positive. W˜ U and W˜ U are the approximation of the collection probability for uncollected/collected triples respectively, produced by nPUGraph based on the latest parameters.
Proof. Refer to Appendix A.1 for proof.
We treat label Y as a latent variable and derive the lower bound for the log-likelihood, which is influenced by the prior knowledge of positive class prior α and true positive ratio β. Thus, maximizing the lower bound can jointly optimize model parameters and infer the posterior label distribution, WU
and WL. Such a learning process enables us to avoid false negative/positive issues during model training since it considers ϕ l0
(one negative triple is collected) and ϕ u 1
(one positive triple is missing)
as non-zero probability, which are determined by the latest label posterior during model training.
Probability Measure. We then specify the probability measures for positive/negative triples being collected, i.e., ϕ l1
(·) and ϕ l0
(·) (ϕ u y(·) = 1 − ϕ ly(·)
for y = 1/0). To better connect to other methods utilizing score functions for KGR, we hereby utilize the sigmoid function σ(·) to directly transform the score function ψ(s; Θ), parameterized by model parameters Θ, to probability:
$$\phi_{1}^{l}(s)=\sigma(\psi_{1}(s;\Theta)),\;\;\;\phi_{0}^{l}(s)=\sigma(\psi_{0}(s;\Theta)),$$
we hereby utilize two score functions ψ1(s; Θ) and ψ0(s; Θ) to measure the positive/negative triples being collected, as the influencing factors based on triple information can be different. We utilize two neural networks to approximate the probability measure, which will be detailed in Section 3.3.
Since we aim to detect the potential existence of positive triples in an unlabeled set, it is unnecessary to push the collection probability of all uncollected triple ϕ ly(s u) to 0 (ϕ u y(s u) to 1). A loose constraint is that we force the uncollection probability of a collected triple s llower than its corresponding uncollected triples s u: ϕ u y(s l) < ϕu y(s u). Therefore, we adopt the pair-wise ranking measure ϕ
⋆
y(s l, su)
to replace ϕ u y(s u) as follows:
$$\phi^{u}_{y}(s^{u})\rightarrow\phi^{*}_{y}(s^{l},s^{u})=\sigma(\psi_{y}(s^{u};\Theta)-\psi_{y}(s^{l};\Theta)).\tag{5}$$
Maximum Probability Training. We then derive the training objective based on Eq. (3) The first part of Eq. (3) measures the probability of data being collected/uncollected. Concretely, given each collected triple s l i ∈ SL and its corresponding K
uncollected triples s u ik ∈ SU , We denote the loss function measuring the probability as L*triple*:
$$\mathcal{L}_{triple}=-\frac{1}{K|\mathcal{S}^{L}|}\sum_{i}\sum_{k}(w_{i}^{l}\log[\phi_{1}^{l}(s_{i}^{l})]$$ $$+(1-w_{i}^{l})\log[\phi_{0}^{l}(s_{i}^{l})]+w_{ik}^{u}\log[\phi_{1}^{*}(s_{i}^{l},s_{ik}^{u})]$$ $$+(1-w_{ik}^{u})\log[\phi_{0}^{*}(s_{i}^{l},s_{ik}^{u})]),\tag{6}$$
2443
![4_image_0.png](4_image_0.png)
where w l i denotes the point-wise probability for the collected triple s l i being positive, w u ik denotes the probability for the uncollected triple s u ik being positive. Based on the definition, the posterior probability of each collected/uncollected triple being positive can be computed as:
$$\tilde{w}_{i}^{l}=\frac{\beta\phi_{1}^{l}(s_{i}^{l})}{\beta\phi_{1}^{l}(s_{i}^{l})+(1-\beta)\phi_{0}^{l}(s_{i}^{l})},\tag{7}$$ $$\tilde{w}_{ik}^{u}=\frac{\alpha\phi_{1}^{u}(s_{ik}^{u})}{\alpha\phi_{1}^{u}(s_{ik}^{u})+(1-\alpha)\phi_{0}^{u}(s_{ik}^{u})}.\tag{8}$$ To increase model expression ability, instead of
forcing WL = W˜ L and WU = W˜ U , we set WL and WU as free parameters and utilize the term LKL = KL(WL∥W˜ L)+KL(WU ∥W˜ U ) to regularize the difference. Finally, based on Eq. (3),
the training objective is formalized as follows:
$$\operatorname*{min}_{\Theta}{\mathcal{L}}=\operatorname*{min}_{\Theta}{\mathcal{L}}_{t r i p l e}+{\mathcal{L}}_{K L}+{\mathcal{L}}_{r e g},$$
where Lreg = ∥WL∥1 + ∥WU ∥1 can be viewed as a normalization term. Considering the sparsity property of real-world graphs, Lreg penalizes the posterior estimation that there are too many true positive facts on KG.
## 3.3 Label Posterior-Aware Encoder
We then introduce the encoder and the score functions ψ1(s; Θ) and ψ0(s; Θ) to measure the probability of positive/negative triples being collected, as shown in Figure 3. Recent work (Schlichtkrull et al., 2017; Dettmers et al., 2018; Nguyen et al.,
2018; Vashishth et al., 2020) has shown that integrating information from neighbors to represent entities engenders better reasoning performance.
However, the message-passing mechanism is vulnerable to the false positive issue, as noise can be integrated via a link induced by a false positive fact.
In light of this, we propose a label posterior-aware encoder to improve the quality of representations.
We represent each entity e ∈ E and each relation r ∈ R into a d-dimensional latent space:
he, hr ∈ R
d. To encode more information in he, we first construct a neighbor set Ne induced by the positive facts related to entity e. The latest label posterior for collected facts W˜ L naturally serves this purpose, as it indicates the underlying correctness for each collected fact.
Therefore, for each entity e, we first sort the related facts by label posterior W˜ L and construct the neighbor set Ne(W˜ L) = {(ei, ri)} from the top facts. Then the encoder attentively aggregates information from the collected neighbors, where the attention weights take neighbor features, relation features into account. Specifically:
$$\mathbf{h}_{e}^{l}=\mathbf{h}_{e}^{l-1}+\sigma\left(\sum_{(e_{i},r_{i})\in\mathcal{N}_{e}(\mathbf{W}^{L})}\gamma_{e,e_{i}}^{l}\left(\mathbf{h}_{e_{i}}^{l-1}\mathbf{M}\right)\right),\tag{10}$$
$$\mathbf{\Phi}(T)$$
$$({\boldsymbol{8}})$$
where l denotes the layer number, σ(·) denotes the activation function, γ le,ei denotes the attention weight of entity eito the represented entity e, and M is the trainable transformation matrix. The attention weight γ le,ei is supposed to be aware of entity feature and topology feature induced by relations.
We design the attention weight γ le,ei as follows:
$$\gamma^{l}_{e,e_{i}}=\frac{\exp(q^{l}_{e,e_{i}})}{\sum\limits_{\mathcal{N}_{e}(\overline{\mathbf{W}}^{L})}\exp(q^{l}_{e,e_{k}})},\ q^{l}_{e,e_{k}}=\mathbf{a}\left(\mathbf{h}^{l-1}\|\mathbf{h}^{l-1}_{e_{k}}\|\mathbf{h}_{r_{k}}\right),\tag{11}$$
$$\mathrm{(G)}$$
where q le,ek measures the pairwise importance from neighbor ek by considering the entity embedding, neighbor embedding, and relation embedding, a ∈
R
3dis a shared parameter in the attention.
To measure the collection probability for positive/negative triples, we utilize two multilayer perception (MLP) to approximate score function ψ1(s; Θ) and ψ0(s; Θ). Specifically, for each triple s = (eh*, r, e*t):
$$(\mathbf{h}_{s}),\quad\quad(12)$$
$$(s;\Theta)=\mathbf{M}$$
$$\mathbf{P}_{1}(\mathbf{h}_{s}),\ \psi_{0}(s;\Theta)=1$$
ψ1(s; Θ) = MLP1(hs), ψ0(s; Θ) = MLP0(hs), (12)
where the MLP input hs = [h leh∥hr∥h let
] concatenates entity embeddings and relation embedding.
## 3.4 Self-Training Strategy
The latest label posterior W˜ L and W˜ U is further utilized in a self-training strategy to enhance speculative reasoning. First, the latest posterior estimation W˜ L for collected links updates neighbor sets to gradually prevent the encoder effects by false
| Algorithm 1: Summary of nPUGraph. Input: The collected triple set S L . Output: The model parameter Θ, predicted triples. 1 Construct the uncollected triple set S U randomly; 2 Initialize the model parameter Θ and the label posterior W˜ L and W˜ U randomly; 3 for each training epoch do 4 Construct neighbor set Ne(W˜ L ) by W˜ L ; |
|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
Input: The collected triple set S
![5_image_1.png](5_image_1.png)
![5_image_2.png](5_image_2.png)
L.
Output: The model parameter Θ, predicted triples.
1 Construct the uncollected triple set S
![5_image_0.png](5_image_0.png)
2 Initialize the model parameter Θ and the label posterior W˜ Land W˜ Urandomly; 3 for *each training epoch* do 4 Construct neighbor set Ne(W˜ L) by W˜ L;
5 Construct uncollected triple set S
6 for *each collected triple* s 7 Collect unlabeled triples {s 8 Calculate ϕ 9 Calculate each ϕ
⋆
y(s l i, su ik) by Eq. (5);
10 end 11 Calculate the total loss L by Eq. (9);
12 Optimize model parameter: Θ = Θ −
∂L
∂Θ
;
13 Update label posterior W˜ Land W˜ U;
14 end
positive links. Moreover, the latest estimation W˜ U
for uncollected facts enables us to continuously sample unlabeled triplets with high label posterior to cover positive samples in the unlabeled set to the greatest extent. For each labeled triple s l i =
(eh*, r, e*t), we construct K unlabeled triples s u ik by replacing the head and tail respectively with other entities: s u ik = (eh*, r, e*−
k
) or (e
−
k
, r, et), where e
−
kis the selected entity that ensures s u ik ∈ S /
L.
Such selection is performed by ranking the corresponding label posterior w˜
u ik. The updates of neighbor sets and unlabeled triples based on label posterior are nested with parameter optimization during model training alternatively. The training of nPUGraph is summarized in Algorithm 1.
## 4 Experiment 4.1 Experimental Setup
Dataset. We evaluate nPUGraph mainly on three benchmark datasets: FB15K (Bordes et al., 2013), FB15k-237 (Toutanova et al., 2015), and WN18 (Bordes et al., 2013) and one Twitter data we collected, which describes user interaction information towards tweets and hashtags. Table 1 summarizes the dataset statistics.
To better fit the real scenario for speculative reasoning, we randomly modify links on KG to simulate more false negative/positive cases. We modify a specific amount of positive/negative links (the ratio of the modified links is defined as perturbation rate, i.e., *ptb_rate*) by flipping. 90% of them are the removed positive links to simulate false negative cases and the remaining 10% are the added negative links to simulate false positive cases. More
ptb_rate **Dataset** |E| |R| #Train #Valid **#Test**
0.1
FB15K 14,951 1,345 340,968 146,129 59,071
FB15K-237 14,541 237 184,803 79,201 20,466
WN18 40,943 18 92,428 39,612 5,000
Twitter 17,839 2 282,233 120,956 110,456
0.3
FB15K 14,951 1,345 276,940 118,688 59,071
FB15K-237 14,541 237 149,229 63,954 20,466
WN18 40,943 18 72,462 31,055 5,000
Twitter 17,839 2 232,748 99,749 110,456
0.5
FB15K 14,951 1,345 213,380 91,448 59,071
FB15K-237 14,541 237 113,772 48,759 20,466
WN18 40,943 18 52,707 22,588 5,000
Twitter 17,839 2 183,263 78,540 110,456
0.7
FB15K 14,951 1,345 150,485 64,493 59,071
FB15K-237 14,541 237 78,531 33,656 20,466
WN18 40,943 18 34,984 14,993 5,000
Twitter 17,839 2 133,778 57,333 110,456
details about datasets and the data perturbation process can be found in Appendix A.2.
Baselines. We compare to eleven state-ofthe-art baselines: 1) KG embedding methods:
TransE (Bordes et al., 2013), **TransR** (Lin et al., 2015), **DistMult** (Yang et al., 2014),
ComplEX (Trouillon et al., 2017), and **RotatE** (Sun et al., 2019); 2) GNN methods on KG: **RGCN** (Schlichtkrull et al., 2017) and CompGCN (Vashishth et al., 2020); 3) Uncertain KG reasoning: **UKGE** (Chen et al., 2019); 4) Negative sampling methods: **NSCaching** (Zhang et al.,
2019) and **SANS** (Ahrabian et al., 2020); 5) PU
learning on KG: **PUDA** (Tang et al., 2022). More details can be found in Appendix A.3.
Evaluation and Implementation. For each
(eh, r, e?) or (e?*, r, e*t), we rank all entities at the missing position in triples, and adopt filtered mean reciprocal rank (MRR) and filtered Hits at
{1, 3, 10} as evaluation metrics (Bordes et al.,
2013). More implementation details of baselines and nPUGraph can be found in Appendix A.4.
## 4.2 Main Results
We first discuss the model performance on noisy and incomplete graphs, with *ptb_rate = 0.3*, as shown in Table 2. nPUGraph achieves consistently better results than all baseline models, with 10.3%
relative improvement on average. Specifically, conventional KGE and GNN-based methods produce unsatisfying performance, as they ignore the false negative/positive issues during model training. In some cases, GNN-based ways are worse, as the message-passing mechanism is more vulnerable to false positive links. As expected, the performance of uncertain knowledge graph embedding model (Chen et al., 2019) is much worse when there
| Dataset | FB15K | FB15K-237 | WN18 | | | | | | | | | |
|--------------------------------------------|---------|-------------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| Metrics | MRR | H@10 | H@3 | H@1 | MRR | H@10 | H@3 | H@1 | MRR | H@10 | H@3 | H@1 |
| Knowledge graph embedding methods | | | | | | | | | | | | |
| TransE | 0.336 | 0.603 | 0.425 | 0.189 | 0.196 | 0.394 | 0.236 | 0.094 | 0.229 | 0.481 | 0.416 | 0.030 |
| TransR | 0.314 | 0.579 | 0.397 | 0.170 | 0.184 | 0.359 | 0.211 | 0.098 | 0.229 | 0.480 | 0.408 | 0.035 |
| DistMult | 0.408 | 0.627 | 0.463 | 0.296 | 0.240 | 0.407 | 0.262 | 0.158 | 0.397 | 0.518 | 0.453 | 0.320 |
| ComplEx | 0.396 | 0.616 | 0.451 | 0.284 | 0.238 | 0.411 | 0.262 | 0.154 | 0.448 | 0.526 | 0.475 | 0.403 |
| RotatE | 0.431 | 0.636 | 0.489 | 0.323 | 0.255 | 0.433 | 0.280 | 0.169 | 0.446 | 0.524 | 0.474 | 0.400 |
| Graph neural network methods on KG | | | | | | | | | | | | |
| RGCN | 0.154 | 0.307 | 0.164 | 0.078 | 0.141 | 0.276 | 0.145 | 0.075 | 0.362 | 0.464 | 0.412 | 0.300 |
| CompGCN | 0.409 | 0.631 | 0.465 | 0.294 | 0.253 | 0.422 | 0.275 | 0.171 | 0.445 | 0.522 | 0.471 | 0.400 |
| Uncertain knowledge graph embedding method | | | | | | | | | | | | |
| UKGE | 0.311 | 0.556 | 0.337 | 0.189 | 0.172 | 0.233 | 0.128 | 0.081 | 0.241 | 0.447 | 0.309 | 0.119 |
| Negative sampling methods | | | | | | | | | | | | |
| NSCaching | 0.371 | 0.576 | 0.424 | 0.265 | 0.190 | 0.329 | 0.208 | 0.121 | 0.306 | 0.401 | 0.334 | 0.255 |
| SANS | 0.372 | 0.599 | 0.434 | 0.252 | 0.243 | 0.416 | 0.267 | 0.158 | 0.453 | 0.528 | 0.479 | 0.409 |
| Positive-Unlabeled learning methods on KG | | | | | | | | | | | | |
| PUDA | 0.403 | 0.623 | 0.458 | 0.291 | 0.234 | 0.394 | 0.255 | 0.156 | 0.382 | 0.499 | 0.444 | 0.306 |
| nPUGraph | 0.486* | 0.718* | 0.534* | 0.342* | 0.287* | 0.481* | 0.315* | 0.191* | 0.493* | 0.582* | 0.519* | 0.442* |
| Gains (%) | 12.7 | 12.8 | 9.2 | 5.9 | 12.6 | 11.2 | 12.5 | 11.4 | 8.9 | 10.3 | 8.3 | 8.0 |
![6_image_0.png](6_image_0.png)
are no available uncertainty scores for model training. SANS and PUDA generate competitive results in some cases, as their negative sampling strategy and PU learning objective can respectively mitigate the false negative/positive issues to some extent.
Table 2 demonstrates the superiority of nPUGraph which addresses the false negative/positive issues simultaneously. For space limitations, we report and discuss the model performance on Twitter data in Table 4 in Appendix A.5.1.
## 4.3 Experiments Under Various Degrees Of Noise And Incompleteness
We investigate the performance of baseline models and nPUGraph under different degrees of noise Table 3: Ablation Studies.
| Dataset | FB15K | FB15K-237 | Gains | | |
|----------------------------|---------|-------------|---------|-------|-------|
| Ablations | MRR | H@10 | MRR | H@10 | % |
| nPUGraph w/o nPU | 0.401 | 0.619 | 0.230 | 0.407 | -20.0 |
| nPUGraph w/o LP-Encoder | 0.457 | 0.681 | 0.261 | 0.459 | -6.6 |
| nPUGraph w/o Self-Training | 0.471 | 0.704 | 0.276 | 0.461 | -3.4 |
| nPUGraph | 0.486 | 0.718 | 0.287 | 0.481 | - |
and incompleteness. Figure 4 reports the performance under various *ptb_rate* , from 0.1 to 0.7, where higher *ptb_rate* means more links are perturbed as false positive/negative cases. Full results are included in Appendix A.5.2. The performance degrades as the *ptb_rate* increases for all in most cases, demonstrating that the false negative/positive issues significantly affect the reasoning performance. However, nPUGraph manages to achieve the best performance in all cases. Notably, the relative improvements are more significant under higher *ptb_rate* .
## 4.4 Model Analysis
Ablation Study. We evaluate performance improvements brought by the nPUGraph framework by following ablations: 1) **nPUGraph w/o nPU** is trained without the noisy Positive-Unlabeled framework, which instead utilizes the margin loss for model training; 2) **nPUGraph w/o LP-Encoder**
eliminates the label posterior-aware encoder (LPEncoder), which aggregates information from all neighbors instead of the sampled neighbors; 3)
nPUGraph w/o Self-Training is trained without the proposed self-training algorithm.
![7_image_0.png](7_image_0.png)
We report MRR and *Hit@10* over FB15K and FB15K-237 data, as shown in Table 3. As we can see, training the encoder without the proposed noisy Positive-Unlabeled framework will cause the performance drop, as this variant ignores the false negative/positive issues. Removing the label posterior-based neighbor sampling in the encoder will also cause performance degradation, as the information aggregation no longer distinguishes between true and false links. Such a variant can be easily influenced by the existence of false positive facts. Moreover, the last ablation result shows if the training process is further equipped with the self-training strategy, the performance will be enhanced, which verifies its effectiveness to select informative unlabeled samples for model training.
The Effect of PU Triple Construction. We then investigate the effect of PU Triple Construction on model performance, by varying different sizes of unlabeled samples from 10 to 50. Figure 5a shows that the performance improves as the number of unlabeled samples increases. Because more unlabeled samples can cover more false negative cases for model training. The training time grows linearly.
The Effect of Positive Class Prior α. The positive class prior α and true positive ratio β are two important hyperparameters. While β has a clear definition from real-world data, the specific value of α is unknown in advance. Figure 5b shows the model performance w.r.t. different values of α by grid search. Reasoning performance fluctuates a bit with different values of α since incorrect prior knowledge of α can bias the label posterior estimation and thus hurt the performance.
## 5 Related Work
Knowledge Graph Reasoning. Knowledge graph reasoning (KGR) aims to predict missing facts to automatically complete KG, including one-hop reasoning (Bordes et al., 2013) and multi-hop reasoning (Saxena et al., 2021). It has facilitated a wide spectrum of knowledge-intensive applications (Wang et al., 2018b, 2020; Saxena et al., 2021; Qian et al., 2019; Shao et al., 2020; Yang et al.,
2020; Yan et al., 2021b,a; Li et al., 2022; Wang et al., 2022a).To set the scope, we primarily focus on one-hop reasoning and are particularly interested in predicting missing entities in a partial fact.
Knowledge graph embeddings achieve state-of-theart performance (Bordes et al., 2013; Lin et al.,
2015; Yang et al., 2014; Trouillon et al., 2017; Sun et al., 2019; García-Durán et al., 2018; Wang et al.,
2022b, 2023). Recently, graph neural networks
(GNN) have been incorporated to enhance representation learning by aggregating neighborhood information (Schlichtkrull et al., 2017; Dettmers et al., 2018; Nguyen et al., 2018; Vashishth et al.,
2020; Li et al., 2021). However, most approaches significantly degrade when KG are largely incomplete and contain certain errors (Pujara et al., 2017), as they ignore the false negative/positive issues. Recent attempts on uncertain KG (Chen et al., 2019; Kertkeidkachorn et al., 2019; Chen et al., 2021)
measure the uncertainty score for facts, which can detect false negative/positive samples. However, they explicitly require the ground truth uncertainty scores for model training, which are usually unavailable in practice. Various negative sampling strategies have been explored to sample informative negative triples to facilitate model training (Cai and Wang, 2018; Zhang et al., 2019; Ahrabian et al., 2020). However, they cannot detect false negative/positive facts. We aim to mitigate the false negative/positive issues and enable the automatic detection of false negative/positive facts during model training.
Positive-Unlabeled Learning. Positive-Unlabeled
(PU) learning is a learning paradigm for training a model that only has access to positive and unlabeled data, where unlabeled data includes both positive and negative samples (Plessis et al., 2015; Bekker and Davis, 2018). PU learning roughly includes (1) two-step solutions (He et al., 2018; Jain et al., 2016); (2) methods that consider the unlabeled samples as negative samples with label noise (Shi et al., 2018); (3) unbiased risk estimation methods (Plessis et al., 2015; Tang et al., 2022). Recent work further studies the setting that there exists label noise in the observed positive samples (Jain et al., 2016). We formulate the KGR task on noisy and incomplete KG as a noisy Positive-Unlabeled learning problem and propose a variational framework for it, which relates to two-step solutions and unbiased risk estimation methods.
## 6 Conclusion
We studied speculative KG reasoning based on sparse and unreliable observations, which contains both *false negative issue* and false positive issue. We formulated the task as a noisy PositiveUnlabeled learning problem and proposed a variational framework nPUGraph to jointly update model parameters and estimate the posterior likelihood of collected/uncollected facts being true or false, where the underlying correctness is viewed as latent variables. During the training process, a label posterior-aware encoder and a self-training strategy were proposed to further address the false positive/negative issues. We found label posterior estimation plays an important role in moving toward speculative KG reasoning in reality, and the estimation can be fulfilled by optimizing an alternative objective without additional cost. Extensive experiments verified the effectiveness of nPUGraph on both benchmark KGs and Twitter interaction data with various degrees of data perturbations.
## Limitations
There are certain limitations that can be concerned for further improvements. First, the posterior inference relies on the prior estimation of positive class prior α and true positive ratio β. Our experiments show that a data-driven estimation based on end-to-end model training produces worse results than a hyperparameter grid search. An automatic prior estimation is desirable for real-world applications. Moreover, in nPUGraph, we approximate the probability of negative/positive facts being collected/uncollected via neural networks, which lacks a degree of interpretability. In the future, we plan to utilize a more explainable random process depending on entity/relation features to model the collection probability distribution.
## Ethical Impact
nPUGraph neither introduces any social/ethical bias to the model nor amplifies any bias in data.
Benchmark KG are publicly available. For Twitter interaction data, we mask all identity and privacy information for users, where only information related to user interactions with tweets and hashtags is presented. Our model is built upon public libraries in PyTorch. We do not foresee any direct social consequences or ethical issues.
## Acknowledgements
The authors would like to thank the anonymous reviewers for their valuable comments and suggestions. Research reported in this paper was sponsored in part by DARPA award HR001121C0165, DARPA award HR00112290105, DoD Basic Research Office award HQ00342110002, the Army Research Laboratory under Cooperative Agreement W911NF-17-20196. It was also supported in part by ACE, one of the seven centers in JUMP
2.0, a Semiconductor Research Corporation (SRC)
program sponsored by DARPA. The views and conclusions contained in this document are those of the author(s) and should not be interpreted as representing the official policies of DARPA and DoD Basic Research Office or the Army Research Laboratory. The US government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright notation hereon.
## References
Kian Ahrabian, Aarash Feizi, Yasmin Salehi, William L.
Hamilton, and Avishek Joey Bose. 2020. Structure aware negative sampling in knowledge graphs. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 6093–6101, Online. Association for Computational Linguistics.
Jessa Bekker and Jesse Davis. 2018. Learning from positive and unlabeled data: A survey. *CoRR*,
abs/1811.04820.
Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In *SIGMOD '08: Proceedings of* the 2008 ACM SIGMOD international conference on Management of data, pages 1247–1250.
Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko.
2013. Translating embeddings for modeling multirelational data. In *Advances in Neural Information* Processing Systems.
Liwei Cai and William Yang Wang. 2018. KBGAN:
Adversarial learning for knowledge graph embeddings. In *Proceedings of the 2018 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1470–1480,
New Orleans, Louisiana. Association for Computational Linguistics.
Xuelu Chen, Michael Boratko, Muhao Chen, Shib Sankar Dasgupta, Xiang Lorraine Li, and Andrew McCallum. 2021. Probabilistic box embeddings for uncertain knowledge graph reasoning.
In *Proceedings of the 2021 Conference of the* North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
Xuelu Chen, Muhao Chen, Weijia Shi, Yizhou Sun, and Carlo Zaniolo. 2019. Embedding uncertain knowledge graphs. In Proceedings of the ThirtyThird AAAI Conference on Artificial Intelligence and Thirty-First Innovative Applications of Artificial Intelligence Conference and Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI'19/IAAI'19/EAAI'19.
Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. In Proceedings of the ThirtySecond AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI'18/IAAI'18/EAAI'18.
Christiane Fellbaum. 1998. *WordNet: An Electronic* Lexical Database. Bradford Books.
Alberto García-Durán, Sebastijan Dumanciˇ c, and Math- ´
ias Niepert. 2018. Learning sequence encoders for temporal knowledge graph completion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.
Fengxiang He, Tongliang Liu, Geoffrey I. Webb, and Dacheng Tao. 2018. Instance-dependent PU
learning by bayesian optimal relabeling. *CoRR*,
abs/1808.02180.
Shantanu Jain, Martha White, and Predrag Radivojac.
2016. Estimating the class prior and posterior from noisy positives and unlabeled data. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS'16, page 2693–2701, Red Hook, NY, USA. Curran Associates Inc.
Natthawut Kertkeidkachorn, Xin Liu, and Ryutaro Ichise. 2019. Gtranse: Generalizing translationbased model on uncertain knowledge graph embedding. In Advances in Artificial Intelligence - Selected Papers from the Annual Conference of Japanese Society of Artificial Intelligence (JSAI 2019), Niigata, Japan, 4-7 June 2019, volume 1128 of *Advances in* Intelligent Systems and Computing, pages 170–178.
Jinning Li, Huajie Shao, Dachun Sun, Ruijie Wang, Yuchen Yan, Jinyang Li, Shengzhong Liu, Hanghang Tong, and Tarek Abdelzaher. 2022. Unsupervised belief representation learning with informationtheoretic variational graph auto-encoders. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '22, page 1728–1738.
Zixuan Li, Xiaolong Jin, Wei Li, Saiping Guan, Jiafeng Guo, Huawei Shen, Yuanzhuo Wang, and Xueqi Cheng. 2021. Temporal knowledge graph reasoning based on evolutional representation learning. In SIGIR.
Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In *AAAI'15*.
Dai Quoc Nguyen, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Phung. 2018. A novel embedding model for knowledge base completion based on convolutional neural network. In *Proceedings of the 16th* Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 327–333.
Marthinus Du Plessis, Gang Niu, and Masashi Sugiyama. 2015. Convex formulation for learning from positive and unlabeled data. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 1386–1394, Lille, France.
Jay Pujara, Eriq Augustine, and Lise Getoor. 2017. Sparsity and noise: Where knowledge graph embeddings fall short. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1751–1756, Copenhagen, Denmark. Association for Computational Linguistics.
Jianwei Qian, Xiang-Yang Li, Chunhong Zhang, Linlin Chen, Taeho Jung, and Junze Han. 2019. Social network de-anonymization and privacy inference with knowledge graph model. *IEEE Transactions on Dependable and Secure Computing*, pages 679–692.
Apoorv Saxena, Soumen Chakrabarti, and Partha Talukdar. 2021. Question answering over temporal knowledge graphs. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers).
Michael Sejr Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2017. Modeling relational data with graph convolutional networks. In *ESWC*, pages 593–607.
H. Shao, S. Yao, A. Jing, S. Liu, D. Liu, T. Wang, J. Li, C. Yang, R. Wang, and T. Abdelzaher. 2020. Misinformation detection and adversarial attack cost analysis in directional social networks. In *ICCCN'20*.
Hong Shi, Shaojun Pan, Jian Yang, and Chen Gong.
2018. Positive and unlabeled learning via loss decomposition and centroid estimation. In *Proceedings* of the 27th International Joint Conference on Artificial Intelligence, IJCAI'18, page 2689–2695. AAAI
Press.
Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. Rotate: Knowledge graph embedding by relational rotation in complex space. In *International* Conference on Learning Representations.
Ruijie Wang, Zheng Li, Danqing Zhang, Qingyu Yin, Tong Zhao, Bing Yin, and Tarek Abdelzaher. 2022a.
Rete: Retrieval-enhanced temporal event forecasting on unified query product evolutionary graph. In The Web Conference.
Zhenwei Tang, Shichao Pei, Zhao Zhang, Yongchun Zhu, Fuzhen Zhuang, Robert Hoehndorf, and Xiangliang Zhang. 2022. Positive-unlabeled learning with adversarial data augmentation for knowledge graph completion. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, pages 2248–2254. International Joint Conferences on Artificial Intelligence Organization. Main Track.
Ruijie Wang, Yuchen Yan, Jialu Wang, Yuting Jia, Ye Zhang, Weinan Zhang, and Xinbing Wang. 2018b.
Acekg: A large-scale knowledge graph for academic data mining. In *CIKM*.
Ruijie Wang, zheng li, Dachun Sun, Shengzhong Liu, Jinning Li, Bing Yin, and Tarek Abdelzaher. 2022b.
Learning to sample and aggregate: Few-shot reasoning over temporal knowledge graphs. In *Advances in* Neural Information Processing Systems.
Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon.
2015. Representing text for joint embedding of text and knowledge bases. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1499–1509, Lisbon, Portugal. Association for Computational Linguistics.
Yuchen Yan, Lihui Liu, Yikun Ban, Baoyu Jing, and Hanghang Tong. 2021a. Dynamic knowledge graph alignment. In *Proceedings of the AAAI Conference* on Artificial Intelligence, volume 35.
Théo Trouillon, Christopher R. Dance, Éric Gaussier, Johannes Welbl, Sebastian Riedel, and Guillaume Bouchard. 2017. Knowledge graph completion via complex tensor factorization. *J. Mach. Learn. Res.*
Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and li Deng. 2014. Embedding entities and relations for learning and inference in knowledge bases. In ICLR.
Shikhar Vashishth, Soumya Sanyal, Vikram Nitin, and Partha Talukdar. 2020. Composition-based multirelational graph convolutional networks. In *International Conference on Learning Representations*.
Chaoqi Yang, Jinyang Li, Ruijie Wang, Shuochao Yao, Huajie Shao, Dongxin Liu, Shengzhong Liu, Tianshi Wang, and Tarek F. Abdelzaher. 2020. Hierarchical overlapping belief estimation by structured matrix factorization. In *ASONAM'20*.
Haiwen Wang, Ruijie Wang, Chuan Wen, Shuhao Li, Yuting Jia, Weinan Zhang, and Xinbing Wang. 2020.
Author name disambiguation on heterogeneous information network with adversarial representation learning. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI
2020, New York, NY, USA, February 7-12, 2020, pages 238–245. AAAI Press.
Hongwei Wang, Fuzheng Zhang, Jialin Wang, Miao Zhao, Wenjie Li, Xing Xie, and Minyi Guo. 2018a.
Ripplenet: Propagating user preferences on the knowledge graph for recommender systems. In CIKM '18.
Ruijie Wang, Zijie Huang, Shengzhong Liu, Huajie Shao, Dongxin Liu, Jinyang Li, Tianshi Wang, Dachun Sun, Shuochao Yao, and Tarek Abdelzaher.
2021. Dydiff-vae: A dynamic variational framework for information diffusion prediction. In *SIGIR'21*.
Ruijie Wang, Zheng Li, Jingfeng Yang, Tianyu Cao, Chao Zhang, Bing Yin, and Tarek Abdelzaher. 2023.
Mutually-paced knowledge distillation for crosslingual temporal knowledge graph reasoning. In *Proceedings of the ACM Web Conference 2023*, WWW
'23, page 2621–2632.
Yuchen Yan, Si Zhang, and Hanghang Tong. 2021b.
Bright: A bridging algorithm for network alignment.
In *Proceedings of the Web Conference 2021*.
Yongqi Zhang, Quanming Yao, Yingxia Shao, and Lei Chen. 2019. Nscaching: Simple and efficient negative sampling for knowledge graph embedding. In 2019 IEEE 35th International Conference on Data Engineering (ICDE), pages 614–625. IEEE.
## A Appendix A.1 Theorem Proof
Theorem 2. *The log-likelihood of the complete* data log p(S) *is lower bounded as follows:*
log p(S) ≥ E q(Y) [log p(S|Y)] − KL(q(Y)∥p(Y)) = E sl∈SL hw llog[ϕ l 1 (s l)] + (1 − w l) log[ϕ l 0 (s l)]i + E su∈SU -(w ulog[ϕ u 1 (s u)] + (1 − w u) log[ϕ u 0 (s u)] − KL(WU ∥W˜ U ) − KL(WL∥W˜ L) − ∥WL∥1 |SL|− ∥WU ∥1 |SU |, (13)
where S *denotes all labeled/unlabeled triples,* Y
is the corresponding latent variable indicating the positive/negative labels for triples, WU = {w u i}
denotes the point-wise probability for the uncollected triples being positive, WL = {w l i} denotes the probability for the collected triples being positive.
Proof. Let log p(S) denote the log-likelihood of all potential triples being collected in the KG or not, Y denote the corresponding latent variable indicating the positive/negative labels. We aim to infer the label posterior p(Y|S), which can be approximated by q(Y|S). We therefore are interested at the difference between the two, measured by the Kullback–Leibler (KL) divergence as follows:
$$\mathbb{KL}(q(\mathbf{Y}|\mathbf{S})\|p(\mathbf{Y}|\mathbf{S}))=-\underset{q(\mathbf{Y}|\mathbf{S})}{\mathbb{E}}\log\left(\frac{p(\mathbf{S}|\mathbf{Y})p(\mathbf{Y})}{q(\mathbf{Y}|\mathbf{S})}\right)+\log p(\mathbf{S}),\tag{14}$$
(14)
as KL divergence is positive, we derive the lower bound of the log-likelihood as follows:
$$\log p(\mathbf{S})\geq\mathop{\mathbb{E}}_{q(\mathbf{Y}|\mathbf{S})}\log\left(\frac{p(\mathbf{S}|\mathbf{Y})p(\mathbf{Y})}{q(\mathbf{Y}|\mathbf{S})}\right)$$ $$\geq\mathop{\mathbb{E}}_{q(\mathbf{Y}|\mathbf{S})}\log p(\mathbf{S}|\mathbf{Y})-\mathop{\mathbb{KL}}(q(\mathbf{Y}|\mathbf{S})\|p(\mathbf{Y}))-\mathop{\mathbb{E}}_{p(\mathbf{S})}q(\mathbf{Y}|\mathbf{S}),$$
$${\mathrm{(15)}}$$
which consists of three terms: triple collection probability measure, KL term and regularization of label posterior (positive). We discuss each term in detail.
Recall that the distribution of labeled triples can be represented as follows:
```
s
l ∼ βϕl1
(s
l) + (1 − β)ϕ
l
0
(s
l), (16)
```
where ϕ ly denotes the probability of being collected over triple space S for the positive class (y = 1)
and negative class (y = 0), and β ∈ [0, 1) denotes the proportion of true positive samples in labeled data. Similarly, considering the existence of unlabeled positive triples, the distribution of unlabeled samples can be represented as follows:
$$s^{u}\sim\alpha\phi_{1}^{u}(s^{u})+(1-\alpha)\phi_{0}^{u}(s^{u}),$$
u), (17)
where ϕ u y = 1 − ϕ ly denotes the probability of being uncollected, α ∈ [0, 1) is the class prior or the proportion of positive samples in unlabeled data. Based on that, the first term can be detailed as follows:
E q(Y|S) log p(S|Y) = E sl∈SLE y∈{0,1} q(y|s l) log p(s l |y) + E su∈SUE y∈{0,1} q(y|s l) log p(s l |y) = E sl∈SL hq(y = 1|s l) log[p(s l |y = 1))] + q(y = 0|s l) log[p(s l |y = 0))] + E su∈SU -q(y = 1|s u) log[p(s u |y = 1))] + q(y = 0|s u) log[p(s u |y = 0))] = E sl∈SL hw llog[ϕ l 1 (s l)] + (1 − w l) log[ϕ l 0 (s l)]i (18) + E su∈SU -(w ulog[ϕ u 1 (s u)] + (1 − w u) log[ϕ u 0 (s u)],
where WU = {w u} denotes the point-wise probability for the uncollected triples being positive q(y = 1|s u), WL = {w l} denotes the probability for the collected triples being positive q(y = 1|s l).
We view WU and WL as free parameters and regularize them by W˜ U and W˜ L, which are estimated posterior probability as follows:
$$\tilde{w}_{i}^{l}=\frac{\beta\phi_{1}^{l}(s_{i}^{l})}{\beta\phi_{1}^{l}(s_{i}^{l})+(1-\beta)\phi_{0}^{l}(s_{i}^{l})},$$
$$(19)$$
$$(20)$$
, (19)
$$\tilde{w}_{i k}^{u}=\frac{\alpha\phi_{1}^{u}(s_{i k}^{u})}{\alpha\phi_{1}^{u}(s_{i k}^{u})+(1-\alpha)\phi_{0}^{u}(s_{i k}^{u})}.$$
. (20)
Therefore, the KL term in Eq. (15) becomes $\mathbb{KL}(\mathbf{W}^{U}\|\tilde{\mathbf{W}}^{U})+\mathbb{KL}(\mathbf{W}^{L}\|\tilde{\mathbf{W}}^{L})$. Last but not least, the third term $\underset{p(\mathbf{S})}{\mathbb{E}}q(\mathbf{Y}|\mathbf{S})=$
∥WU ∥1/|SU | + ∥WL∥1/|SL| regulates the total number of potential triples (including both collected ones and uncollected ones), because of the sparsity nature of graphs. Finally, we derive the lower bound of the log-likelihood, as shown in Eq (13).
## A.2 Datasets
A.2.1 Dataset Information We evaluate our proposed model based on three widely used knowledge graphs and one Twitter ineraction graph:
- **FB15K** (Bordes et al., 2013) is a subset of Freebase (Bollacker et al., 2008), a large database containing general knowledge facts with a variety of relation types;
- **FB15K-237** (Toutanova et al., 2015) is a reduced version of FB15K, where inverse relations are removed;
- **WN18** (Bordes et al., 2013) is a subset of WorldNet (Fellbaum, 1998), a massive lexical English database that captures semantic relations between words;
- **Twitter** is an interaction graph relevant with Russo-Ukrainian War. Data is collected in the Twitter platform from May 1, 2022, to December 25, 2022, which records user-tweet interactions and user-hashtag interactions. Thus, the graph is formed by two relations (user-tweet and user-hashtag) and multiple entities, which can be categorized into three types (user, tweet, and hashtag). Following (Wang et al., 2021, 2022a), when constructing the graph, thresholds will be selected to remove inactive users, tweets, and hashtags according to their occurrence frequency. We set the thresholds for the user, tweet, and hashtag as 30, 30, and 10, respectively, i.e., if a tweet has fewer than 30 interactions with users, it will be regarded as inactive and removed from the graph. Besides, the extremely frequent users and tweets are deleted as they may be generated by bots.
For all datasets, we first merge the training set and validation set as a whole. Then we simulate noisy and incomplete graphs for the training process by adding various proportions of false negative/positive cases in the merged set. After that, we partition the simulated graphs into new train/valid sets with a ratio of 7 : 3 and the test set remains the same. Table 1 provides an overview of the statistics of the simulated datasets corresponding to various perturbation rates and based graphs.
## A.2.2 Dataset Perturbation
Data perturbation aims to simulate noisy and incomplete graphs from clean benchmark knowledge graphs. It consists of two aspects: First, to simulate the *false negative issue*, it randomly removes some existing links in a graph, considering the removed links as missing but potentially plausible facts. Second, to simulate the *false positive issue*, it randomly adds spurious links to the graph as unreliable or outdated facts.
We define perturbation rate, i.e., *ptb_rate* , as a proportion of modified edges in a graph to control the amount of removing positive links and adding negative links. For example, if a graph has 100 links and the perturbation rate is 0.5, then we will randomly convert the positivity or negativity of 50 links. Among these 50 modified links, 10%
of them will be added and the rest of them will be removed, i.e., we will randomly add 5 negative links and remove 45 positive links to generate a perturbed graph. The perturbed graph can be regarded as noisy and incomplete, leading to significant performance degradation. In our experiments, we set *ptb_rate* in a range of {0.1, 0.3, 0.5, 0.7}.
The detailed perturbation process is summarized in Figure 6.
## A.3 Baselines
We describe the baseline models utilized in the experiments in detail:
- **TransE**2(Bordes et al., 2013) is a translationbased embedding model, where both entities and relations are represented as vectors in the latent space. The relation is utilized as a translation operation between the subject and the object entity;
- **TransR** (Lin et al., 2015) advances TransE by optimizing modeling of n-n relations, where each entity embedding can be projected to hyperplanes defined by relations;
- **DistMult**3(Yang et al., 2014) is a general framework with the bilinear objective for multirelational learning that unifies most multirelational embedding models;
- **ComplEx** (Trouillon et al., 2017) introduces complex embeddings, which can effectively capture asymmetric relations while retaining the efficiency benefits of the dot product;
![13_image_0.png](13_image_0.png)
- **R-GCN**4(Schlichtkrull et al., 2017) uses relation-specific weight matrices that are defined as linear combinations of a set of basis matrices;
- **CompGCN**5(Vashishth et al., 2020) is a framework for incorporating multi-relational information in graph convolutional networks to jointly embeds both nodes and relations in a graph;
- **UKGE**6(Chen et al., 2019) learns embeddings according to the confidence scores of uncertain relation facts to preserve both structural and uncertainty information of facts in the embedding space;
- **NSCaching**7(Zhang et al., 2019) is an inexpensive negative sampling approach by using cache to keep track of high-quality negative triplets, which have high scores and rare;
- **SANS**8(Ahrabian et al., 2020) utilizes the rich graph structure by selecting negative samples from a node's k-hop neighborhood for negative sampling without additional parameters and difficult adversarial optimization;
- **PUDA**9(Tang et al., 2022) is a KGC method to circumvent the impact of the false negative issue by tailoring positive unlabeled risk estimator and address the data sparsity issue by unifying adversarial training and PU learning under the positive-unlabeled minimax game.
4https://github.com/JinheonBaek/RGCN
5https://github.com/malllabiisc/CompGCN
6https://github.com/stasl0217/UKGE 7https://github.com/AutoML-Research/NSCaching 8https://github.com/kahrabian/SANS
9https://github.com/lilv98/PUDA
## A.4 Reproducibility A.4.1 Baseline Setup
All baseline models and nPUGraph are trained on the perturbed training set and validated on the perturbed valid set. We utilize MRR on the valid set to determine the best models and evaluate them on the clean test set. For uncertain knowledge graph embedding methods UKGE (Chen et al., 2019), since the required uncertainty scores are unavailable, we set the scores for triples in training set as 1 and 0 otherwise. The predicted uncertainty scores produced by UKGE are utilized to rank the potential triples for ranking evaluation. We train all baseline models and nPUGraph on the same GPUs (GeForce RTX 3090) and CPUs (AMD Ryzen Threadripper 3970X 32-Core Processor).
## A.4.2 Npugraph Setup
For model training, we utilize Adam optimizer and set the maximum number of epochs as 200. Within the first 50 epochs, we disable self-training and focus on learning suboptimal model parameters on noisy and incomplete data. After that, we start the self-training strategy, where the latest label posterior estimation is utilized to sample neighbors for the encoder and select informative unlabeled samples for model training. We set batch size as 256, the dimensions of all embeddings as 128, and the dropout rate as 0.5. For the sake of efficiency, we employ 1 neighborhood aggregation layer in the encoder.
For the setting of hyperparameter, we mainly tune positive class prior α in the range of {1e −
1, 5e − 2, 1e − 2, 5e − 3, 1e − 3, 5e − 4, 1e −
4, 5e − 5}; true positive ratio β in the range of
{0.3, 0.2, 0.1, 0.005, 0.001}; learning rate in the 2453
| Dataset | Twitter | | | |
|--------------------------------------------|-----------|---------|--------|--------|
| Metrics | MRR | HIT@100 | HIT@50 | HIT@30 |
| Random | 0.001 | 0.006 | 0.003 | 0.002 |
| Knowledge graph embedding methods | | | | |
| TransE | 0.010 | 0.091 | 0.058 | 0.041 |
| TransR | 0.009 | 0.078 | 0.048 | 0.033 |
| DistMult | 0.021 | 0.091 | 0.065 | 0.052 |
| ComplEx | 0.022 | 0.089 | 0.064 | 0.051 |
| RotatE | 0.022 | 0.1115 | 0.077 | 0.059 |
| Graph neural network methods on KG | | | | |
| RGCN | 0.005 | 0.054 | 0.029 | 0.019 |
| CompGCN | 0.014 | 0.089 | 0.059 | 0.044 |
| Uncertain knowledge graph embedding method | | | | |
| UKGE | 0.011 | 0.072 | 0.053 | 0.033 |
| Negative sampling methods | | | | |
| NSCaching | 0.012 | 0.095 | 0.060 | 0.043 |
| SANS | 0.019 | 0.104 | 0.070 | 0.054 |
| Positive-Unlabeled learning method | | | | |
| PUDA | 0.013 | 0.082 | 0.057 | 0.044 |
| nPUGraph | 0.030* | 0.127* | 0.096* | 0.074* |
| Gains (%) | 38.2 | 13.9 | 25.3 | 26.5 |
range of {0.02, 0.01, 0.005, 0.001, 0.0005}; the number of sampled unlabeled triples for each labeled one in the range of {50, 40, 30, 20, 10}. We will publicly release our code and data upon acceptance.
## A.5 Experiments A.5.1 Experimental Results On Twitter Data
We discuss the model performance on noisy and incomplete Twitter data with *ptb_rate = 0.3* in this section, which is shown in Table 4. According to the result of Random, we can infer that all relations on Twitter are n − n, where relations can be 1 − n, n−1, and n−n for benchmark KG. Therefore, link prediction is more challenging for Twitter data and we adopt Hits at 30, 50, 100 as evaluation metrics, instead.
For Twitter data, nPUGraph achieves impressive performance compared with the baseline models, with 25.98% relative improvement on average. The results of Twitter data support the robustness of nPUGraph, which can mitigate false negative/positive issues not only in benchmark KG but also in the real-world social graph.
## A.5.2 Experimental Results Under Different Perturbation Rates
The experimental results under perturbation rates 0.1, 0.5, and 0.7 are shown in Table 5, Table 6, and Table 7, respectively. nPUGraph outperforms all baseline models for various perturbation rates, demonstrating that nPUGraph can mitigate false negative/positive issues on knowledge graphs with different degrees of noise and incompleteness. Notably, comparing these three tables, the relative improvements are more significant under higher ptb_rate in most cases, showing stronger robustness for nPUGraph on graphs with more false negative/positive facts.
Table 5: Overall performance on noisy and incomplete graphs, with *ptb_rate = 0.1*. Average results on 5 independent runs are reported. ∗ indicates the statistically significant results over baselines, with p-value < 0.01. The best results are in boldface, and the strongest baseline performance is underlined.
| Dataset | FB15K | FB15K-237 | WN18 | | | | | | | | | |
|--------------------------------------------|---------|-------------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| Metrics | MRR | H@10 | H@3 | H@1 | MRR | H@10 | H@3 | H@1 | MRR | H@10 | H@3 | H@1 |
| Knowledge graph embedding methods | | | | | | | | | | | | |
| TransE | 0.419 | 0.666 | 0.507 | 0.280 | 0.245 | 0.441 | 0.284 | 0.144 | 0.318 | 0.646 | 0.579 | 0.044 |
| TransR | 0.398 | 0.658 | 0.494 | 0.250 | 0.246 | 0.434 | 0.280 | 0.153 | 0.319 | 0.646 | 0.575 | 0.051 |
| DistMult | 0.516 | 0.719 | 0.578 | 0.407 | 0.271 | 0.446 | 0.297 | 0.185 | 0.524 | 0.686 | 0.610 | 0.418 |
| ComplEx | 0.503 | 0.711 | 0.570 | 0.390 | 0.276 | 0.459 | 0.305 | 0.186 | 0.612 | 0.702 | 0.643 | 0.560 |
| RotatE | 0.544 | 0.732 | 0.608 | 0.441 | 0.292 | 0.480 | 0.324 | 0.199 | 0.613 | 0.691 | 0.642 | 0.567 |
| Graph neural network methods on KG | | | | | | | | | | | | |
| RGCN | 0.196 | 0.372 | 0.209 | 0.110 | 0.169 | 0.317 | 0.177 | 0.097 | 0.483 | 0.625 | 0.554 | 0.396 |
| CompGCN | 0.460 | 0.677 | 0.525 | 0.343 | 0.293 | 0.475 | 0.324 | 0.203 | 0.608 | 0.686 | 0.636 | 0.564 |
| Uncertain knowledge graph embedding method | | | | | | | | | | | | |
| UKGE | 0.338 | 0.425 | 0.321 | 0.233 | 0.231 | 0.411 | 0.204 | 0.110 | 0.381 | 0.541 | 0.407 | 0.331 |
| Negative sampling method | | | | | | | | | | | | |
| NSCaching | 0.495 | 0.689 | 0.557 | 0.390 | 0.153 | 0.305 | 0.167 | 0.080 | 0.434 | 0.542 | 0.470 | 0.374 |
| SANS | 0.422 | 0.649 | 0.493 | 0.298 | 0.271 | 0.453 | 0.301 | 0.182 | 0.619 | 0.702 | 0.644 | 0.574 |
| Positive-Unlabeled learning method | | | | | | | | | | | | |
| PUDA | 0.493 | 0.713 | 0.559 | 0.377 | 0.271 | 0.443 | 0.298 | 0.185 | 0.520 | 0.667 | 0.608 | 0.419 |
| nPUGraph | 0.561* | 0.791* | 0.621* | 0.449* | 0.328* | 0.535* | 0.343* | 0.221* | 0.630* | 0.754* | 0.671* | 0.599* |
| Gains (%) | 3.0 | 8.1 | 2.1 | 1.9 | 12.0 | 11.4 | 5.9 | 8.7 | 1.8 | 7.4 | 4.1 | 4.4 |
Table 6: Experimental results under *ptb_rate = 0.5*.
Dataset FB15K FB15K-237 **WN18**
Metrics MRR H@10 H@3 H@1 MRR H@10 H@3 H@1 **MRR H@10 H@3 H@1**
Knowledge graph embedding methods
TransE 0.279 0.540 0.363 0.136 0.151 0.342 0.190 0.053 0.158 0.337 0.285 0.018
TransR 0.256 0.500 0.323 0.127 0.141 0.294 0.160 0.064 0.150 0.327 0.267 0.020
DistMult 0.328 0.536 0.372 0.224 0.210 0.366 0.228 0.133 0.279 0.370 0.319 0.224
ComplEx 0.322 0.525 0.365 0.220 0.201 0.358 0.220 0.123 0.314 0.373 0.335 0.280
RotatE 0.350 0.547 0.398 0.249 0.227 0.387 0.246 0.149 0.307 0.377 0.333 0.266
Graph neural network methods on KG
RGCN 0.134 0.271 0.142 0.065 0.116 0.234 0.117 0.058 0.253 0.323 0.287 0.209
CompGCN 0.378 0.600 0.429 0.266 0.223 0.377 0.240 0.149 0.345 0.315 0.336 0.279
Uncertain knowledge graph embedding method
UKGE 0.257 0.299 0.213 0.088 0.143 0.284 0.172 0.053 0.228 0.297 0.210 0.115
Negative sampling method
NSCaching 0.272 0.454 0.310 0.179 0.176 0.297 0.190 0.115 0.123 0.182 0.133 0.093
SANS 0.335 0.545 0.387 0.224 0.225 0.382 0.244 0.147 0.313 0.379 0.337 0.275
Positive-Unlabeled learning method
PUDA 0.329 0.525 0.369 0.229 0.202 0.347 0.217 0.131 0.231 0.329 0.264 0.178
nPUGraph 0.417* 0.663* 0.470* 0.291* 0.258* 0.433* 0.285* 0.171* **0.373* 0.443* 0.379* 0.327***
Gains (%) 10.4 10.5 9.6 9.6 13.9 12.0 16.1 14.8 8.2 16.9 12.6 *16.9*
Table 7: Experimental results under *ptb_rate = 0.7*.
| Dataset | FB15K | FB15K-237 | WN18 | | | | | | | | | |
|--------------------------------------------|---------|-------------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| Metrics | MRR | H@10 | H@3 | H@1 | MRR | H@10 | H@3 | H@1 | MRR | H@10 | H@3 | H@1 |
| Knowledge graph embedding methods | | | | | | | | | | | | |
| TransE | 0.219 | 0.457 | 0.294 | 0.087 | 0.104 | 0.273 | 0.128 | 0.020 | 0.114 | 0.229 | 0.199 | 0.020 |
| TransR | 0.195 | 0.396 | 0.248 | 0.087 | 0.096 | 0.217 | 0.108 | 0.036 | 0.096 | 0.207 | 0.167 | 0.016 |
| DistMult | 0.250 | 0.425 | 0.282 | 0.163 | 0.173 | 0.308 | 0.186 | 0.105 | 0.181 | 0.240 | 0.211 | 0.143 |
| ComplEx | 0.241 | 0.408 | 0.271 | 0.157 | 0.158 | 0.290 | 0.170 | 0.092 | 0.202 | 0.243 | 0.219 | 0.178 |
| RotatE | 0.271 | 0.439 | 0.309 | 0.185 | 0.198 | 0.337 | 0.212 | 0.129 | 0.192 | 0.250 | 0.212 | 0.159 |
| Graph neural network methods on KG | | | | | | | | | | | | |
| RGCN | 0.129 | 0.229 | 0.134 | 0.075 | 0.086 | 0.178 | 0.086 | 0.040 | 0.166 | 0.215 | 0.191 | 0.135 |
| CompGCN | 0.354 | 0.578 | 0.402 | 0.243 | 0.193 | 0.325 | 0.204 | 0.129 | 0.212 | 0.262 | 0.229 | 0.183 |
| Uncertain knowledge graph embedding method | | | | | | | | | | | | |
| UKGE | 0.186 | 0.358 | 0.199 | 0.076 | 0.133 | 0.201 | 0.115 | 0.075 | 0.099 | 0.176 | 0.153 | 0.116 |
| Negative sampling method | | | | | | | | | | | | |
| NSCaching | 0.174 | 0.309 | 0.194 | 0.105 | 0.157 | 0.276 | 0.169 | 0.099 | 0.057 | 0.085 | 0.062 | 0.041 |
| SANS | 0.292 | 0.478 | 0.332 | 0.196 | 0.201 | 0.342 | 0.216 | 0.131 | 0.202 | 0.254 | 0.222 | 0.171 |
| Positive-Unlabeled learning method | | | | | | | | | | | | |
| PUDA | 0.254 | 0.421 | 0.283 | 0.170 | 0.165 | 0.291 | 0.176 | 0.104 | 0.109 | 0.171 | 0.127 | 0.077 |
| nPUGraph | 0.365* | 0.600* | 0.427* | 0.277* | 0.243* | 0.390* | 0.266* | 0.155* | 0.247* | 0.303* | 0.257* | 0.209* |
| Gains (%) | 3.0 | 3.8 | 6.3 | 13.9 | 20.9 | 14.1 | 23.0 | 18.5 | 16.8 | 15.5 | 12.3 | 14.4 |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 7
✓ A2. Did you discuss any potential risks of your work?
Section 8
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Not applicable. Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Not applicable. Left blank.
## C ✓ **Did You Run Computational Experiments?** Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 4.4.2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4.1, Section A.4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4, Section A.5 C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
li-etal-2023-across | {ACROSS}: An Alignment-based Framework for Low-Resource Many-to-One Cross-Lingual Summarization | https://aclanthology.org/2023.findings-acl.154 | This research addresses the challenges of Cross-Lingual Summarization (CLS) in low-resource scenarios and over imbalanced multilingual data. Existing CLS studies mostly resort to pipeline frameworks or multi-task methods in bilingual settings. However, they ignore the data imbalance in multilingual scenarios and do not utilize the high-resource monolingual summarization data. In this paper, we propose the Aligned CROSs-lingual Summarization (ACROSS) model to tackle these issues. Our framework aligns low-resource cross-lingual data with high-resource monolingual data via contrastive and consistency loss, which help enrich low-resource information for high-quality summaries. In addition, we introduce a data augmentation method that can select informative monolingual sentences, which facilitates a deep exploration of high-resource information and introduce new information for low-resource languages. Experiments on the CrossSum dataset show that ACROSS outperforms baseline models and obtains consistently dominant performance on 45 language pairs. | # Across: An Alignment-Based Framework For Low-Resource Many-To-One Cross-Lingual Summarization
Peiyao Li1 Zhengkun Zhang1 Jun Wang2 Liang Li3 Adam Jatowt4 **Zhenglu Yang**1∗
1TKLNDST, CS, Nankai University, China 2Shandong Key Laboratory of Language Resource Development and Application, College of Mathematics and Statistics Science, Ludong University 3Nayuan Technology Co., Ltd. 4University of Innsbruck, Austria
{peiyaoli, zhangzk2017, junwang}@mail.nankai.edu.cn [email protected], [email protected], [email protected]
## Abstract
This research addresses the challenges of CrossLingual Summarization (CLS) in low-resource scenarios and over imbalanced multilingual data. Existing CLS studies mostly resort to pipeline frameworks or multi-task methods in bilingual settings. However, they ignore the data imbalance in multilingual scenarios and do not utilize the high-resource monolingual summarization data. In this paper, we propose the Aligned **CROS**s-lingual Summarization (ACROSS) model to tackle these issues. Our framework aligns lowresource cross-lingual data with high-resource monolingual data via contrastive and consistency loss, which help enrich low-resource information for high-quality summaries. In addition, we introduce a data augmentation method that can select informative monolingual sentences, which facilitates a deep exploration of high-resource information and introduce new information for low-resource languages. Experiments1 on the CrossSum dataset show that ACROSS outperforms baseline models and obtains consistently dominant performance on 45 language pairs.
## 1 Introduction
Given a source document, Cross-Lingual Summarization (CLS) aims to generate a summary in a different language. Therefore, CLS helps users quickly understand news outlines written in foreign, unknown to them, languages. Early CLS approaches typically use pipeline frameworks (Leuski et al., 2003; Orasan and Chiorean, 2008), which are intuitive but suffer from the problem of error cascading. Researchers have recently turned to endto-end models (Zhu et al., 2019, 2020; Bai et al.,
2021) that are immune to this problem. However, these studies are limited to bilingual learning and do not conform to the reality of multilingual scenarios.
∗Corresponding author.
1https://github.com/Youggls/ACROSS-ACL23
![0_image_0.png](0_image_0.png)
Toyota will recall 6.4 million vehicles worldwide. It is because the helical wire connected to the driver's side airbag is flawed.
Given that the real-world news is written in diverse languages and that only a few researchers have explored the multilingual scenarios, we investigate the many-to-one CLS scenario to meet realistic demands. As stated before, CLS data can be viewed as low-resource since parallel CLS data is significantly less abundant than monolingual data (Zhu et al., 2019). The low-resource characteristic of CLS data is further amplified in multilingual scenarios. However, directly training an end-to-end model does not perform well due to the ineffective use of high-resource data and the scarcity of lowresource data. The foremost challenges are how to model cross-lingual semantic correlations in multilingual scenarios and introduce new knowledge to low-resource languages.
To tackle the above challenges, we investigate a novel yet intuitive idea of cross-lingual alignment.
The cross-lingual alignment method can extract deep semantic relations across languages. As portrayed in Figure 1, the materials in three languages
(i.e., French, Chinese, and English) express similar semantics. We can align all these languages for deep cross-lingual semantic knowledge, which is crucial for refining crosslingual materials over different languages for generating high-quality summaries. Moreover, we also consider devising a novel data augmentation (DA) method to introduce new knowledge to low-resource languages.
To investigate the two hypotheses, we introduce a novel many-to-one CLS model for lowresource learning called Aligned **CROS**s-lingual Summarization (ACROSS), which improves the performance of low-resource scenarios by effectively utilizing the abundant high-resource data.
This model conducts cross-lingual alignments both at the model and at the data levels. From the model perspective, we minimize the difference between the cross-lingual and monolingual representations via contrastive and consistency learning (He et al.,
2020; Pan et al., 2021; Li et al., 2021, 2022). This helps to facilitate a solid alignment relationship between low-resource and high-resource language.
From the data perspective, we propose a novel data augmentation method that selects informative sentences from monolingual summarization (MLS)
pairs, which aims to introduce new knowledge for low-resource language.
We conducted experiments on the CrossSum dataset (Hasan et al., 2021), which contains crosslingual summarization pairs in 45 languages. The results show that ACROSS outperforms the baseline models and achieves strong improvements in most language pairs (2.3 average improvement in ROUGE scores).
Our contributions are as follows:
- We propose a novel many-to-one summarization model that aligns cross-lingual and monolingual representations to enrich low-resource data.
- We introduce a data augmentation method to extract high-resource knowledge which is later transferred and which facilitates lowresource learning.
- An extensive experimental evaluation validate the low-resource CLS performance of our model in both quantitative and qualitative ways.
## 2 Related Work
Early CLS research typically used pipeline methods, such as the translate-then-summarize (Leuski et al., 2003; Ouyang et al., 2019) or summarizethen-translate methods (Orasan and Chiorean, 2008; Wan et al., 2010; Yao et al., 2015; Zhang et al., 2016), which are sensitive to error cascading that causes their subpar performance.
Thanks to the development of the transformerbased methods (Vaswani et al., 2017), researchers introduced teacher-student frameworks (Shen et al.,
2018; Duan et al., 2019) wherein the CLS task can be approached via an encoder-decoder model.
Thereafter, the multi-task framework started to be popular in this field (Zhu et al., 2019, 2020; Bai et al., 2021). Recently, researchers have begun to investigate how to fuse translation and summarization tasks into a unified model to improve the performance on the CLS tasks (Liang et al., 2022; Takase and Okazaki, 2022; Bai et al., 2022; Nguyen and Luu, 2022; Jiang et al., 2022). For example, Bai et al. (2022) considered compression so that their model can handle both the CLS and translation tasks at different compression rates.
Focusing on multi-task learning, these multi-task studies attempt to improve CLS performance using machine translation (MT) and MLS tasks in bilingual settings. However, such approaches still establish implicit connections among languages and leave aside the information of high-resource data.
Hasan et al. (2021) recognized the limitations of the above-mentioned scenarios. They proposed a new dataset, CrossSum, in multilingual scenarios and introduced a method balancing the number of different language pairs in a batch, which could alleviate the uneven distribution of training samples and balance performance in different languages.
However, deep semantical correlations across languages as well as abundant information from highresource data have not been investigated.
In contrast to the aforementioned methods, ACROSS introduces cross-lingual alignment and a novel data augmentation method, which can improve low-resource performance from both model and data perspectives.
## 3 **Aligned Cross-Lingual Summarization**
In this section, we explain the details of ACROSS.
ACROSS introduces alignment constraints at both
CrossEntropy Loss MonoLingual Encoder
![2_image_0.png](2_image_0.png)
MonoLingual Decoder
ା **= sim( )**
ି **= sim( )**
## 3.1 Preliminary
Mono-Lingual Abstractive Summarization.
Given a document DA = {x A 1
, xA
2
, ..., xA
n }
written in language A, a monolingual abstractive summarization model induces a summary S
A = {y A
1
, yA
2
, ..., yAm} by minimizing the loss function as follows:
$$\mathcal{L}_{\text{abs}}=-\sum_{t=1}^{n}\log P(y_{t}^{A}|y_{<t}^{A},D^{A},\boldsymbol{\theta}_{\text{mls}}),\tag{1}$$ where $n$ and $m$ are the lengths of the input document
where n and m are the lengths of the input document and output summary, respectively, and θmls is the parameter of the monolingual summarization model.
Cross-Lingual Abstractive Summarization.
Different from monolingual abstractive summarization models, a cross-lingual abstractive summarization model generates a summary S
B = {y B
1
, yB
2
, ..., yBm} in language B when given a source document DA = {x A 1
, xA
2
, ..., xA
n } in language A. The loss function of the CLS model can be formulated as:
can be formulated as: $$\mathcal{L}_{\text{cls}}=-\sum_{t=1}^{n}\log P(y_{t}^{B}|y_{<t}^{B},D^{A},\theta_{\text{cls}}),\tag{2}$$ where $\theta_{\text{cls}}$ is the parameter of the CLS model.
## 3.2 Cross-Lingual Alignment
Cross-Lingual Contrastive Learning for Encoder. Multilingual transformer treats all languages equally, which leads to the representation of
different languages being distributed over different
spaces, eventually making it difficult for CLS tasks to take advantage of the high-resource monolingual data. Therefore, we should encourage the model to
improve cross-lingual performance with a strong monolingual summarization capability. With the
help of contrastive learning, ACROSS can align
the cross-lingual input representation to the monolingual space, thus realizing the idea mentioned
above.
Firstly, given a cross-lingual summarization
and the paired monolingual document tuple:
(DA, DB
+, SB), we need to randomly choose a negative document set N = {DB
1
, DB
2
, ..., DB
|N |} in
the dataset. Then, we can obtain the representation
of DA with a Transformer Encoder and a pooling
function F as follows:
$${\boldsymbol{h}}^{A}={\mathcal{F}}(\mathrm{Encoder_{cls}}(D^{A})).$$
Similarly, we can obtain the representation of DB
+
with a pretrained Encoder of the monolingual summarization model as:
$${\boldsymbol{h}}^{B}={\mathcal{F}}(\mathrm{Encoder}_{\mathrm{mls}}^{*}({\boldsymbol{D}}^{B})).$$
B = F(Encoder∗mls(DB)). (4)
Finally, the contrastive learning objective is constructed to minimize the loss as follows:
$\int_{\mathbb{R}}$
$\int$.
$${\mathcal{L}}_{\mathrm{{cr}}}=-\log{\frac{e^{\mathrm{{sim}}({\boldsymbol{h}}^{A},{\boldsymbol{h}}_{+}^{B})/\tau}}{\sum_{i\in\operatorname{idx}({\mathcal{N}})}e^{\mathrm{{sim}}({\boldsymbol{h}}^{A},{\boldsymbol{h}}_{i}^{B})/\tau}}},\quad(5)$$
where τ is a temperature hyper-parameter and sim(·) denotes a similarity function that can measure the distance of two vectors in an embedding space2.
2We use cosine similarity as the similarity function.
Cross-Lingual Consistency Learning for Decoder. Consistency learning aims to model consistency across the models' predictions, which can help child models gain improvement from the pretrained parent model. By constraining the output probability distributions of decoders, the CLS child model can be aligned to the MLS pre-trained parent model.
Given a tuple composed of a CLS document and its paired monolingual document (DA, DB
+), we can obtain the output distribution of the CLS model at each decoding step as follows:
P(y B t|y B <t, DA, θcls) = Modelcls(y B <t, DA). (6)
Similarly, we can construct the output distribution of the MLS model at each decoding step as:
$$P(y_{t}^{B}|y_{<t}^{B},D_{+}^{B},\theta_{\mathrm{mls}}^{*})=\mathrm{Model}_{\mathrm{mls}}^{*}(y_{<t}^{B},D_{+}^{B}),\tag{7}$$
where θ
∗mls denotes frozen parameters and Model∗mls means that the parameters of the MLS
model are frozen during training. Then, we can bridge the distribution gap between the CLS and MLS models by minimizing the following consistency loss function as:
$$\begin{split}\mathcal{L}_{\text{con}}=\sum_{t=1}^{n}\text{JS-Div}[P(y_{t}^{B}|y_{<t}^{B},D_{}^{A},\boldsymbol{\theta}_{\text{cls}}),\\ P(y_{t}^{B}|y_{<t}^{B},D_{+}^{B},\boldsymbol{\theta}_{\text{mls}}^{*})],\end{split}\tag{8}$$
where JS-Div denotes Jensen–Shannon divergence (Lin, 1991), which is used to measure the gap between the pretrained and child models.
Training Objective of ACROSS. We jointly minimize CLS, consistency, and contrastive loss during the training period. The final training objective of ACROSS is formulated as:
$${\cal L}=\alpha\cdot{\cal L}_{\rm cls}+\beta\cdot{\cal L}_{\rm corr}+\gamma\cdot{\cal L}_{\rm con},\tag{9}$$
where α, β, and γ are hyper-parameters used to balance the weights of the three losses.
## 3.3 Data Augmentation For Cross-Lingual Summarization
Data augmentation is a widely used technique in low-resource scenarios (Sennrich et al., 2016a; Fabbri et al., 2021). In Seq2Seq tasks, it often leverages translation to increase the amount of data in low-resource scenarios. However, in the CLS task, the direct translation of monolingual data from a
![3_image_0.png](3_image_0.png)
Mr Flegel , from Dusseldorf , appeared by video Mr Justice Sweeney set a trial for 28 June next 他被指控犯有五项传播恐怖主义出版物的罪行,包括伊斯 兰国组织的宣传视频。
来自杜塞尔多夫的Flegel**先生通过视频链接从万斯沃斯监狱**
出庭。
Mr Justice Sweeney set a trial for 28 June next year at
high-resource language to a low-resource language might lose some valuable information. The distribution of information in the input document is uneven, making some sentences potentially more important than others. Therefore, directly translating all sentences into a low-resource language and using them for training the model may not be conducive to CLS.
Considering the characteristics of the summarization task, we propose an importance-based data augmentation method based on ROUGE scores.
First, an input document DB
iis split into several sentences S = {s1, s2*, ..., s*k}. Then, the ROUGE
score is calculated for each sentence and summary S
B. The ROUGE score of each sentence is represented as R = {r1, r2*, ..., r*k}. Next, the sentences corresponding to the top a% ROUGE scores are selected and translated into the low-resource language. Finally, the translated sentences are reassembled with other sentences to form a pseudo document D
Ap i, so that pseudo-low-resource summarization pairs (D
Ap i, SB) can be generated.
Figure 3 shows an example of the process from an English monolingual summarization pair to a Chinese-English summarization pair3. The two 3We set a=50.
![4_image_0.png](4_image_0.png)
8032 7723 7264 Extremely Normal
sentences with the highest ROUGE scores are the second and third sentences; hence, these two sentences are translated into Chinese.
## 4 Experiment Setup 4.1 Dataset
We conduct experiments using the previously mentioned CrossSum dataset (Hasan et al., 2021).
CrossSum is a multilingual CLS dataset that contains cross-lingual summarization data in 45 languages. Moreover, it realistically reflects the skewness of data distribution in practical CLS tasks.
Figure 4 portrays the degree of imbalance of the dataset. As we can see, English monolingual summaries constitute over 70% of the English target summaries, while there are less than 30% summaries of other 44 languages to English. We classify languages with less than 1,000 training samples as extremely low-resource scenarios, between 1,000 and 5,000 as medium low-resource scenarios, and larger than 5,000 as normal low-resource scenarios.
## 4.2 Baselines
We compare our model with the following baselines:
Multistage: a training sampling strategy proposed by Hasan et al. (2021). Thi s method balances the number of different language pairs in a batch, thus alleviating the uneven distribution of training samples in different languages.
NCLS+MT: a method based on the multi-task framework proposed by Zhu et al. (2019). The model uses two independent decoders for CLS and MT tasks. As the original NCLS+MT model can only handle bilingual CLS task, we replace its encoder with a multilingual encoder.
Medium NCLS+MLS: a method also proposed by Zhu et al. (2019). Its difference with NCLS+MT is that the multi-task decoder is used for MLS task.
## 4.3 Experimental Settings
For training, the MLS model is trained on the English-English subset of the CrossSum dataset and its parameters are initialized using mT5 (Xue et al., 2021). Thereafter, we initialize the CLS
model using the pre-trained MLS model. We set dropout to 0.1 and the learning rate to 5e-4 with polynomial decay scheduling as well as a warm-up step of 5,000. For optimization, we use the Adam optimizer (Kingma and Ba, 2015) with ϵ = 1e-8, β1 = 0.9, β2 = 0.999, and weight decay = 0.01.
The hyper-parameters *α, β*, and γ are set to 1.0, 1.0, and 2.0, respectively. The size of the negative sample set is 1,024. The temperature hyper-parameter τ is set to 0.1. To stabilize the training process, we choose the gradient norm value to be 1.0. The vocabulary size is 250,112, and BPE (Sennrich et al.,
2016b) is used as the tokenization strategy. We limit the max input length to 512 and the max summary length to 84. We train our model on 4 RTX
A5000 GPUs for 40,000 training steps, setting the batch size to 256 for each step. For inference, we use the beam-search decoding strategy (Wiseman and Rush, 2016) and set the beam size to 5.
260000 **261934**
## 5 Experiment Results 5.1 Main Results
We evaluate ACROSS on the standard ROUGE
metric (Lin, 2004), reporting the F1 score (%) of ROUGE-1, ROUGE-2, and ROUGE-L. Table 1 presents the main results of ACROSS and other
| Model | Extremely | Medium | Normal | Overall | | | | | | | | |
|------------------|-------------|----------|----------|-----------|-------|-------|-------|-------|-------|-------|-------|-------|
| RG1 | RG2 | RGL | RG1 | RG2 | RGL | RG1 | RG2 | RGL | RG1 | RG2 | RGL | |
| NCLS+MLS-small | 24.46 | 6.21 | 18.97 | 25.76 | 7.09 | 20.06 | 28.57 | 8.52 | 22.20 | 25.78 | 7.07 | 20.05 |
| NCLS+MT-small | 25.69 | 7.15 | 20.19 | 26.68 | 7.49 | 20.74 | 29.17 | 8.90 | 22.68 | 26.75 | 7.64 | 20.86 |
| Multistage-small | 25.78 | 7.07 | 19.97 | 27.13 | 7.87 | 21.03 | 29.94 | 9.56 | 23.16 | 27.04 | 7.89 | 20.99 |
| Multistage-base | 28.00 | 8.51 | 21.97 | 30.10 | 9.90 | 23.36 | 33.16 | 11.94 | 25.84 | 29.90 | 9.82 | 22.34 |
| ACROSS-small | 28.20 | 8.43 | 22.06 | 29.34 | 8.99 | 22.64 | 31.94 | 10.58 | 24.82 | 29.24 | 9.01 | 22.70 |
| ACROSS-base | 31.01 | 10.46 | 24.29 | 33.86 | 12.35 | 26.56 | 36.11 | 14.11 | 28.49 | 33.34 | 12.16 | 26.27 |
models on different low-resource settings. *base* and *small* refer to different mT5 settings. *base* model contains a 12-layer encoder and 12-layer decoder with 768-dimensional hidden representations. *small* model contains an 8-layer encoder and 8-layer decoder with 512-dimensional hidden representations. As discussed in Section 4.1, we classify languages as extremely, medium and normal low-resource scenarios. It should be clarified that although we have artificially divided languages into different low-resource scenarios, any CLS language pair is actually low-resource compared to the English-English data volume.
Comparision with Multistage. Compared with the Multistage-base, ACROSS-base obtains 1.95, 2.45, and 2.17 ROUGE-2 improvements for extremely, medium and normal low-resource scenarios, respectively. Furthermore, ACROSS-base reaches 3.01, 3.76, and 2.95 ROUGE-1 improvements for extremely, medium and normal lowresource scenarios, respectively. The ROUGE-L
scores for extremely, medium and normal lowresource scenarios are also improved by 2.32, 3.2, and 2.65, respectively. As shown in Figure 5, ACROSS-base outperforms Multistage-base significantly under the different language test sets.
The ROUGE2 scores for more than 30 languages have an increase of more than 2, which represents a stable improvement of ACROSS.
Moreover, ACROSS-small surpasses or is compared to Multistage-base under some metrics (improving ROUGE-1 by 0.2 and ROUGE-L by 0.09 in extremely low-resource scenarios).
In addition, we can see in Figure 5 that EnglishEnglish's ROUGE-2 score improves by only 0.77, which illustrates that the improvement of ACROSS
comes mainly from the better alignment between other languages and English, rather than from
3+ Improvement
![5_image_0.png](5_image_0.png)
2+ Improvement
the improvement of the ability to do summarization on English. Considering the actual data size, ACROSS significantly overperforms baselines in low-resource CLS scenarios. Additionally, we demonstrate ROUGE-1 and ROUGE-L results in Appendix A.
Comparision with Multi-Task Methods. Compared with the two multi-task methods (i.e.,
NCLS+MT and NCLS+MLS), we find that the two methods do not perform as well as Multistage and have a greater gap with ACROSS. Compared with NCLS+MT and NCLS+MLS, the ROUGE-
| Model | RG1 | RG2 | RGL |
|------------|-------|-------|-------|
| Multistage | 27.04 | 7.89 | 20.99 |
| ctr+con+DA | 29.24 | 9.01 | 22.70 |
| con+DA | 29.13 | 8.88 | 22.60 |
| con | 28.88 | 8.66 | 22.27 |
| DA | 27.63 | 8.28 | 21.51 |
1, ROUGE-2, and ROUGE-L scores of ACROSS
are enhanced by more than 3, 1, and 2, respectively. This phenomenon reveals that multi-task approaches that rely on MT and MLS learning may be not effective in multilingual scenarios. ACROSS
turns to be more suitable for the scenarios with imbalanced resources.
## 5.2 Analysis
Ablation Study. We next conduct the ablation study in small settings. We summarize the experimental results in Table 2 as below:
- *ctr+con+DA* performs better than *con+DA*,
suggesting that although con can significantly improve performance, the aligned representation is also beneficial for CLS tasks.
- The complete model produces better results compared with DA. Except for Multistage, DA performs worse than the models adding other losses, which implies that the excellent performance of ACROSS does not merely come from data augmentation.
- Comparing DA and con, we can see that the aligned model and representation are crucial for a successful CLS task.
Analysis of Data Augmentation. We conduct experiments on different selection approaches to evaluate the performance of our proposed DA method.
As recorded in Table 3, *Informative* performs best compared to the other methods, which indicates that the DA method can help ACROSS learn more important information in the CLS task. The Truncation performs inferior, because the more important sentences in the news report tend to be in
Model RG1 RG2 RGL
Multistage 27.04 7.89 20.99
Random 28.36 8.62 22.26
Uninformative 28.24 8.56 22.09
Truncation 28.94 8.77 22.52
the relatively front position. The results also validate the effectiveness of the DA method in selecting more important sentences for translation.
Generally speaking, the results tell us that the DA method is beneficial for the CLS task, and translating important sentences is useful for crosslingual alignment. Human Evaluation. Due to the difficulty of finding a large number of users who speak low-resource languages, we only conduct the human evaluation on 20 random samples from Chinese-English and French-English test sets. We compare the summaries generated by ACROSS with those generated by Multistage. We invite participants to compare the auto-generated summaries with ground truth summaries from three perspectives: fluency (FL),
informativeness (IF), and conciseness (CC). Each sample is evaluated by three participants.
The results shown in Table 4 indicate that ACROSS is capable of generating fluent summaries and these summaries are also informative and concise according to the human feelings.
| Model | FL | IF | CC |
|------------|------|------|------|
| Multistage | 4.10 | 3.57 | 3.68 |
| ACROSS | 4.43 | 3.96 | 4.04 |
Visualization of Alignment. To further demonstrate the alignment result of ACROSS, we visualize the similarity between CLS inputs and the paired English inputs in Chinese-English and French-English test sets. We randomly sample 50 cross-lingual inputs from the test set and obtain the
![7_image_0.png](7_image_0.png)
![7_image_1.png](7_image_1.png)
representations of these cross-lingual inputs and the paired English inputs. Then, we calculate the cosine similarity of the two languages to construct the similarity matrix. Finally, we plot the heat map of the similarity matrix.
In Figure 6b and Figure 6d, the clear diagonal indicates the paired inputs have significantly higher similarities. In comparison, other unpaired inputs have lower similarities. In Figure 6a and Figure 6c, we can observe that the similarity distribution is characterized by more confusion.
In summary, ACROSS can effectively align cross-lingual and English inputs, demonstrating through the experiments that aligned representations are more useful for CLS tasks in multilingual settings.
Case Study. We finally implement the case study of a sample from the Chinese-English test set. The Baseline employed here is the Multistage model.
The words and characters in red are important and overlap with **Ground Truth**. On the opposite, the words in green are errors. As shown in Figure 7, compared to Multistage, ACROSS can cover details in a better and more detailed way (e.g., using some proper nouns and phrases). For example, asthma and *processed meat* are present in the generated summary by ACROSS; yet, the summary generated by the baseline does not involve these important terms, and it also contains factual consistency errors. Taking another example, in the summary generated by the baseline, the terms *fruit* and vegetables, including cabbage, broccoli, and kale appear, while these terms are not mentioned in the original text.
The above examples suggest that ACROSS improves the performance of CLS based on the ability Source: **70克大约是一根香肠再加一片火腿。 根据法**
国研究人员的调查发现,如果一周吃四份以上的加工 肉食品就会增加健康风险。但专家说,两者之间的联系 并没有得到证明,需要做更多的调查。专家还建议,人 们应该遵循一种更健康的饮食结构,例如每天吃的红 肉和加工肉食品不要超过70克。参加这项试验的人中 有一半是哮喘病人,然后观察他们的哮喘症状。试验 显示,如果他们吃了过多的加工肉,症状就会加重。
Translation: **70 grams is about one sausage plus one slice** of ham. According to a survey by French **researchers**, eating more than four servings of processed meat **a week** increases health risks. But experts say the link between the two has not been proven and more investigation is needed. Experts also recommend that people follow a healthier diet, such as eating no more than 70 grams of red and processed meat per day. Half of the people who took part in the trial were asthmatics, and their **asthma** symptoms **were then observed. Tests showed that if they** ate too much processed meat, symptoms worsened. ACROSS: Eating lots of processed meat **could increase**
the risk of an asthma attack, according to researchers. Baseline: A link between eating a lot of **fruit and**
vegetables, including cabbage, broccoli and kale**, has**
been suggested by French researchers. Ground Truth: Eating processed meat **might make**
asthma symptoms worse, say **researchers**.
Figure 7: Case study. The words in red are important and overlap with Ground Truth. The green words are errors.
of strong MLS under the guidance of alignment.
## 6 Conclusion
In this work, we propose ACROSS, a many-toone cross-lingual summarization model. Inspired by the alignment idea, we design contrastive and consistency loss for ACROSS. Experimental results show that with the ACROSS framework, CLS
model improves the low-resource performance by effectively utilizing high-resource monolingual data. Our findings point to the importance of alignment in cross-lingual fields for future research. In the future, we plan to apply this idea to combine CLS in multimodal scenarios, which might enable the model to better serve realistic demands.
## Acknowledgements
This work was supported in part by National Natural Science Foundation of China under Grant No.
62106091, and in part by the Shandong Provincial Natural Science Foundation under Grant No.
ZR2019MF062.
## Limitations
Considering that English is the most widely spoken language, we select it as the high-resource monolingual language in this study. While ACROSS is a general summarization framework not limited to a certain target language, it deserves an in-depth exploration of how ACROSS works on other highresource languages.
Additionally, we employ mT5 as our backbone because it supports most languages in CrossSum.
The performance of ACROSS after replacing mT5 with other models, such as mBART(Liu et al.,
2020), FLAN-T5(Chung et al., 2022), will be investigated in the future.
## Ethical Consideration
Controversial Generation Content. Our model is less likely to generate controversial content(e.g.,
discrimination, criticism, and antagonism) since the model is trained on a dataset from the BBC News domain. Data in the news domain is often scrutinized before being published, and thus the model is not likely to generate controversial data.
Desensitization of User Data. We use the Amazon Mechanical Turk crowdsourcing platform to evaluate three artificial indicators (i.e., fluency, informativeness, and conciseness). For investigators, all sensitive user data is desensitized by the platform. Therefore, we also do not have access to sensitive user information.
## References
Yu Bai, Yang Gao, and Heyan Huang. 2021. Crosslingual abstractive summarization with limited parallel resources. In *Proceedings of ACL-IJCNLP*, pages 6910–6924.
Yu Bai, Heyan Huang, Kai Fan, Yang Gao, Yiming Zhu, Jiaao Zhan, Zewen Chi, and Boxing Chen. 2022.
Unifying cross-lingual summarization and machine
translation with compression rate. In *Proceedings of* SIGIR, pages 1087–1097.
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al.
2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416.
Xiangyu Duan, Mingming Yin, Min Zhang, Boxing Chen, and Weihua Luo. 2019. Zero-shot crosslingual abstractive sentence summarization through teaching generation and attention. In *Proceedings of* ACL, pages 3162–3172.
Alexander Fabbri, Simeng Han, Haoyuan Li, Haoran Li, Marjan Ghazvininejad, Shafiq Joty, Dragomir Radev, and Yashar Mehdad. 2021. Improving zero and few-shot abstractive summarization with intermediate fine-tuning and data augmentation. In *Proceedings of NAACL-HLT*, pages 704–717.
Tahmid Hasan, Abhik Bhattacharjee, Wasi Uddin Ahmad, Yuan-Fang Li, Yong-Bin Kang, and Rifat Shahriyar. 2021. Crosssum: Beyond englishcentric cross-lingual abstractive text summarization for 1500+ language pairs. *CoRR*, abs/2112.08804.
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross B. Girshick. 2020. Momentum contrast for unsupervised visual representation learning. In *Proceedings of CVPR*, pages 9726–9735.
Shuyu Jiang, Dengbiao Tu, Xingshu Chen, Rui Tang, Wenxian Wang, and Haizhou Wang. 2022. Cluegraphsum: Let key clues guide the cross-lingual abstractive summarization. *CoRR*, abs/2203.02797.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In Proceedings of ICLR.
Anton Leuski, Chin-Yew Lin, Liang Zhou, Ulrich Germann, Franz Josef Och, and Eduard H. Hovy. 2003.
Cross-lingual c*st*rd: English access to hindi information. *ACM Trans. Asian Lang. Inf. Process.*, pages 245–269.
Junnan Li, Ramprasaath R. Selvaraju, Akhilesh Gotmare, Shafiq R. Joty, Caiming Xiong, and Steven Chu-Hong Hoi. 2021. Align before fuse:
Vision and language representation learning with momentum distillation. In *Proceedings of NeurIPS*,
pages 9694–9705.
Zhaocong Li, Xuebo Liu, Derek F. Wong, Lidia S. Chao, and Min Zhang. 2022. Consisttl: Modeling consistency in transfer learning for low-resource neural machine translation. *CoRR*, arXiv/2212.04262.
Yunlong Liang, Fandong Meng, Chulun Zhou, Jinan Xu, Yufeng Chen, Jinsong Su, and Jie Zhou. 2022. A variational hierarchical model for neural cross-lingual summarization. In *Proceedings of ACL*, pages 2088– 2099.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain.
Sam Wiseman and Alexander M. Rush. 2016. Sequenceto-sequence learning as beam-search optimization. In Proceedings of EMNLP, pages 1296–1306.
J. Lin. 1991. Divergence measures based on the shannon entropy. *IEEE Transactions on Information Theory*, pages 145–151.
Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mt5: A massively multilingual pre-trained text-to-text transformer. In *Proceedings* of NAACL-HLT, pages 483–498.
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. *Transactions of the Association for Computational Linguistics*, pages 726–742.
Jiajun Zhang, Yu Zhou, and Chengqing Zong. 2016.
Abstractive cross-language summarization via translation model enhanced predicate argument structure fusing. *IEEE ACM Trans. Audio Speech Lang. Process.*, pages 1842–1853.
Constantin Orasan and Oana Andreea Chiorean. 2008.
Evaluation of a cross-lingual romanian-english multidocument summariser. In *Proceedings of the International Conference on Language Resources and Evaluation, LREC 2008, 26 May - 1 June 2008, Marrakech,*
Morocco.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Proceddings of NIPS*.
Xiaojun Wan, Huiying Li, and Jianguo Xiao. 2010.
Cross-language document summarization based on machine translation quality prediction. In *Proceedings of ACL*, pages 917–926.
Jin-ge Yao, Xiaojun Wan, and Jianguo Xiao. 2015.
Phrase-based compressive cross-language summarization. In *Proceedings of EMNLP*, pages 118–127.
Thong Thanh Nguyen and Anh Tuan Luu. 2022. Improving neural cross-lingual abstractive summarization via employing optimal transport distance for knowledge distillation. In *Proceedings of AAAI*,
pages 11103–11111.
Junnan Zhu, Qian Wang, Yining Wang, Yu Zhou, Jiajun Zhang, Shaonan Wang, and Chengqing Zong.
2019. NCLS: neural cross-lingual summarization. In Proceedings of EMNLP-IJCNLP, pages 3052–3062.
Junnan Zhu, Yu Zhou, Jiajun Zhang, and Chengqing Zong. 2020. Attend, translate and summarize: An efficient method for neural cross-lingual summarization. In *Proceedings of ACL*, pages 1309–1321.
Jessica Ouyang, Boya Song, and Kathy McKeown.
2019. A robust abstractive system for cross-lingual summarization. In *Proceedings of NAACL-HLT*,
pages 2025–2031.
Xiao Pan, Mingxuan Wang, Liwei Wu, and Lei Li. 2021.
Contrastive learning for many-to-many multilingual neural machine translation. In *Proceedings of ACLIJCNLP*, pages 244–258.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016a. Improving neural machine translation models with monolingual data. In *Proceedings of ACL*, pages 86–96.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016b. Neural machine translation of rare words with subword units. In *Proceedings of ACL*, pages 1715–1725.
Shiqi Shen, Yun Chen, Cheng Yang, Zhiyuan Liu, and Maosong Sun. 2018. Zero-shot cross-lingual neural headline generation. *IEEE ACM Trans. Audio Speech* Lang. Process., pages 2319–2327.
Sho Takase and Naoaki Okazaki. 2022. Multi-task learning for cross-lingual abstractive summarization. In Proceedings of LREC, pages 3008–3016.
## A Appendix
Analysis of Alignment Methods. To further show the effectiveness of ACROSS, we conduct an experiment to analyze the alignment methods.
We replace the alignment methods of the encoder and decoder. As Table 5 shows, replacing any part of the original alignment methods will make the model perform worse. In particular, replacing the consistency and contrastive loss at the same time significantly reduces the model's performance, which reinforces the rationality of our different loss designs.
| Model | RG1 | RG2 | RGL |
|---------|-------|-------|-------|
| ctr+con | 29.24 | 9.01 | 22.70 |
| ctr+ctr | 26.58 | 8.28 | 21.23 |
| con+con | 28.43 | 8.89 | 22.32 |
| con+ctr | 26.23 | 8.01 | 21.18 |
Data Augmentation Settings. We use HelsinkiNLP4as our translation model. In practice, we select the sentences corresponding to the top 50% of ROUGE scores. Furthermore, we set the beam size to 4, lenght-penalty to 1.0 and min-length to 10 for decoding.
## Rouge-1 & Rouge-L Improvement For
ACROSS-base. To show the improvement of our model on different metrics, we plot the improvement compared to Multistage-base, similar to Figure 5. As Figure 8 and 9 shows, ACROSSbase also has a significant and stable improvement on ROUGE-1 and ROUGE-L among different languages.
Analysis of Translation Ratio. We also analyze the impact of the translation ratio α on the final results. As table 6 shows, ACROSS-100% performs worse instead, probably because translating all sentences introduces too much extraneous noise instead.
Improvement in Different Resource Scenarios.
We also analyze the improvement in different resource scenarios. As table 7 shows, low-resource 4https://huggingface.co/Helsinki-NLP
![10_image_0.png](10_image_0.png)
languages get a more significant improvement compared to high-resource languages.
Form for Human Evaluation. Figure 10 shows the form we gave to participants, on the case of French-English summarization evaluation. Participants were asked to compare the auto-generated summaries with ground truth summaries from three perspectives: fluency, informativeness, and conciseness from one to five. And each participant will be informed that their scores for the different summaries will appear in our study as an evaluation metric.
| Model | RG1 | RG2 | RGL |
|-------------|-------|-------|-------|
| baseline | 27.04 | 7.89 | 20.99 |
| ACROSS-50% | 29.24 | 9.01 | 22.70 |
| ACROSS-100% | 29.04 | 8.90 | 22.58 |
![11_image_0.png](11_image_0.png)
| Model | Extremely | Medium | Normal |
|--------------|-------------|----------|----------|
| ACROSS-small | 19.24% | 14.23% | 10.67% |
| ACROSS-base | 22.91% | 24.75% | 18.17% |
Original English Headline:
Two South African police officers have been arrested over the deadly shooting of a 16-year-old boy, which had sparked violent street protests.
French Document:
Les habitants d'Eldorado Park ont organise des manifestations après que Nathaniel Julius ait été abattu La famille de Nathaniel Julius, un adolescent attent du syndrome de Down, a déclaré quil était sorti acheter des biscuits lorsquil a été abattu dans la banlieue d'Eldorado Park à Johannesburg. Les officiers seront accusés de meurtre et "peut-être d'obstacle à la justice", a déclaré l'instance de régulation de la police en Afrique du Sud. La famille a déclaré que Julius avail été abattu après avoir omis de répondre aux questions des officiers. Cependant, ont-ils ajouté, c'étail à cause de son handicap. A lire aussi Onze taximen tués en Afrique du Sud L'Afrique du Sud gangrenée par la violence Mécontentement suite à l'interdiction de l'alcool en Afrique du Sud La police a d abord declare que Julius avait ete pris dans une fusilade entre des officiers et des gangsters locaux. La Direction independante des enquêtes policieres
(pid) a déclaré qu'elle avait décidé d'arréter les officiers après "un examen attentif des preuves disponibles". Après la mort de Julius mercredi sor, des
centiles de résiden d 'habitants sont descendus dans la rue pour protester La police a tire des balles en caoutchouc pour disperser les manifestants La police a utilise des balles en caoutchouc et des grenades paralysantes pour disperser les manifestants qui avaient bloqué les rues avec des barricades en feu. Ces affrontements ont conduit le president Cyril Ramaphosa a lancer un appel au calme. La police sud-africaine est souvent accusee de faire un usage excessif de la force - les forces de sécurité ont été accusées d'avoir tué au moins 10 personnes cette année alors qu'elles faisaient appliquer les mesures prises pour stopper la propagation du coronavirus. *11 n'y a aucune preuve de provocation et il est difficile de comprendre pourquoi des balles reeles pouraient etre utilisees dns une communauté comme celle-ci", a déclaré l'archevèque Malusi Mpumlwana, chef du Conseil sud-africain des églises, aux médias locaux. "Nous ne pouvons pas dire "Black Lives Matter" aux États-Unis si nous ne le disons pas en Afrique du Sud", a-t-il déclaré.
Generated Headline Result 1:
Two South African police officers have been arrested over the shooting of a boy in Johannesburg.
On a scale of 1-5, how fluent is the generated headline? Lower scores indicate lower fluency.
On a scale of 1-5, how much is the informativeness between the generated headine and the source document? Lower scores indicate that the headine change more details of the source document On a scale of 1-5, how much is the consistency of style between the generated headline and the original headline, including sentence pattern? Lower scores indicate lower consistency.
Generated Headline Result 2:
Two South African policemen have been suspended and charged with murder after the death of a boy in Johannesburg.
On a scale of 1-5, how fluent is the generated headline? Lower scores indicate lower fluency.
On a scale of 1-5, how much is the informativeness between the generated headline and the source document? Lower scores indicate that the headline change more details of the source docume On a scale of 1-5, how much is the consistency of style between the generated headine and the original headine, including sentence pattern? Lower scores indicate lower consistency.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
7
✗ A2. Did you discuss any potential risks of your work?
We report a series of experimental setups in the paper, including model size, experimental data, performance on different languages, etc. Our proposed approach is also a generic generalization approach that takes into account universal scenarios.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Left Blank.
✓ B1. Did you cite the creators of artifacts you used?
2
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
The dataset we use is released under license CC BY-NC-SA 4.0. The license is restricted only to those who want to modify and redistribute it, who need to use the same license as it.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
4
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
8
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
4
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
## C ✓ **Did You Run Computational Experiments?** 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
4
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
5
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** 5
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
appendix a
✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
We used the platform provided by Amazon for human evaluation and charged 0.02$ per piece of data, which is also in line with the price of most text tasks.
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? appendix a D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
wang-etal-2023-rfid | {RF}i{D}: Towards Rational Fusion-in-Decoder for Open-Domain Question Answering | https://aclanthology.org/2023.findings-acl.155 | Open-Domain Question Answering (ODQA) systems necessitate a reader model capable of generating answers by simultaneously referring to multiple passages. Although representative models like Fusion-in-Decoder (FiD) have been proposed to address this challenge, these systems can inadvertently rely on spurious features instead of genuine causal relationships between the question and the passages to generate answers. To counter this problem, we introduce the Rational Fusion-in-Decoder (RFiD) model. Our model leverages the encoders of FiD to differentiate between causal relationships and spurious features, subsequently guiding the decoder to generate answers informed by this discernment. Experimental results on two ODQA datasets, Natural Questions (NQ) and TriviaQA (TQ), demonstrate that our model surpasses previous methods, achieving improvements of up to 1.5 and 0.7 in Exact Match scores on NQ, and exhibits an enhanced ability to identify causal relationships. | # Rfid: Towards Rational Fusion-In-Decoder For Open-Domain Question Answering
Cunxiang Wang♣**, Haofei Yu**♥∗
, Yue Zhang♣†
♣School of Engineering, Westlake University, China
♥Language Technologies Institute, Carnegie Mellon University, USA
{wangcunxiang, zhangyue}@westlake.edu.cn; [email protected]
## Abstract
Open-Domain Question Answering (ODQA)
systems necessitate a reader model capable of generating answers by simultaneously referring to multiple passages. Although representative models like Fusion-in-Decoder (FiD) have been proposed to address this challenge, these systems can inadvertently rely on spurious features instead of genuine causal relationships between the question and the passages to generate answers. To counter this problem, we introduce the Rational Fusion-in-Decoder (RFiD) model.
Our model leverages the encoders of FiD to differentiate between causal relationships and spurious features, subsequently guiding the decoder to generate answers informed by this discernment. Experimental results on two ODQA
datasets, Natural Questions (NQ) and TriviaQA
(TQ), demonstrate that our model surpasses previous methods, achieving improvements of up to 1.5 and 0.7 in Exact Match scores on NQ, and exhibits an enhanced ability to identify causal relationships.1
## 1 Introduction
Open-domain Question Answering (ODQA) has garnered significant attention (Chen et al., 2017; Kwiatkowski et al., 2019; Joshi et al., 2017), leading to the development of various systems designed to retrieve relevant passages (Karpukhin et al.,
2020; Bevilacqua et al., 2022) from large databases and generate corresponding answers (Izacard and Grave, 2020b; Lewis et al., 2020). We utilize the Fusion-in-Decoder (FiD) (Izacard and Grave, 2020b) model as our baseline model, a sequence-tosequence paradigm based on the T5 model (Raffel et al., 2020). Given a question, the FiD model encodes K retrieved passages using K respective T5 encoders, concatenates these K encoder hidden
∗Co-first Author
†The correponding author.
1Our code and data are available at https://github.com/wangcunxiang/RFiD
![0_image_0.png](0_image_0.png)
Figure 1: An example from our experiments. The question has only one relevant passage (red Psg0), while the remaining blue ones represent three passages that contain the wrong answer generated by the baseline model.
states, and then feeds the result into a T5 decoder to generate the answer.
FiD treats all passages equally in its encoders, relying exclusively on the cross-attention mechanism to establish correlations between the decoder and encoders. However, the cross-attention mechanism lacks an explicit mechanism for distinguishing differences among passages, which can result in the detection of spurious patterns (Slack et al., 2020; Jo and Bengio, 2017). Consequently, it becomes challenging for the model to identify crucial passages.
An example of such spurious patterns observed in our experiment is depicted in Figure 1, where the model confuses "Disney Art of Animation Resort" with "Art of Disney Animation" due to the prevalence of passages about the latter, resulting in an incorrect answer.
To address this issue, we propose a conceptually straightforward strategy by introducing a rationalization process to explicitly determine which retrieved passages contain the answer before conducting answer generation. This process assigns different embeddings to rationale passages and irrelevant passages. These embeddings then guide the cross-attention during the answer generation phase. We dub this new model the Rational Fusion-in-Decoder (RFiD).
We evaluate the effectiveness of our proposed RFiD model through experiments on the Natural Questions (NQ) (Kwiatkowski et al., 2019) and TriviaQA (TQ) (Joshi et al., 2017) datasets. Our results demonstrate that our methods can help models overcome spurious patterns and enhance their reasoning abilities, leading to an improvement of up to 1.5/0.7 Exact Match points on the NQ/TQ datasets respectively. Further analysis reveals that our methods effectively direct models to focus more on correct causal features and less on spurious features.
For instance, as seen in the rightmost column of Figure 1, our model has increased its attention on the relevant passage.
To the best of our knowledge, we are the first to incorporate rationalization into ODQA models, thus underscoring the importance of passage rationalization.
## 2 Related Work
Open Domain Question Answering (ODQA).
The prevailing approach to ODQA involves using a retriever to pinpoint relevant passages for a given question from a vast database, such as Wikipedia, and then employing a reader to generate the final answer. This is achieved by integrating the retrieved passages and question with a large pretrained language model. Retrievers commonly use methods ranging from string-matching algorithms like BM25, to dense retrievers such as DPR
(Karpukhin et al., 2020), and semi-parametric retrievers like SEAL (Bevilacqua et al., 2022). The reader models fall into two primary categories: Extractive readers, such as DPR-reader (Karpukhin et al., 2020), identify the answer spans from the retrieved passages, while Generative readers, including the Fusion-in-Decoder model (FiD) (Izacard and Grave, 2020b) and the Retrieval-Augmented Generation model (RAG) (Lewis et al., 2020), generate the answer in a sequence-to-sequence manner.
Our work seeks to enhance the reasoning ability of the FiD reader without modifying the retriever. To this end, KG-FiD (Yu et al., 2022) uses knowledge graphs to rerank and concatenate related passages for improved performance and efficiency.
GRAPE (Ju et al., 2022) incorporates knowledge graphs into FiD by integrating the Relation-aware graph neural network into the encoder. R2-D2 (Fajcik et al., 2021) combines a passage reranker, an extractive reader, and two generative readers into a single comprehensive ensemble model. Unlike these approaches, our work does not involve the use of external knowledge or alternate model architectures, instead focusing on spurious patterns and reader rationale capabilities.
Rationale. Recently, spurious patterns have come into the spotlight in NLP (Slack et al., 2020; Jo and Bengio, 2017), demonstrating a significant impact on model generalization (Kaushik et al.,
2020, 2021). Various strategies have been implemented to curb spurious features in tasks like sentiment analysis (Lu et al., 2022; Yang et al., 2021),
NER (Zeng et al., 2020; Yang et al., 2022), NLI
(Wu et al., 2022) and more (Wang and Culotta, 2020).
Our work shares a common goal of overcoming spurious patterns and prioritizing causal features, but it distinguishes itself by using an encoder to identify causal features instead of data augmentation. To the best of our knowledge, we are the first to incorporate rationalization into ODQA.
Asai et al. (2022) also devise multi-task learning methods to train the model to select evidential passages during answer generation, a technique somewhat similar to ours. However, our work differs in two fundamental ways: 1. We strive to guide the decoder with a learnable embedding, which they do not. This approach results in superior performance with an accuracy of 50.7 vs 49.8 on NQ and 69.6 vs 67.8 on TQ. 2. We analyze the rationale ability of our RFiD model, explaining the performance gain and aligning with our motivation, which we consider a significant contribution of this paper.
LLMs in ODQA Initial attempts to employ Pretrained Language Models (PLMs) to directly answer open-domain questions without retrieval reported inferior performance compared to DPR+FiD
(Yu et al., 2023; Wang et al., 2021; Rosset et al.,
2021). However, with the advent of Large Language Models (LLMs) like ChatGPT and others, the promise of directly answering open questions based solely on internal parameters became increasingly feasible (Shi et al., 2023).
A study by Wang et al. (2023) manually evaluated the performance of LLMs, including ChatGPT-(3.5/4), GPT-3.5, and Bing Chat, alongside DPR+FiD on Natural Questions (NQ) and TriviaQA (TQ) test sets. The results revealed that while FiD surpassed ChatGPT-3.5 and GPT-3.5 on NQ
and GPT-3.5 on TQ, the combination of DPR+FiD
still showcased considerable potential in the era of LLMs.
## 3 Method
In this section, we explain the baseline Fusion-inDecoder (FiD) model and our Rational-FiD (RFiD) model. The RFiD model uses a passage-level classifier on top of each FiD encoder to determine whether the corresponding passage is a rationale for the question. It guides the decoder with a rationale embedding concatenated to the encoder hidden states, as well as a multi-task framework to blend FiD training with rationale training.
## 3.1 Fusion-In-Decoder For Odqa
The overall input to the reader is a question and K retrieved passages. We feed the text sequence concatenated by the question and one passage to each encoder, and the concatenation detail is in Appendix A.1. Formally, for the pair of the question and the kth passage, the input textual sequence Xk is as xk,1, . . . , xk,i, . . . , xk,L, where xk,i represents the i th token and L is the maximum tokens length. We denote the target answer as Y , which is also a textual sequence. Therefore, multi-passage QA can be defined as learning the conditional probability p(Y |X1*, . . . , X*K; θ),
where θ denotes the model parameters. Such a model factorizes the conditional probability into p(yi|y1, . . . , yi−1, X1*, . . . , X*K) and is trained in an auto-regressive way. We denote the FiD training loss as L*F iD* for further usage.
The Fusion-in-Decoder (FiD) (Izacard and Grave, 2020b) model has been used as a standard baseline for calculating the above probability and find the most probable answer given K questionpassage sequences. FiD has a multi-encoder architecture, with shared parameters.
## 3.2 Passage Rationale
We define a passage as a rational passage to a question if the passage contain at least one answer span from all golden answers, or it is a spurious passage.
This is inspired by (Karpukhin et al., 2020) who use a similar method to define positive or negative passages for training the retriever. We ask the encoders of FiD to distinguish rational and spurious and guide the decoder with the results.
Formally, we denote Hk ∈ R
L×das the output encoder hidden states of the kth encoder, where L
is the maximum tokens length and d is the dimension of hidden states. Therefore, the annotation for the input of fusion decoder can be defined as
[H1; . . . , Hk; *. . . ,* HK]. For the kth encoder and its hidden states Hk, we apply a binary classifier on the top of the first token's hidden states Hk,1 to distinguish whether the passage is a rationale passage to the question. The binary classification result of the kth encoder is
$${\hat{b_{k}}}=C l a s s i f i e r(\mathbf{H_{k,1}})\in\mathbb{R}^{2}$$
$$(1)$$
$\mathbf{a}$
2(1)
The training loss used for this passage rationale task is can be defined using the Cross Entropy loss
$${\mathcal{L}}_{r a t n}=-(b\log({\hat{b}})+(1-b)\log(1-{\hat{b}}))\quad(2)$$
where b is the rational/spurious label while ˆb ∈ R
2 is the classification output .
Guiding Decoder. After obtaining the output, we guide the decoder with the result by appending additional embeddings to the end of the encoder hidden states and feeding the new encoder hidden states to the decoder.
the prediction label of classification is
$$p r e d_{k}=\arg\operatorname*{max}({\hat{b_{k}}})\in\{0,1\}$$
In particular, we use two updatable embeddings
$$\mathbf{E}_{\{0,1\}}^{\mathrm{{ratn}}}\in\mathbb{R}^{2\times d}$$
to represent the passage rationale information, where d is the dimension of encoder hidden states.
The rationale embedding for the kth encoder is
$$\mathbf{H_{k,r a t n}}=\mathbf{E_{p r e d_{k}}^{r a t n}}\in\mathbb{R}^{d}$$
d(3)
So, the modified encoder hidden states of the kth encoder with rationale embedding is
$$\begin{array}{l}\mathbf{H_{k}=[H_{k,1};\ldots;H_{k,j};\ldots;H_{k,L};H_{k,rattn}]}\\ {\in\mathbb{R}^{(L+1)\times d}}\end{array}\tag{4}$$
$\eqref{eq:walpha}$
where L is the maximum tokens length and Hk,jis the hidden state of the jth token.
| train | dev | test | |
|-------------------|-------|--------|-------|
| Natural Questions | 79168 | 8757 | 3610 |
| TriviaQA | 78785 | 8837 | 11313 |
Multi-task Learning. We propose a multilearning framework to train the rationale classifier and the sequence-to-sequence architecture of FiD
at the same time. We define the overall training loss as the sum of the binary classifier training loss L*ratn* and the FiD training loss L*F iD* :
$${\mathcal{L}}_{t o t a l}={\mathcal{L}}_{r a t n}+{\mathcal{L}}_{F i D}$$
Ltotal = Lratn + L*F iD* (5)
Details. We conduct experiments with FiDbase/large and RFiD-base/large on Natural Questions (NQ) (Kwiatkowski et al., 2019) and TriviaQA (TQ) (Joshi et al., 2017). Their statistics are shown in the Appendix Table 1. To avoid the retrieval bias, we follow Izacard and Grave (2020b,a)
and adopt fixed DPR retrievers to obtaining 100 passages for each question and fix the passages in the following experiments.
## 4 Experiment 4.1 Main Results
| Table 1: Data details of two datasets. #para NQ TQ dev test dev test RAG 626M - 44.5 - 56.1 FiD-base 440M - 48.2 - 65.0 FiD-large 990M - 51.4 - 67.6 | | | | | |
|--------------------------------------------------------------------------------------------------------------------------------------------------------|-------|------|------|------|------|
| KG-FiD-base | 443M | - | 49.6 | - | 66.7 |
| KG-FiD-large | 994M | - | 53.4 | - | 69.8 |
| (Yu et al., 2022) GRAPE-base | 454M | - | 48.7 | - | 66.2 |
| GRAPE-large (Ju et al., 2022) | 1.01B | - | 53.5 | - | 69.8 |
| Our Implementations | | | | | |
| FiD-base | 440M | 49.3 | 50.2 | 68.6 | 69.0 |
| RFiD-base | 440M | 50.0 | 50.7 | 69.6 | 69.6 |
| FiD-large | 990M | 51.6 | 52.8 | 71.6 | 71.9 |
| RFiD-large | 990M | 52.5 | 54.3 | 72.7 | 72.6 |
Table 2 presents our main results. Our RFiD
model outperforms the baseline FiD model on both the Natural Questions (NQ) and TriviaQA (TQ)
datasets.
RFiD-large achieved an exact match score of 54.3 on the NQ test set, surpassing the FiD-large baseline score of 52.8 by 1.5 points. This represents a performance increase of roughly 2.8%. On the TQ test set, RFiD-large scored 72.6, which is 0.7 points higher than the FiD-large score of 71.9, representing an improvement of approximately 0.9%. When comparing the base models, RFiD-base scored 50.7 on the NQ test set, which is 0.5 points higher than the FiD-base score of 50.2, corresponding to an approximate improvement of 1.0%. On the TQ test set, RFiD-base scored 69.6, 0.6 points higher than the FiD-base score of 69.0, reflecting an improvement of around 0.9%.
These consistent improvements across both base and large models in two different datasets highlight the robustness of our RFiD model in various contexts. Furthermore, these results support the hypothesis that incorporating rationale embeddings in the Fusion-in-Decoder architecture indeed benefits the model's reasoning ability and overall performance. Additionally, it's worth mentioning that our RFiD model's performance increase is achieved with a negligible increase in parameters. This is demonstrated in the '\#para' column in Table 2, further validating the efficiency and practicality of our proposed approach.
In summary, our RFiD model effectively enhances the rationale ability of the Fusion-inDecoder model, leading to improved performance on open-domain question answering tasks. Our model outperforms both the baseline and other state-of-the-art models on the Natural Questions and TriviaQA datasets, demonstrating the power of our simple yet effective approach.
| NQ | TQ | | | |
|---------------------|----------|------|----------|------|
| EM | rpos/neg | EM | rpos/neg | |
| FiD-base | 50.3 | 3.71 | 69.0 | 2.14 |
| RFiD-base | 50.7 | 4.31 | 69.6 | 2.32 |
| FiD-large | 52.8 | 3.82 | 71.9 | 2.17 |
| RFiD-large | 54.3 | 4.41 | 72.6 | 2.52 |
| RFiD-large | | | | |
| w/o guiding decoder | 53.4 | 4.02 | 72.2 | 2.26 |
## 4.2 Cross Attention Analysis
To evaluate the ability of our RFiD models to distinguish between positive and negative passages, we conducted a cross-attention analysis of the decoder. The principle here is straightforward: better performance would be indicated by more crossattention focused on positive passages and less cross-attention directed towards negative passages.
The calculations for this analysis are based on a set of equations. To start with, we define the average cross-attention scores on positive passages as follows:
$$C{\bar{A}}_{p o s}={\frac{1}{N_{q}}}\sum_{q}({\frac{1}{N_{p o s}^{q}}}\sum_{p\in P_{p o s}^{q}}C A_{\{q;p\}})\quad\quad(6)$$
where Nq is the number of questions, P
q pos is the set of positive passages on q and N
q pos is its amount, and CA{q;p}is the overall cross attention of decoder on the passage p when the question is q, which can be calculated as
$$C A_{\{q;p\}}=\sum_{l}^{N_{l y}}(\sum_{j=1}^{L}c a_{\{l;j\}})\qquad\qquad(7)$$
where Nly is the number of layers, L is the maximum token length and ca{l;j}is the cross attention score of the lth layer of decoder on the jth token.
Thus, the ratio r*pos/neg* of the average cross attention scores of positives passages over the scores of negative passages is
$$r_{p o s/n e g}={\frac{C{\bar{A}}_{p o s}}{C{\bar{A}}_{n e g}}}$$
The results shown in Table 3 reveal that RFiD
models have higher r*pos/neg* values compared to FiD models, suggesting that RFiD focuses more on positive passages and less on negative passages. For example, the r*pos/neg* of RFiD-large is 4.41/2.52 on NQ/TQ, which is 0.39/0.35 higher than the FiD-large. Similarly, for the base model, the improvements are 0.60/0.18 on NQ/TQ. The improved ability to identify relevant passages contributes to the overall performance increase seen in our experimental results.
## 4.3 Analysis Without Guiding The Decoder
We also conduct ablation experiment of not guiding the decoder, which means in the Equation 4, the Hk = [Hk,1; *. . .* ; Hk,L].
The results are displayed in the last row of Table 3. 'RFiD-large w/o guiding decoder' achieves 53.4 and 72.2 EM on the NQ-test and TQ-test, respectively, outperforming the baseline FiD-large by 0.6 and 0.3 EM. The r*pos/neg* values are 4.02 and 2.26, respectively, which are also higher than the baseline. The results suggest that even without explicitly guiding the decoder, encouraging encoders to discern rationales can still improve performance and the model's ability to identify rationales. This may be because the encoders implicitly encode the rationale information into the hidden states and feed it to the decoder.
## 4.4 Case Study
As depicted in Figure 1, the baseline FiD identifies the incorrect answer "March 16, 2002" by referring to spurious passages (shown in blue). It confuses
"Disney Art of Animation Resort" with "Art of Disney Animation". In the top 100 retrieved passages, there are many passages about the latter but only one rationale passage (shown in red) about the former. However, our RFiD model can distinguish the only rationale passage from all the spurious ones and mark it for the decoder via an explicit embedding. This enables the decoder to focus more on the rationale, leading to the correct answer.
## 5 Conclusion
$$(8)$$
In this study, we sought to rationalize the reader component of open question answering by introducing an explicit mechanism to identify potential answer-containing passages among the retrieved candidates. Experimental results show that our RFiD model effectively improves the rationale ability of the Fusion-in-Decoder model, leading to enhanced performance on ODQA tasks. Additionally, it outperforms the baseline and other concurrent models on both the Natural Questions and TriviaQA datasets, with only a minimal increase in the number of parameters.
## Acknowledgement
We thank for Linyi Yang, Sirui Cheng for their generous help and discussion. This publication has emanated from research conducted with the financial support of the Pioneer and "Leading Goose" R&D Program of Zhejiang under Grant Number 2022SDXHDX0003.
## Limitations
The method of identifying rational and spurious passages sometimes makes mistakes when 1. one passage actually contains one golden answer but the content slightly differ the golden answer span, for example, 'Messi' vs 'Lionel Messi' vs 'Lionel Andrés Messi' vs 'Lionel Andres Messi';
2. one passage actually does not relate the answer but the answer span is too common and appears in the passage, such as '2'.
We just use only seed=0 for the experiments.
## Ethics Statement
There are no known potential risks.
## References
Akari Asai, Matt Gardner, and Hannaneh Hajishirzi. 2022. Evidentiality-guided generation for knowledge-intensive NLP tasks. In *Proceedings of* the 2022 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 2226–2243, Seattle, United States. Association for Computational Linguistics.
Michele Bevilacqua, Giuseppe Ottaviano, Patrick Lewis, Wen tau Yih, Sebastian Riedel, and Fabio Petroni.
2022. Autoregressive search engines: Generating substrings as document identifiers. In arXiv pre-print 2204.10628.
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer opendomain questions. In *Association for Computational* Linguistics (ACL).
Martin Fajcik, Martin Docekal, Karel Ondrej, and Pavel Smrz. 2021. R2-D2: A modular baseline for opendomain question answering. In *Findings of the Association for Computational Linguistics: EMNLP*
2021, pages 854–870, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Gautier Izacard and Edouard Grave. 2020a. Distilling knowledge from reader to retriever for question answering.
Gautier Izacard and Edouard Grave. 2020b. Leveraging passage retrieval with generative models for open domain question answering. *CoRR*, abs/2007.01282.
Jason Jo and Yoshua Bengio. 2017. Measuring the tendency of cnns to learn surface statistical regularities.
ArXiv, abs/1711.11561.
Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In *Proceedings of the 55th Annual Meeting of* the Association for Computational Linguistics, Vancouver, Canada. Association for Computational Linguistics.
Mingxuan Ju, Wenhao Yu, Tong Zhao, Chuxu Zhang, and Yanfang Ye. 2022. Grape: Knowledge graph enhanced passage reader for open-domain question answering. In Findings of Empirical Methods in Natural Language Processing.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for opendomain question answering. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics.
Divyansh Kaushik, Eduard Hovy, and Zachary C Lipton.
2020. Learning the difference that makes a difference with counterfactually augmented data. International Conference on Learning Representations (ICLR).
Divyansh Kaushik, Amrith Setlur, Eduard Hovy, and Zachary C Lipton. 2021. Explaining the efficacy of counterfactually augmented data. International Conference on Learning Representations (ICLR).
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. *Transactions of the Association of Computational Linguistics*.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020.
Retrieval-augmented generation for knowledgeintensive nlp tasks. In *Advances in Neural Information Processing Systems*, volume 33, pages 9459–
9474. Curran Associates, Inc.
Jinghui Lu, Linyi Yang, Brian Mac Namee, and Yue Zhang. 2022. A rationale-centric framework for human-in-the-loop machine learning. arXiv preprint arXiv:2203.12918.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67.
Corbin L Rosset, Chenyan Xiong, Minh Phan, Xia Song, Paul N. Bennett, and saurabh tiwary. 2021. Pretrain knowledge-aware language models.
Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen tau Yih. 2023. Replug: Retrieval-augmented black-box language models.
Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, and Himabindu Lakkaraju. 2020. Fooling lime and shap: Adversarial attacks on post hoc explanation methods. In *Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society*, AIES '20, page 180–186, New York, NY, USA. Association for Computing Machinery.
Cunxiang Wang, Sirui Cheng, Zhikun Xu, Bowen Ding, Yidong Wang, and Yue Zhang. 2023. Evaluating open question answering evaluation.
Cunxiang Wang, Pai Liu, and Yue Zhang. 2021. Can generative pre-trained language models serve as knowledge bases for closed-book QA? In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 3241–3251, Online.
Association for Computational Linguistics.
Zhao Wang and Aron Culotta. 2020. Robustness to spurious correlations in text classification via automatically generated counterfactuals. In *AAAI Conference* on Artificial Intelligence.
Yuxiang Wu, Matt Gardner, Pontus Stenetorp, and Pradeep Dasigi. 2022. Generating data to mitigate spurious correlations in natural language inference datasets. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics*
(Volume 1: Long Papers), pages 2660–2676, Dublin, Ireland. Association for Computational Linguistics.
Linyi Yang, Jiazheng Li, Padraig Cunningham, Yue Zhang, Barry Smyth, and Ruihai Dong. 2021. Exploring the efficacy of automatically generated counterfactuals for sentiment analysis. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 306–316, Online.
Association for Computational Linguistics.
Linyi Yang, Lifan Yuan, Leyang Cui, Wenyang Gao, and Yue Zhang. 2022. Factmix: Using a few labeled in-domain examples to generalize to cross-domain named entity recognition. In Proceedings of the 29th International Conference on Computational Linguistics, pages 5360–5371.
Donghan Yu, Chenguang Zhu, Yuwei Fang, Wenhao Yu, Shuohang Wang, Yichong Xu, Xiang Ren, Yiming Yang, and Michael Zeng. 2022. KG-FiD: Infusing knowledge graph in fusion-in-decoder for opendomain question answering. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4961–4974, Dublin, Ireland. Association for Computational Linguistics.
Wenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, and Meng Jiang. 2023. Generate rather than retrieve: Large language models are
strong context generators. In *International Conference for Learning Representation (ICLR)*.
Xiangji Zeng, Yunliang Li, Yuchen Zhai, and Yin Zhang. 2020. Counterfactual generator: A weaklysupervised method for named entity recognition. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 7270–7280, Online. Association for Computational Linguistics.
## A Experiments A.1 Data Process
Following Izacard and Grave (2020b), we concatenate the question and the passage in the form of
"Question : <question> ; Title : <title> ; Context :
<context> ".
## A.2 Implementation Details
We conduct our experiments on 2 A100-80G-SXM
GPUs.
In training, the optimizer is AdamW and the learning rate is 1e-4 with weight decay rate as 0.01; the batch size is 64 and the total training step is 320k.
In evaluating, the eval step is 10k and the bestdev checkpoint is used for the test.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Just Limitations; no section number
✓ A2. Did you discuss any potential risks of your work?
Just Ethics Statement; no section number
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract Section and Section 1
✓ A4. Have you used AI writing assistants when working on this paper?
grammarly and chatgpt, check grammar mistakes and polish the writing.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Left Blank.
✓ B1. Did you cite the creators of artifacts you used?
Section 4
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section 4 B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 4
## C ✓ **Did You Run Computational Experiments?** Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4 C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
song-etal-2023-unsupervised | Unsupervised Keyphrase Extraction by Learning Neural Keyphrase Set Function | https://aclanthology.org/2023.findings-acl.156 | We create a \textit{paradigm shift} concerning building unsupervised keyphrase extraction systems in this paper. Instead of modeling the relevance between an individual candidate phrase and the document as in the commonly used framework, we formulate the unsupervised keyphrase extraction task as a document-set matching problem from \textit{a set-wise perspective}, in which the document and the candidate set are globally matched in the semantic space to particularly take into account the interactions among all candidate phrases. Since it is intractable to exactly extract the keyphrase set by the matching function during the inference, we propose an approximate approach, which obtains the candidate subsets via a set extractor agent learned by reinforcement learning. Exhaustive experimental results demonstrate the effectiveness of our model, which outperforms the recent state-of-the-art unsupervised keyphrase extraction baselines by a large margin. | # Unsupervised Keyphrase Extraction By Learning Neural Keyphrase Set Function
Mingyang Song♠, Haiyun Jiang♣∗, Lemao Liu♣, Shuming Shi♣**, Liping Jing**♠∗
♣Tencent AI Lab, Shenzhen, China
♠Beijing Key Lab of Traffic Data Analysis and Mining
♠Beijing Jiaotong University, Beijing, China [email protected]
## Abstract
We create a *paradigm shift* concerning building unsupervised keyphrase extraction systems in this paper. Instead of modeling the relevance between an individual candidate phrase and the document as in the commonly used framework, we formulate the unsupervised keyphrase extraction task as a document-set matching problem from *a set-wise perspective*, in which the document and the candidate set are globally matched in the semantic space to particularly take into account the interactions among all candidate phrases. Since it is intractable to exactly extract the optimal subset by the document-set matching function during the inference, we propose an approximate approach, which obtains the candidate subsets via a set extractor agent learned by reinforcement learning. Exhaustive experimental results demonstrate the effectiveness of our model, which outperforms the recent state-of-the-art unsupervised keyphrase extraction baselines by a large margin.
## 1 Introduction
Keyphrase Extraction (KE) is the task of extracting a keyphrase set that provides readers with highlevel information about the key ideas or important topics described in the document. KE methods can be divided into supervised (Sun et al., 2021; Song et al., 2021, 2022a) or unsupervised (BennaniSmires et al., 2018; Sun et al., 2020). The former requires large-scale annotated training data and is often domain-specific, whereas unsupervised methods do not need annotated data (Hasan and Ng, 2014). Therefore, in this paper, we focus on Unsupervised Keyphrase Extraction (UKE).
Currently, most UKE methods mainly consist of two components: candidate set generation and keyphrase importance estimation. The former uses heuristic rules to obtain a candidate set for a given document. The latter scores individual phrase from
∗Corresponding Author
![0_image_0.png](0_image_0.png)
the candidate set with respect to a document, and then selects top-ranked phrases to form a keyphrase set. For example, Bennani-Smires et al. (2018);
Sun et al. (2020); Liang et al. (2021); Song et al.
(2022b); Zhang et al. (2022) address it with the pre-trained embeddings (Peters et al., 2018; Devlin et al., 2019). These methods independently estimate the relevance between each phrase in the candidate set and the document as the importance of the phrase from a point-wise perspective, as illustrated in Figure 1(a).
Unfortunately, the above point-wise models are essentially phrase-level UKE approaches, and they can not take into account the interactions among all candidate phrases and fails to consider the semantics of the complete candidate set. This makes them more inclined to select keyphrases with highfrequency words while ignoring the coupling of multiple phrases. As a result, the diversity of the selected keyphrases suffers as quantified in our experiments (as shown in Table 6), leading to suboptimal performance.
To address the above issue, we investigate extracting keyphrases globally from a set-wise perspective (as illustrated in Figure 1(b)) and concep-
![1_image_1.png](1_image_1.png)
tualize the UKE task as a document-set matching problem, as shown in Figure 2. Specifically, the proposed UKE system is based on a document-set matching framework as the set function that measures the relevance between a candidate set and its corresponding document in the semantic space via a siamese-based neural network. The set function is learned by the margin-based triplet loss with orthogonal regularization, effectively capturing similarities of documents and candidate sets. However, it is intractable to exactly search the optimal subset from the candidate set by the set function during the inference because the subset space is exponentially large, and the set function is non-decomposable.
To this end, we propose an approximate method whose key idea is to learn a set extractor agent and search for efficient inference. Concretely, after the neural keyphrase set function is well-trained, we use it to calculate the document-set matching score as the reward. Then, we adopt the policy gradient training strategy to train the set extractor agent for extracting the optimal subset with the highest reward from numerous candidate subsets. Ideally, the optimal subset is the closest semantically to the document, as shown in Figure 2. Exhaustive experiments demonstrate the effectiveness of our model SetMatch: it effectively covers the ground-truth keyphrases and obtains higher recall than the traditional heuristics, and it outperforms recent strong UKE baselines.
We summarize our contributions as follows:
- Instead of individually scoring each phrase, we formulate the UKE task as a document-set matching problem and propose a novel setwise framework.
- Since the exact search with the document-set matching function, we propose an approximate method by learning a set extractor agent to search the keyphrase set.
![1_image_0.png](1_image_0.png)
- Experiments show that it has achieved superior performance compared with the state-ofthe-art UKE baselines on three benchmarks.
## 2 Methodology Overview
In this paper, keyphrases are *globally* selected from a set-wise perspective. More formally, consider a KE system: given the document D, generate its candidate set first. And then, an optimal subset S∗ ⊆ C is selected from the candidate set C. To achieve this goal, we propose a two-stage model
(SetMatch), including candidate set generation and neural keyphrase set function Fs. First, candidate set generation aims to generate a candidate set C
from the document D with a *higher recall* to cover more ground-truth keyphrases (Sec 2.1). Second, a neural keyphrase set function Fs is learned to estimate the document-set matching score (Sec 2.2),
which is used to guide the keyphrase set extractor agent to search an optimal subset (Sec 2.3).
## 2.1 Candidate Set Generation
We adopt various strategies to obtain a candidate set to cover the ground-truth keyphrases fully. These strategies can be mainly divided into two categories, using heuristic rules and pre-trained language models (fine-tuned via keyphrase extraction or generation tasks). The former first tokenize the document, tag the document with part-of-speech tags, and extract candidate phrases based on part-of-speech tags. Next, only keep noun phrases that consist of zero or more adjectives followed by one or multiple nouns. The latter uses neural keyphrase extraction or generation models based on Pre-trained Language Models (PLMs) fine-tuning on other corpora.
The details are described in Sec 5.
## 2.2 Neural Keyphrase Set Function
To estimate the importance from a set-wise perspective, we propose a novel neural keyphrase set function Fs, which is implemented by a documentset matching framework (Sec 3). With the neural
![2_image_0.png](2_image_0.png)
keyphrase set function Fs, we can score all candidate subsets in the candidate set C and thus find the optimal subset S∗ depending on these scores.
## 2.3 Keyphrase Set Extractor Agent
However, it is intractable to exactly search an optimal subset by the keyphrase set function Fs during the inference because the subset space is exponentially large, and the keyphrase set function Fs is non-decomposable. Therefore, we propose a keyphrase set extractor agent to search the optimal subset S∗, which is trained by using the keyphrase set function Fs as the reward via the policy gradient training strategy to select the optimal subset S∗as the keyphrases (Sec 4). Finally, we infer the optimal subset by using the learned set extractor agent rather than Fs.
## 3 Neural Keyphrase Set Function (Fs)
There are many ways to judge whether a keyphrase set is good or bad under the document D. One intuitive way is through a matching framework. Therefore, we formulate the neural keyphrase set function Fs as a document-set matching task in which the document D and the candidate set C will be matched in a semantic space, as shown in Figure 2.
Then, we propose a margin-based triplet loss with multiple perspectives orthogonal regularization LE
to optimize the Siamese-BERT Auto-Encoder architecture. The following section details how we instantiate our neural keyphrase set function Fs using a simple siamese-based architecture.
## 3.1 Siamese-Bert Auto-Encoder
Inspired by siamese network structure (Bromley et al., 1993), we construct a Siamese-BERT AutoEncoder architecture to match the document D and the candidate set C. Concretely, our Siamese-BERT
Auto-Encoder consists of two BERTs with shared weights, two auto-encoders, and a cosine-similarity layer to predict the document-set score. The overall architecture is shown in Figure 4.
Given a batch of candidate sets {Ci}M
i=1 and documents {Di}M
i=1, we adopt the original BERT (Devlin et al., 2019) to derive the semantically meaningful embeddings as follows,
$$h_{{\mathcal{C}}_{i}}={\mathrm{BERT}}({\mathcal{C}}_{i}),h_{{\mathcal{D}}_{i}}={\mathrm{BERT}}({\mathcal{D}}_{i}),$$
where M indicates the batch size. hCi
, hDi ∈ R
dr are the i-th candidate set Ci and document Di representations within a training batch. Here, we use the vector of the '[CLS]' token from the top BERT
layer as the representation of the candidate set C
and the document D. Next, we employ two autoencoders (with two encoders ϕ1, ϕ2 and two decoders ϕ01, ϕ02
, as shown in Figure 4) to transfer BERT representations into the latent space as,
$$\hat{h}_{D_{i}}=\varphi_{1}(h_{D_{i}}),\hat{h}_{C_{i}}=\varphi_{2}(h_{C_{i}}),\tag{2}$$ $$\hat{h}_{D_{i}}=\varphi^{\prime}_{1}(\hat{h}_{D_{i}}),\hat{h}_{C_{i}}=\varphi^{\prime}_{2}(\hat{h}_{C_{i}}),$$
$\epsilon\in\mathbb{R}^{d_{\Gamma}\times d_l}$ and $\varphi'_{1,\varphi}\varphi'_{2}\in\mathbb{R}^{d_l\times d_r}$.
where ϕ1, ϕ2 ∈ R
dr×dl and ϕ01, ϕ02 ∈ R
dl×dr are learnable parameters. Here, let hˆCi
, hˆDi ∈ R
dl denote the representations of the candidate set Ci and the document Diin the latent space, respectively. Finally, their similarity score is measured by Fs(Ci, Di) = cosine(hˆCi
, hˆDi
).
## 3.2 Margin-Based Triplet Loss With Orthogonal Regularization
To fine-tune Siamese-BERT Auto-Encoder, we use a margin-based triplet loss with orthogonal regularization to update the weights. We use a simple and intuitive way to generate positive C
+
iand negative C
−
icandidate sets. Most existing embedding-based UKE models (Liang et al., 2021; Ding and Luo, 2021) truncate the document to satisfy the encoding requirements of BERT. However, truncating documents will lose a small number of phrases, thus reducing the recall of the candidate set C. Therefore, we generate a positive candidate set C
+
i(i.e.,
2484 A
†1
, as illustrated in Table 3) before truncating the document D, and generate a negative candidate set C
−
i(i.e., A1, as illustrated in Table 3) after truncating the document D (more details in Sec 5). Then, the loss LT can be computed as,
$${\cal L}_{T}=\sum_{i=1}^{M}\max({\cal F}_{s}(\hat{h}_{{\cal D}_{i}},\hat{h}_{{\cal C}_{i}^{-}})-{\cal F}_{s}(\hat{h}_{{\cal D}_{i}},\hat{h}_{{\cal C}_{i}^{+}})+\delta,0),\tag{3}$$
where δ denotes the margin. The basic idea of LT
is to let the positive candidate set with higher recall have a higher document-set matching score than the negative candidate set with lower recall. Furthermore, we propose orthogonal regularization from multiple perspectives, which explicitly encourages each representation within a batch to be different from the other. This is inspired by Bousmalis et al.
(2016), who adopts orthogonal regularization to encourage representations across domains to be as distinct as possible. Here, we use the following equations as the orthogonal regularization:
$$\mathcal{L}_{cc}=\sum_{i=1}^{M}\sum_{j,j\neq i}\mathcal{F}_{s}(\hat{h}_{\mathcal{C}_{i}},\hat{h}_{\mathcal{C}_{j}})^{2},$$ $$\mathcal{L}_{DD}=\sum_{i=1}^{M}\sum_{j,j\neq i}\mathcal{F}_{s}(\hat{h}_{\mathcal{D}_{i}},\hat{h}_{\mathcal{D}_{j}})^{2},\tag{4}$$ $$\mathcal{L}_{CD}=\sum_{i=1}^{M}\sum_{j,j\neq i}\mathcal{F}_{s}(\hat{h}_{\mathcal{C}_{i}},\hat{h}_{\mathcal{D}_{j}})^{2},$$
where LCC encourages the similarities between all candidate sets under a batch as distinct as possible, LDD encourages the similarities between all documents under a batch as distinct as possible, LCD
encourages the similarities between candidate sets and documents under a batch as distinct as possible.
Therefore, the final loss function LE of the neural keyphrase set function is re-formulated as,
$${\cal L}_{\cal E}=\lambda_{1}{\cal L}_{\cal T}+\lambda_{2}({\cal L}_{\cal CC}+{\cal L}_{\cal DD}+{\cal L}_{\cal CD})+\lambda_{3}({\cal L}_{\cal D}+{\cal L}_{\cal C})\tag{5}$$
where λ1, λ2, λ3 are the balance factors. Here, LD,LC denote the reconstruction loss of our two auto-encoders and are calculated as follows,
$$\mathcal{L}_{\mathcal{D}}=||h_{\mathcal{D}_{i}}-\bar{h}_{\mathcal{D}_{i}}||_{2},\mathcal{L}c=||h_{\mathcal{C}_{i}}-\bar{h}_{\mathcal{C}_{i}}||_{2},\tag{6}$$
where *||X ||*2 indicates L2 norm of each element in a matrix X . After the set function Fs is welltrained, we fix its parameters and only use it as a non-differential metric to measure a document-set matching score without optimizing parameters.
## 4 Keyphrase Set Extractor Agent
As mentioned before, it is intractable to search the optimal subset by the set function precisely. Therefore, we propose a keyphrase set extractor agent to efficiently search an optimal subset. We first exploit a pre-trained BERT model to obtain representations of phrases in the candidate set C and the document D, and then learn a subset sampling network to sample a subset S from the candidate set C based on their representations. After obtaining the candidate subset S, we use the keyphrase set function Fs to calculate the document-set matching score Fs(S, D) as the reward R(S, D) to optimize the keyphrase set extractor agent for extracting an optimal subset S∗ via reinforcement learning.
## 4.1 Encoding Network
We employ a pre-trained BERT model to obtain H, hD, the representations of phrases in the candidate set C and the document D, respectively. Here, the representations are obtained by using average pooling on the output of the last BERT layer:
$h_{\mathcal{D}}=$ BERT($\mathcal{D}$), $\mathcal{H}=[h_{p_{1}}^{\top},...,h_{p_{n}}^{\top},...,h_{p_{N}}^{\top}]^{\top}$ $=$ BERT($p_{i}$), $i=1,...,n,...,N$,
where hD denotes the document representation and hpn is the n-th phrase representation in the candidate set C (contains N candidate phrases).
## 4.2 Candidate Subset Searching
To obtain a candidate subset S from the candidate set C, we adopt a self-attention layer as the extractor network to search subsets. We calculate the attention function on all candidate phrases in the candidate set C simultaneously, packed together into a matrix H. We compute the matrix of outputs as follow,
$${\hat{\cal H}}={\hat{\cal H}}{\bf W}_{1}+{\cal R E P}(h_{\mathcal{D}}){\bf W}_{2},$$
$$(8)$$
Hˆ = HW1 + REP(hD)W2, (8)
where W1,W2 ∈ R
dr×dr are the trainable parameters and the REP operator converts the input vector to a R
N×dr matrix by repeating elements up to N rows. Then, the probability distribution can be obtained by,
$\pi_{\theta}(S,D)=\prod$ softmax($f_{d}(\hat{H})$)[$p$] (9)
where πθ(S, D) denotes the predicted probability over the candidate set C, θ indicates the trainable parameters of our keyphrase set extractor, fd ∈ R
dr×1
| Dataset | # Doc. | Type | Avg. | Avg. | Present Keyphrases | Present Keyphrases |
|--------------------------------|--------------|-------------------------|------------------|--------|----------------------|----------------------|
| # Words | # Keyphrases | in Truncated Doc. (512) | in Original Doc. | | | |
| Inspec (Hulth, 2003) | 500 | Short | 134.60 | 9.83 | 0.7341 | |
| DUC2001 (Wan and Xiao, 2008) | 308 | Long | 847.24 | 8.08 | 0.8436 | 0.9339(↑0.0903) |
| SemEval2010 (Kim et al., 2010) | 100 | Long | 1587.52 | 12.04 | 0.5156 | 0.6576(↑0.1420) |
is a fully-connected layer, and p is the candidate phrase in the candidate set C. To obtain the candidate subset, we rank phrases in the candidate set C
with the predicted probability πθ(S, D) and extract top-ranked K(*K < N*) keyphrases as a candidate subset S.
## 4.3 Reinforce-Guided Selection
We exploit an exploitation and exploration training strategy to train the set extractor agent for optimizing its parameters. Here, we adopt the policy gradient algorithm (REINFORCE, (Williams, 1992))
to optimize the policy πθ(S, D). Specifically, in a training iteration, we first use the policy πθ(S, D)
to search a candidate subset S from the candidate set C of the document D. Next, the well-trained set function Fs computes a document-set matching score Fs(S, D) between the candidate subset S and the document D. Finally, we treat the document-set matching score Fs(S, D) as the reward R(S, D)
to optimize the policy πθ(S, D) with the policy gradient :
$$\nabla_{\theta}J(\theta)=\mathbb{E}[\nabla_{\theta}\log\pi_{\theta}({\mathcal{S}},{\mathcal{D}}){\mathcal{R}}({\mathcal{S}},{\mathcal{D}})].$$
Inspired by the self-critical training strategy (Rennie et al., 2017), we propose a new teacher-critical training strategy to regularize the reward R(S, D),
which uses the top-K predicted keyphrases of the baselines (e.g., JointGL (Liang et al., 2021)) as a reference set Sˆ. Ideally, when maximizing rewards, the teacher-critical training strategy ensures that our model obtains an optimal candidate subset S∗
better than the reference set Sˆ. Then, we calculate a document-set matching score Fs(Sˆ, D) to regularize the reward R(S, D). Finally, the expected gradient can be approximated by,
$$\nabla_{\theta}J(\theta)=\mathbb{E}[\nabla_{\theta}\log\pi_{\theta}(\mathcal{S},\mathcal{D})(\mathcal{R}(\mathcal{S},\mathcal{D})-\mathcal{F}_{s}(\hat{\mathcal{S}},\mathcal{D}))].\tag{1}$$
(11)
Generally, the policy πθ(S, D) is gradually optimized through the continuous iteration of the training process to search a better candidate subset S
to obtain a higher reward R(S, D). The candidate subset S∗ with the highest reward R(S∗, D) is the final predicted keyphrase set of the document D.
## 5 Experiments 5.1 Datasets And Evaluation Metrics
We verify our model on three benchmarks, including the DUC2001 (Wan and Xiao, 2008), Inspec
(Hulth, 2003), and SemEval2010 (Kim et al., 2010)
datasets. Both keyphrases and their corresponding document are preprocessed via Porter Stemmer1.
The statistics are provided in Table 1.
Following the recent studies (Liang et al., 2021; Ding and Luo, 2021; Zhang et al., 2022), the performance of our model SetMatch and the selected baselines is evaluated using Precision (P), Recall
(R), and F1 measure (F1) on the top 5, 10, and 15 ranked phrases.
## 5.2 Baselines
We compare the proposed model with recent stateof-the-art UKE baselines, which extract keyphrases from a point-wise perspective (KeyGames (Saxena et al., 2020), EmbedRankd2v, EmbedRanks2v
(Bennani-Smires et al., 2018), SIFRank, SIFRank+
(Sun et al., 2020), JointGL (Liang et al., 2021),
MDERank (Zhang et al., 2022)).
## 5.3 Implementation Details
Candidate Set Generation. All the models use Stanford CoreNLP Tools2for tokenizing, part-ofspeech tagging and noun phrase chunking. Three regular expressions are used to extract noun phrases as the candidate set via the python package NLTK3:
A1, A2, and A3, as shown in Table 3. Furthermore, we use two fine-tuned pre-trained language models
| Embedding-based UKE Model | DUC2001 | Inspec | SemEval2010 | | | | | | |
|-------------------------------------------------------------------|-----------|----------|---------------|-------|-------|-------|-------|-------|-------|
| F1@5 | F1@10 | F1@15 | F1@5 | F1@10 | F1@15 | F1@5 | F1@10 | F1@15 | |
| Point-Wise Perspective EmbedRankd2v (Bennani-Smires et al., 2018) | 24.02 | 28.12 | 28.82 | 31.51 | 37.94 | 37.96 | 3.02 | 5.08 | 7.23 |
| EmbedRanks2v (Bennani-Smires et al., 2018) | 27.16 | 31.85 | 31.52 | 29.88 | 37.09 | 38.40 | 5.40 | 8.91 | 10.06 |
| KeyGames (Saxena et al., 2020) | 24.42 | 28.28 | 29.77 | 32.12 | 40.48 | 40.94 | 11.93 | 14.35 | 14.62 |
| SIFRank (Sun et al., 2020) | 24.27 | 27.43 | 27.86 | 29.11 | 38.80 | 39.59 | - | - | - |
| SIFRank+ (Sun et al., 2020) | 30.88 | 33.37 | 32.24 | 28.49 | 36.77 | 38.82 | - | - | - |
| JointGL(Liang et al., 2021) | 28.62 | 35.52 | 36.29 | 32.61 | 40.17 | 41.09 | 13.02 | 19.35 | 21.72 |
| MDERank (Zhang et al., 2022) | 23.31 | 26.65 | 26.42 | 27.85 | 34.36 | 36.40 | 13.05 | 18.27 | 20.35 |
| Set-Wise Perspective SetMatch | 31.19 | 36.34 | 38.72 | 33.54 | 40.63 | 42.11 | 14.44 | 20.79 | 24.18 |
(B1 4and B2 5, as shown in Table 3) to generate candidate sets. Here, we take the entire document as input for the truncated document to generate a candidate set (*document-level*). For the document, without truncating, we leverage fine-tuned PLMs to obtain candidate keyphrases from each sentence in the document individually and combine them as a candidate set (*sentence-level*).
Neural Keyphrase Set Function. Specifically, we set the margin δ for the margin-based triplet loss to 1, λ1 = λ2 = λ3 = 1/3, and the learning rate is set to 5e-5 for both the neural keyphrase set function and the keyphrase set extractor agent. We use a single NVIDIA A4000 GPU for training, the batch size is 2. We train twenty epochs. dr = 768 and dl = 512.
We set K to 15 and N to 30. In this paper, we use A
†1 ∪ B†2
, A1 ∪ B2, and A1 ∪ B1 to obtain candidate sets for the Inspec, DUC2001, and SemEval2010 datasets, respectively.
Candidate Set Pruning. The subset sampling idea of our subset sampler is more intuitive, while it suffers from combinatorial explosion problems. For example, how could we determine the number of phrases in the candidate set, or should we score all possible subsets? To alleviate these difficulties, we propose a simple candidate pruning strategy, which adopts the recent baseline JointGL (Liang et al.,
2021) to prune the candidate set from a point-wise perspective and keep top-ranked N phrases as the candidate set C.
## 5.4 Results And Analysis
Table 2 illustrates the experimental results on the DUC2001, Inspec, and SemEval2010 datasets.
Analysis. The experimental results show that globally extracting keyphrases from a set-wise perspective helps our model outperform recent state-of-theart baselines across the benchmark datasets. The detailed analysis is presented as follows:
(1) The keyphrases of the document are usually considered to be disordered and treated as a set.
Similar claims have been reported previously in the keyphrase generation literature (Ye et al., 2021; Xie et al., 2022). However, most UKE models score and extract keyphrases from a point-wise perspective, which also rank good keyphrases in order. The impact caused by ranking in order is also visible in the results. It will result in higher scores for F@5 and F@10 but less boost for F@15. Instead, our model globally extracts keyphrases from the setwise perspective. Not only does it focus on modeling the relationship between phrases within the document at a deeper level, but it also ensures that the extracted keyphrase set is semantically closer to its corresponding document in the semantic space. Moreover, the keyphrases predicted by our model are disordered.
(2) Most existing embedding-based UKE models obtain the candidate set and the embeddings of phrases after truncating the document. Notable that this is done for two main reasons. First, it benefits to calculate the document-phrase matching similarity. Second, it is subject to the limitation of the input length by the pre-trained language model.
However, truncating documents reduces the quality of candidate sets, reducing the performance of keyphrase extraction. Our document-set matching framework alleviates this problem, allowing our model to consider all phrases in the original document to form a candidate set. From the results, the improvement of our model on the DUC2001
| Candidate Set Generation Strategy | Inspec | DUC2001 | SemEval2010 | | | |
|---------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|-------------|---------------|-------------|-------------|-------------|
| R@50 | R@M | R@50 | R@M | R@50 | R@M | |
| Regular Expression for Truncated Document (with length limitation→512) A1 → {< NN. ∗ |JJ > ∗ < NN.∗ >} 0.5350 | 0.5359(26) | 0.5845 | 0.6840(76) | 0.2885 | 0.3301(69) | |
| A2 → {< JJ|V BG > ∗ < NN.∗ > 0, 3} | 0.5267 | 0.5309(30) | 0.5540 | 0.6793(86) | 0.2730 | 0.3383(81) |
| A3 → {< NN. ∗ |JJ|V BG|V BN > ∗ < NN.∗ >} | 0.5321 | 0.5330(26) | 0.5791 | 0.6770(76) | 0.2822 | 0.3288(71) |
| Pre-trained Keyphrase Predictor for Truncated Document (with length limitation→512) B1 →Pre-trained Keyphrase Generator (T5, document-level) 0.2901 0.2901(8) | 0.3425 | 0.3538(38) | 0.3490 | 0.3756(62) | | |
| B2 →Pre-trained Keyphrase Extractor (BERT, document-level) | 0.4107 | 0.5328(88) | 0.3082 | 0.4844(94) | 0.3597 | 0.4340(90) |
| ♣ Ensemble Strategies for Truncated Document (with length limitation→512) A1 ∪ B1 0.6263 | 0.6314(30) | 0.5826 | 0.7933(108) | 0.3778 | 0.5020(125) | |
| A1 ∪ B2 | 0.6197 | 0.6894(102) | 0.5827 | 0.8211(165) | 0.2853 | 0.4353(155) |
| Regular Expression for Original Document (without length limitation) A † 1 → {< NN. ∗ |JJ > ∗ < NN.∗ >} 0.5492 | 0.5503(27) | 0.5845 | 0.7960(138) | 0.2885 | 0.4789(192) | |
| A † 2 → {< JJ|V BG > ∗ < NN.∗ > 0, 3} | 0.5407 | 0.5452(31) | 0.5540 | 0.7932(160) | 0.2730 | 0.4893(226) |
| A † 3 → {< NN. ∗ |JJ|V BG|V BN > ∗ < NN.∗ >} | 0.5452 | 0.5465(27) | 0.5791 | 0.7898(140) | 0.2822 | 0.4859(201) |
| Pre-trained Keyphrase Predictor for Original Document (without length limitation) 1 →Pre-trained Keyphrase Generator (T5, sentence-level) 0.0041 0.0041(11) | 0.3605 | 0.3826(49) | 0.3449 | 0.4093(90) | | |
| † B 2 →Pre-trained Keyphrase Extractor (BERT, sentence-level) | 0.2781 | 0.2784(26) | 0.2796 | 0.3935(143) | 0.2187 | 0.3744(237) |
| † B ♣ Ensemble Strategies for Original Document (without length limitation) † 1 ∪ B† A 1 0.5354 | 0.5471(38) | 0.5831 | 0.8556(162) | 0.3609 | 0.5785(238) | |
| A † 1 ∪ B† 2 | 0.6078 | 0.6228(47) | 0.5833 | 0.8661(256) | 0.2853 | 0.5601(383) |
and SemEval2010 datasets (with long documents) is better than that on the Inspec dataset (with short documents). Compared with the underlined results in Table 2, our model has achieved *10.65%*,
7.44%, and *11.32%* improvement in F@5, *F@10*,
and *F@15* on the SemEval2010 dataset.
## 5.5 Ablation Study
Effect of generating candidate sets with different strategies. The details of the candidate generation strategies and the associated performance are reported in Table 3. For easy description, A∗ denotes A1, A2, A3 and B∗ denotes B1, B2. We summarize the detailed analysis as follows:
(1) The ensemble candidate set generation strategy obtains higher recall than using A∗ or B∗.
(2) A∗ obtain more stable and higher recall than B∗ in most cases on three benchmark datasets.
(3) B∗ get higher recall scores on the long document dataset, such as the SemEval2010 dataset.
(4) Intuitively, the longer the document, the more the candidate loss is caused by truncation.
Effect of training with different loss functions.
As illustrated in Table 4, our ablation study considers the effect of the reconstruction loss (LC+LD),
the margin-based triplet loss (LT ), and the orthogonal regularization (LCC + LDD + LCD) on the
| SetMatch | Acc | F1@5 | F1@10 | F1@15 |
|-----------------------|-------|--------|---------|---------|
| LC+LD | 0.12 | 9.96 | 17.41 | 20.76 |
| LT +LC+LD | 0.98 | 11.13 | 16.31 | 20.49 |
| LT +LCC+LDD+LCD | 0.81 | 12.32 | 18.66 | 22.93 |
| LT +LCC+LDD+LCD+LC+LD | 0.96 | 14.44 | 20.79 | 24.18 |
Table 4: Performance of training the neural keyphrase set function Fs by using different loss functions. The best results are in bold.
SemEval2010 dataset. To verify the effectiveness of the neural keyphrase set function directly, we propose a simple method to construct the pseudo label li,
$$l_{i}=\left\{\begin{array}{ll}1&\mbox{if\ score}({\cal C}_{i}^{+},{\cal S}_{i}^{r})>\mbox{score}({\cal C}_{i}^{-},{\cal S}_{i}^{r})\\ \mbox{-}1&\mbox{if\ score}({\cal C}_{i}^{+},{\cal S}_{i}^{r})<\mbox{score}({\cal C}_{i}^{-},{\cal S}_{i}^{r})\\ 0&\mbox{if\ score}({\cal C}_{i}^{+},{\cal S}_{i}^{r})=\mbox{score}({\cal C}_{i}^{-},{\cal S}_{i}^{r})\end{array}\right.\tag{12}$$
where S
r i is the ground-truth keyphrase set of the i-th document Di. Here, we calculate the *score*(·)
via F1@M, which takes all the phrases in the candidate set C to evaluate F1 score. After obtaining pseudo labels, we use the keyphrase set function to predict scores following Eq. 12 instead of F1@M,
verifying the effectiveness of our keyphrase set function by comparing the predicted scores with pseudo labels for acquiring accuracy. From the
| SetMatch | Acc | F1@5 | F1@10 F1@15 | |
|--------------------------------|-------|--------|---------------|-------|
| Positive : A † , Negative : A1 | 0.96 | 14.44 | 20.79 | 24.18 |
| 1 | | | | |
| Positive : A † , Negative : A2 | 0.91 | 14.10 | 19.69 | 22.09 |
| 2 | | | | |
| Positive : A † , Negative : A3 | 0.93 | 14.32 | 20.08 | 22.17 |
| 3 | | | | |
Table 5: Effect of training the keyphrase set function by using different training samples on the SemEval2010 dataset.
![7_image_0.png](7_image_0.png)
results in Table 4, we can find that LT can distinguish positive and negative samples well, and the orthogonal regularization significantly improves the performance.
Effect of different training samples. We adopt different positive and negative samples to train the keyphrase set function Fs, as illustrated in Table 5.
The best results are obtained by using A
†1 and A1.
Effect of the teacher-critical training strategy.
To verify the effectiveness of the proposed teachercritical training strategy, we adopt a series of fixed values to regularize the reward R(S, D). Figure 5 shows the results under different values of the regularization on the SemEval2010 dataset. The best results are achieved by using our teacher-critical training strategy. However, dropping the regularization (i.e., the fixed value is set to 0) of the reward R(S, D) will significantly damage the final performance. Moreover, our model can be treated as an optimization model for the SOTA UKE baselines by adopting the teacher-critical training strategy.
## 5.6 Diversity Evaluation
To evaluate the diversity, we follow the previous studies (Bahuleyan and Asri, 2020) and define two evaluation metrics:
(1) **Duplicate**% = (1 −\# Unique Tokens
\# Extracted Tokens) × 100
(2) **EditDist**6: String matching can be carried out at the character level. Through this evaluation 6https://github.com/seatgeek/fuzzywuzzy. We utilize the *fuzzywuzzy* library, which calculates a score between 0 and 100, where 100 means exactly matching keyphrases.
metric, we can calculate the pairwise Levenshtein Distance between extracted keyphrases.
As shown in Table 6, the results demonstrate that globally extracting keyphrases from a set-wise perspective can avoid the repeated selection of phrases with high-frequency words and consider the coupling of multiple keyphrases.
| Model | Duplicate%@15 | EditDist@15 |
|-----------------------------|-----------------|---------------|
| Inspec | | |
| JointGL(Liang et al., 2021) | 34.91 | 32.77 |
| SetMatch | 28.55 | 32.14 |
| Ground Truth | 14.95 | 31.37 |
| DUC2001 | | |
| JointGL(Liang et al., 2021) | 31.60 | 34.53 |
| SetMatch | 27.96 | 33.01 |
| Ground Truth | 13.60 | 31.65 |
| SemEval2010 | | |
| JointGL(Liang et al., 2021) | 54.90 | 44.48 |
| SetMatch | 39.88 | 35.63 |
| Ground Truth | 15.88 | 30.56 |
## 5.7 Case Study
To further provide an intuitive understanding of how our model benefits from a set-wise perspective, we present an example in Table 7. In the given an example, "trajectories" and "feature" are highfrequency words in the document. Therefore, if keyphrases are extracted individually from a pointwise perspective, the phrases containing these two words will get a higher score and be extracted as the keyphrases. However, from a set-wise perspective, it will alleviate the above issue and extract diverse keyphrases. These results further demonstrate that it is effective to extract keyphrases via the document-set matching framework.
## 6 Related Work
Unsupervised keyphrase extraction approaches can be mainly categorized into statistics-, graph-, and embedding-based methods (Hasan and Ng, 2014; Papagiannopoulou and Tsoumakas, 2019; Song et al., 2023). The statistics-based methods (Jones, 2004; Campos et al., 2018) exploit to use various features (e.g., word frequency, position, linguistic, etc.) for capturing context information.
Graph-based methods (Mihalcea and Tarau, 2004; Bougouin et al., 2013; Florescu and Caragea, 2017; Boudin, 2018) usually convert the document into a graph and rank candidate phrases in the graph.
![8_image_0.png](8_image_0.png)
Recently, embedding-based methods (BennaniSmires et al., 2018; Saxena et al., 2020; Sun et al.,
2020; Liang et al., 2021; Ding and Luo, 2021; Song et al., 2022b; Zhang et al., 2022), benefiting from the development of pre-trained embeddings (Mikolov et al., 2013; Peters et al., 2018; Devlin et al., 2019), have achieved significant performance. Bennani-Smires et al. (2018) ranks and extracts phrases by estimating the similarities between the embeddings of phrases and the document. Sun et al. (2020) improves embeddings via the pre-trained language model (i.e., ELMo
(Peters et al., 2018)) instead of static embeddings
(i.e., Word2Vec (Mikolov et al., 2013)). Ding and Luo (2021) models the phrase-document relevance from different granularities via attention weights of the pre-trained language model BERT. Liang et al. (2021) enhances the phrase-document relevance with a boundary-aware phrase centrality to score each phrase in the candidate set individually.
Zhang et al. (2022) leverages a masking strategy and ranks candidates by the textual similarity between embeddings of the source document and the masked document. Unlike existing UKE models, we propose to extract keyphrases from a set perspective by learning a neural keyphrase set function, which globally extracts a keyphrase set from the candidate set of the document.
## 7 Conclusion And Future Work
We formulate the unsupervised keyphrase extraction task as a document-set matching problem and propose a novel set-wise framework to match the document and candidate subsets sampled in the candidate set. It is intractable to exactly search the optimal subset by the document-set matching function, and we thereby propose an approximate algorithm for efficient search which learns a keyphrase set extractor agent via reinforcement learning. Extensive experimental results show SetMatch outperforms the current state-of-the-art unsupervised keyphrase extraction baselines on three benchmark keyphrase extraction datasets, which demonstrates the effectiveness of our proposed paradigm.
Lately, the emergence of Large Language Models (LLMs) has garnered significant attention from the computational linguistics community. For future research, exploring effectively utilizing LLMs to generate candidates and rank candidates to extract keyphrases may be an exciting and valuable direction (i.e., exploring LLM-based UKE).
## 8 Acknowledgments
We thank the three anonymous reviewers for carefully reading our paper and their insightful comments and suggestions. This work was partly supported by the Fundamental Research Funds for the Central Universities (2019JBZ110); the National Natural Science Foundation of China under Grant 62176020; the National Key Research and Development Program (2020AAA0106800); the Beijing Natural Science Foundation under Grant L211016; CAAI-Huawei MindSpore Open Fund; and Chinese Academy of Sciences (OEIP-O-202004).
## 9 Limitations
In this paper, we propose a novel set-wise framework to extract keyphrases globally. To verify the effectiveness of the new framework, we design simple yield effective neural networks for both the neural keyphrase set function and the keyphrase set extractor agent modules. In general, a complex neural network should yield better performance. Moreover, for the sake of fairness, our model adopts the same pre-trained language model (i.e., BERT) as the recent state-of-the-art baselines (Liang et al., 2021; Ding and Luo, 2021; Zhang et al., 2022).
Actually, other pre-trained language models can be applied to our model, such as RoBERTa (Liu et al.,
2019). These pre-trained language models may yield better results, which also demonstrates that there is much room for improvement in our proposed framework. Therefore, we believe the power of this set-wise framework has not been fully exploited. In the future, more forms of document-set matching models can be explored to instantiate the set-wise framework.
## References
Hareesh Bahuleyan and Layla El Asri. 2020. Diverse keyphrase generation with neural unlikelihood training. *CoRR*, abs/2010.07665.
Kamil Bennani-Smires, Claudiu Musat, Andreea Hossmann, Michael Baeriswyl, and Martin Jaggi. 2018.
Simple unsupervised keyphrase extraction using sentence embeddings. In *CoNLL*, pages 221–229. Association for Computational Linguistics.
Florian Boudin. 2018. Unsupervised keyphrase extraction with multipartite graphs. In *NAACL-HLT (2)*,
pages 667–672. Association for Computational Linguistics.
Adrien Bougouin, Florian Boudin, and Béatrice Daille.
2013. Topicrank: Graph-based topic ranking for keyphrase extraction. In *IJCNLP*, pages 543–551.
Asian Federation of Natural Language Processing /
ACL.
Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan. 2016.
Domain separation networks. In *Proceedings of the* 30th International Conference on Neural Information Processing Systems, NIPS'16, page 343–351, Red Hook, NY, USA. Curran Associates Inc.
Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard Säckinger, and Roopak Shah. 1993. Signature verification using a "siamese" time delay neural network.
In *Advances in Neural Information Processing Systems*, volume 6. Morgan-Kaufmann.
Ricardo Campos, Vítor Mangaravite, Arian Pasquali, Alípio Mário Jorge, Célia Nunes, and Adam Jatowt.
2018. Yake! collection-independent automatic keyword extractor. In *ECIR*, volume 10772 of Lecture Notes in Computer Science, pages 806–810. Springer.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In *NAACL-HLT (1)*, pages 4171–4186. Association for Computational Linguistics.
Haoran Ding and Xiao Luo. 2021. Attentionrank: Unsupervised keyphrase extraction using self and cross attentions. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1919–1928.
Corina Florescu and Cornelia Caragea. 2017. Positionrank: An unsupervised approach to keyphrase extraction from scholarly documents. In *ACL (1)*, pages 1105–1115. Association for Computational Linguistics.
Kazi Saidul Hasan and Vincent Ng. 2014. Automatic keyphrase extraction: A survey of the state of the art.
In *ACL (1)*, pages 1262–1273. The Association for Computer Linguistics.
Anette Hulth. 2003. Improved automatic keyword extraction given more linguistic knowledge. In EMNLP.
Karen Spärck Jones. 2004. A statistical interpretation of term specificity and its application in retrieval. J.
Documentation, 60(5):493–502.
Su Nam Kim, Olena Medelyan, Min-Yen Kan, and Timothy Baldwin. 2010. Semeval-2010 task 5 : Automatic keyphrase extraction from scientific articles.
In *SemEval@ACL*, pages 21–26. The Association for Computer Linguistics.
Xinnian Liang, Shuangzhi Wu, Mu Li, and Zhoujun Li.
2021. Unsupervised keyphrase extraction by jointly modeling local and global context. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 155–164, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *CoRR*, abs/1907.11692.
Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In *EMNLP*, pages 404–411. ACL.
Tomás Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In 1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings.
Eirini Papagiannopoulou and Grigorios Tsoumakas.
2019. A review of keyphrase extraction. *CoRR*,
abs/1905.05044.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In *NAACL-HLT*, pages 2227–2237. Association for Computational Linguistics.
Steven J. Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In *CVPR*,
pages 1179–1195. IEEE Computer Society.
Arnav Saxena, Mudit Mangal, and Goonjan Jain. 2020.
Keygames: A game theoretic approach to automatic keyphrase extraction. In Proceedings of the 28th International Conference on Computational Linguistics, pages 2037–2048.
Mingyang Song, Yi Feng, and Liping Jing. 2022a. Hyperbolic relevance matching for neural keyphrase extraction. In *Proceedings of the 2022 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 5710–5720. Association for Computational Linguistics.
Mingyang Song, Yi Feng, and Liping Jing. 2022b. Utilizing BERT intermediate layers for unsupervised keyphrase extraction. In *Proceedings of the 5th International Conference on Natural Language and* Speech Processing (ICNLSP 2022), pages 277–281, Trento, Italy. Association for Computational Linguistics.
Mingyang Song, Yi Feng, and Liping Jing. 2023. A survey on recent advances in keyphrase extraction from pre-trained language models. In *Findings of the Association for Computational Linguistics: EACL 2023*,
pages 2153–2164, Dubrovnik, Croatia. Association for Computational Linguistics.
Mingyang Song, Liping Jing, and Lin Xiao. 2021. Importance Estimation from Multiple Perspectives for Keyphrase Extraction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Si Sun, Zhenghao Liu, Chenyan Xiong, Zhiyuan Liu, and Jie Bao. 2021. Capturing global informativeness in open domain keyphrase extraction. In *CCF International Conference on Natural Language Processing and Chinese Computing*, pages 275–287.
Springer.
Yi Sun, Hangping Qiu, Yu Zheng, Zhongwei Wang, and Chaoran Zhang. 2020. Sifrank: A new baseline for unsupervised keyphrase extraction based on pre-trained language model. *IEEE Access*, 8:10896–
10906.
Xiaojun Wan and Jianguo Xiao. 2008. Single document keyphrase extraction using neighborhood knowledge.
In *AAAI*, pages 855–860. AAAI Press.
Ronald J. Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. *Mach. Learn.*, 8(3-4):229–256.
Binbin Xie, Xiangpeng Wei, Baosong Yang, Huan Lin, Jun Xie, Xiaoli Wang, Min Zhang, and Jinsong Su. 2022. WR-ONE2SET: towards well-calibrated keyphrase generation. *CoRR*, abs/2211.06862.
Jiacheng Ye, Tao Gui, Yichao Luo, Yige Xu, and Qi Zhang. 2021. One2set: Generating diverse keyphrases as a set. In *Proceedings of the 59th Annual Meeting of the Association for Computational* Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP
2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 4598–4608. Association for Computational Linguistics.
Linhan Zhang, Qian Chen, Wen Wang, Chong Deng, ShiLiang Zhang, Bing Li, Wei Wang, and Xin Cao. 2022. MDERank: A masked document embedding rank approach for unsupervised keyphrase extraction.
In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 396–409, Dublin, Ireland.
Association for Computational Linguistics.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
8
✓ A2. Did you discuss any potential risks of your work?
8
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 5
✓ B1. Did you cite the creators of artifacts you used?
5
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
5
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
5
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
5
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
5
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
5
## C ✓ **Did You Run Computational Experiments?** 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
5 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
5
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
5
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
zhang-etal-2023-diffusion | Diffusion Theory as a Scalpel: Detecting and Purifying Poisonous Dimensions in Pre-trained Language Models Caused by Backdoor or Bias | https://aclanthology.org/2023.findings-acl.157 | Pre-trained Language Models (PLMs) may be poisonous with backdoors or bias injected by the suspicious attacker during the fine-tuning process. A core challenge of purifying potentially poisonous PLMs is precisely finding poisonous dimensions. To settle this issue, we propose the Fine-purifying approach, which utilizes the diffusion theory to study the dynamic process of fine-tuning for finding potentially poisonous dimensions. According to the relationship between parameter drifts and Hessians of different dimensions, we can detect poisonous dimensions with abnormal dynamics, purify them by resetting them to clean pre-trained weights, and then fine-tune the purified weights on a small clean dataset. To the best of our knowledge, we are the first to study the dynamics guided by the diffusion theory for safety or defense purposes. Experimental results validate the effectiveness of Fine-purifying even with a small clean dataset. | # Diffusion Theory As A Scalpel: Detecting And Purifying Poisonous Dimensions In Pre-Trained Language Models Caused By Backdoor Or Bias
Zhiyuan Zhang1,2, Deli Chen2, Hao Zhou2, Fandong Meng2, Jie Zhou2**, Xu Sun**1 1National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University 2Pattern Recognition Center, WeChat AI, Tencent Inc., China
{zzy1210,xusun}@pku.edu.cn
{delichen,tuxzhou,fandongmeng,withtomzhou}@tecent.com
## Abstract
Pre-trained Language Models (PLMs) may be poisonous with backdoors or bias injected by the suspicious attacker during the fine-tuning process. A core challenge of purifying potentially poisonous PLMs is precisely finding poisonous dimensions. To settle this issue, we propose the Fine-purifying approach, which utilizes the diffusion theory to study the dynamic process of fine-tuning for finding potentially poisonous dimensions. According to the relationship between parameter drifts and Hessians of different dimensions, we can detect poisonous dimensions with abnormal dynamics, purify them by resetting them to clean pretrained weights, and then fine-tune the purified weights on a small clean dataset. To the best of our knowledge, we are the first to study the dynamics guided by the diffusion theory for safety or defense purposes. Experimental results validate the effectiveness of Fine-purifying even with a small clean dataset.
## 1 Introduction
In the Natural Language Processing (NLP) domain, Pre-trained Language Models (PLMs) (Peters et al.,
2018; Devlin et al., 2019; Liu et al., 2019; Radford et al., 2019; Raffel et al., 2019; Brown et al., 2020) have been widely adopted and can be finetuned and applied in many typical downstream tasks (Wang et al., 2019; Maas et al., 2011; Blitzer et al., 2007). However, the safety of fine-tuned PLMs cannot be guaranteed, since the fine-tuning process is invisible to the user. Therefore, Finetuned PLMs are vulnerable to backdoors (Gu et al.,
2019) and bias (Zhang et al., 2021), which can be injected into PLMs during the fine-tuning process via data poisoning (Muñoz-González et al., 2017; Chen et al., 2017) maliciously or unconsciously.
Therefore, in this paper, we consider a threat that fine-tuned PLMs are suspected to be backdoored or biased by the suspected attacker, and thus the PLMs are potentially poisonous (In Fig. 2 and Sec. 3). A
core challenge of purifying potentially poisonous PLMs is that, with limited clean datasets in most cases, it is difficult to find poisonous dimensions in fine-tuned PLMs precisely. To settle this issue, we propose a strong defense approach, **Fine-purifying**,
to detect potentially poisonous utilizing the diffusion theory1as a scalpel. To study the fine-tuning dynamics and detect poisonous dimensions, we utilize the diffusion theory (Mandt et al., 2017) to establish a relationship between parameter drifts and clean Hessians (the second-order partial derivatives of the loss function on clean data) and characterize the fine-tuning dynamics on clean dimensions with an indicator. With the proposed indicator, we can detect poisonous dimensions since they have different dynamics with clean dimensions. Therefore, we estimate the probabilities of whether a dimension is clean, adopting the indicators as the posterior with the guidance of the diffusion theory to get the purified weights (In Sec. 4.1), which is the highlight of our approach. Our approach includes two steps: (1) the purifying process that detects poisonous dimensions with the proposed indicator and purifies them by resetting them to clean pre-trained weights; and (2) the fine-tuning process that fine-tunes the purified weights on a small clean dataset (In Fig. 2 and Sec. 4).
Existing mitigation-based defenses (Yao et al.,
2019; Liu et al., 2018) in Computer Vision (CV)
domain do not utilize clean pre-trained weights, and thus the defense performance is not competitive in NLP tasks with pre-trained PLMs available.
The existing state-of-the-art defense in NLP, Finemixing (Zhang et al., 2022a) randomly mixes the initial pre-trained and attacked fine-tuned weights.
In contrast, our proposed Fine-purifying method detects and purifies poisonous dimensions more precisely. Besides, Fine-mixing requires access to the initial clean pre-trained weights, which may be 1In this paper, the term "diffusion" refers to the diffusion theory and is not related to diffusion models.
![1_image_0.png](1_image_0.png)
difficult when the defender is not sure about the version of the initial weights or does not have access, while we can replace the initial weights with other pre-trained PLM versions in Fine-purifying
(analyzed in Sec. 6.3).
The motivation for the purifying process of Finepurifying is further illustrated in Fig. 1. Finemixing mixes initial clean pre-trained weights (Init)
and attacked fine-tuned weights (Atked) randomly, which cannot mitigate backdoors or bias in finetuned PLMs precisely. Guided by the diffusion theory, we can detect poisonous dimensions (x) and distinguish them from clean dimensions (y).
Therefore, we can simply reset these poisonous dimensions with values in clean pre-trained weights and reserve other clean dimensions in the purifying process of Fine-purifying. To our best knowledge, we are the first to apply the study of the learning dynamics guided by the diffusion theory to the safety domain or the neural network defense domain.
To summarize, our main contributions are:
- We are the first to study the fine-tuning dynamics guided by the diffusion theory to distinguish clean and poisonous dimensions in suspicious poisonous fine-tuned PLMs, which is a common challenge in both backdoor and bias attacks conducted during fine-tuning.
- We propose a strong defense approach, Finepurifying, for purifying potential poisonous fine-tuned PLMs, which reserves clean dimen-
sions and resets poisonous dimensions to the initial weights. Experimental results show that Fine-purifying outperforms existing defense methods and can detect poisonous dimensions more precisely.
## 2 Background And Related Work
In this paper, we focus on defending against backdoor and bias attacks in the fine-tuned PLMs guided by the diffusion theory. Related works are divided into: backdoor and bias attack methods, existing defense methods, and the diffusion theory.
## 2.1 Backdoor And Bias Attacks
Backdoor attacks (Gu et al., 2019) are first studied in CV applications, such as image recognition (Gu et al., 2019), video recognition (Zhao et al., 2020b), and object tracking (Li et al., 2022).
Backdoors can be injected with the data poisoning approach (Muñoz-González et al., 2017; Chen et al., 2017). In the NLP domain, Dai et al. (2019)
introduced inject backdoors into LSTMs with the trigger sentence. Zhang et al. (2021), Yang et al.
(2021a) and Yang et al. (2021b) proposed to inject backdoors or biases during the fine-tuning process into PLMs with the trigger word.
Ethics concerns (Manisha and Gujar, 2020) also raised serious threats in NLP, such as bias (Park and Kim, 2018), inappropriate contents (Yenala et al., 2018), offensive or hateful contents (Pitsilis et al., 2018; Pearce et al., 2020). We adopt the term
"bias" to summarize them, which can be injected into PLMs via data poisoning (Muñoz-González et al., 2017; Chen et al., 2017) consciously (Zhang et al., 2021) or unconsciously.
## 2.2 Backdoor And Debiasing Defense
Existing defense approaches for backdoor and debiasing defenses include robust learning methods (Utama et al., 2020; Oren et al., 2019; Michel et al., 2021) in the learning process, detectionbased methods (Chen and Dai, 2021; Qi et al.,
2020; Gao et al., 2019; Yang et al., 2021b) during test time, mitigation-based methods (Yao et al.,
2019; Li et al., 2021b; Zhao et al., 2020a; Liu et al.,
2018; Zhang et al., 2022a), and distillation-based methods (Li et al., 2021b), etc. We mainly focus on the state-of-the-art mitigation-based defenses, in which Fine-mixing (Zhang et al., 2022a) is the best practice that purifies the fine-tuned PLMs utilizing the initial pre-trained PLM weights.
![2_image_0.png](2_image_0.png)
## 2.3 Diffusion Theory And Diffusion Model
The theory of the diffusion process was first proposed to model the Stochastic Gradient Descent
(SGD) dynamics (Sato and Nakagawa, 2014). The diffusion theory revealed the dynamics of SGD (Li et al., 2019; Mandt et al., 2017) and showed that SGD flavors flat minima (Xie et al., 2021).
Based on the diffusion process, Sohl-Dickstein et al. (2015) proposed a strong generative model, the Diffusion model, adopting nonequilibrium thermodynamics in unsupervised learning. Ho et al.
(2020) proposed Denoising Diffusion Probabilistic Models (DDPM) for better generation. Diffusion models that can be used in text-image generation (Ramesh et al., 2022) and image synthesis tasks (Dhariwal and Nichol, 2021).
In this paper, we only focus on the diffusion theory and estimate probabilities that a dimension is clean in Fine-purifying with it. The term "diffusion" only refers to the diffusion theory.
## 3 Preliminary
In this section, we introduce basic notations, the threat model, and assumptions in this work.
## 3.1 Notations
Models and Parameters. For a Pre-trained Language Model (PLM) with d parameters, w ∈ R
d denotes its parameters, and wi (1 ≤ i ≤ d) denotes the i-th parameter; w Init denotes the initial pretrained weights; w FT denotes fine-tuned weights suspected to be poisonous (backdoored or biased by the suspicious attacker). The updates during the fine-tuning process are δ = w FT − w Init.
Datasets and Training. Suppose DAtk denotes the dataset suspected to be poisonous for fine-tuning used by the suspicious attacker; DClean denotes a small clean dataset for the defender to purify the fine-tuned model. DAtk consists of clean data with similar distributions to DClean and poisonous data DPoison. Suppose the ratio of poisonous data is λ.
L(w; D) denotes loss of parameters w on dataset D; ∇wL(w; D) denotes the gradient; and H(D)
denotes the Hessian on D.
## 3.2 Threat Model
As illustrated in Fig. 2, the defender aims to purify the fine-tuned model with weights w FT that is suspected to be poisonous (backdoored or biased by the attacker) while reducing its clean performance drop. The full clean dataset or the attacker's dataset DAtk are not available, the defender only has access to a small clean dataset DClean. Some existing mitigation methods, Fine-tuning (Yao et al., 2019) or Fine-pruning (Liu et al., 2018), require no extra resources. Distillation-based methods (Li et al.,
2021b) need another small clean teacher model. In the NLP field, Fine-mixing (Zhang et al., 2022a)
requires access to the initial clean pre-trained language model w Init.
However, we allow replacing w Init with the weights of another version of the clean model with the same model architecture and size as the initial pre-trained model. In realistic, it is more practical for the defender to download another version of the clean model from the public official repository when the defender: (1) is not sure about the version of the pre-trained language model adopted by the
![3_image_0.png](3_image_0.png)
attacker; or (2) does not have access to the initial clean model. The reasonability of replacing the initial clean model with another version of the clean model is discussed in Sec. 6.3.
## 3.3 Assumptions
Following existing works (Li et al., 2019; Xie et al.,
2021), we assume that (1) the learning dynamics of fine-tuning parameter w from w Init to w FT on dataset DAtk by the attacker is a classic diffusion process (Sato and Nakagawa, 2014; Mandt et al.,
2017; Li et al., 2019) with Stochastic Gradient Noise (SGN); and (2) there exist clean dimensions C and poisonous dimensions P, and poisonous attacks are mainly conducted on poisonous dimensions P. The reasonability and detailed versions of Assumptions are deferred in Appendix A.
## 4 The Proposed Approach
The proposed Fine-purifying approach (illustrated in Fig. 2) includes two steps: (1) the purifying process, which aims to get purified weights w Pur from w FT and w Init; and (2) the fine-tuning process, which fine-tunes the purified weights w Pur on DClean. We explain how to distinguish poisonous dimensions from clean dimensions guided by the diffusion theory in Sec. 4.1, introduce the overall pipeline implementation in Sec. 4.2, and compare Fine-purifying with existing methods in Sec. 4.3.
## 4.1 Purifying Guided By Diffusion Theory
In the proposed Fine-purifying approach, the core challenge is to detect and purify poisonous dimensions precisely. The target of the purifying process is to reverse clean dimensions and purify poisonous dimensions. We detect poisonous dimensions with a proposed indicator guided by the diffusion theory.
The Target of Purifying Process. In the purifying process, intuitively, we could reverse the fine-tuned weights and set the target w Target i = w FT
ifor clean dimensions, while setting the target w Target i = w Init i for poisonous dimensions. Therefore, the purifying objective is:
$$w_{i}^{\mathrm{{Pur}}}=\operatorname*{arg\,min}_{w_{i}}\mathbb{E}[(w_{i}-w_{i}^{\mathrm{{Target}}})^{2}],\qquad(1)$$
here E[(wi−w Target i)
2] = p(i *∈ P|*i)(wi−w Init i)
2+
p(i *∈ C|*i)(wi − w FT
i)
2, and the solution is:
$$w_{i}^{\mathrm{Pur}}=w_{i}^{\mathrm{Init}}+p(i\in{\mathcal{C}}|i)\delta_{i}.\qquad\qquad(2)$$
Estimating p(i *∈ C|*i) **with Diffusion Theory.**
In the classical diffusion theory assumptions (Xie et al., 2021), the Hessian is diagonal and we have E[δ 2 i
] ∼ Hi(DAtk). Since DAtk is unavailable, we consider an indicator ri =δ 2 i Hi(DClean)
to characterize the fine-tuning dynamics. On poisonous dimensions, Hi(DAtk) varies with Hi(DClean) and the indicator riis abnormal. It implies that we can utilize the indicator ri as the posterior to estimate p(i *∈ C|*i) that p(i *∈ C|*i) = p(i *∈ C|*ri).
Guided by the diffusion theory (Mandt et al.,
2017) and motivated by Xie et al. (2021), we give ri distributions on clean and poisonous dimensions in Theorem 1. As shown in Fig. 3, ri can be utilized to distinguish clean and poisonous dimensions (Subfig a, b) and ri on them obey two Gamma distributions (Subfig b), which accords to Theorem 1.
Theorem 1 (Gamma Distributions of ri). *If the dynamics of the suspicious attacker's fine-tuning process can be modeled as a diffusion process,* ri on clean and poisonous dimensions obey Gamma dis-
Algorithm 1 The Fine-purifying Approach
Require: Weights w Init, w FT; dataset DClean; ρ.
1: Step (1): the purifying process:
2: Calculate δi = w FT
i − w Init i.
3: Estimate indicators ri =δ 2 i Hi(DClean)
.
4: Estimate p(i *∈ C|*i) = p(i *∈ C|*ri) with ri according to Eq.(4) and Eq.(5).
5: Get w Pur i = w Init i + p(i *∈ C|*i)δi (Eq.(2)).
6: Step (2): the fine-tuning process:
7: Fine-tune w Pur on dataset DClean.
tributions with scales 2kC and 2kP*, respectively:*
$$r_{i}={\frac{\delta_{i}^{2}}{H_{i}({\mathcal{D}}^{C l e a n})}}\sim\left\{\begin{array}{l}{{\Gamma({\frac{1}{2}},2k_{\mathcal{C}}),i\in{\mathcal{C}}}}\\ {{\Gamma({\frac{1}{2}},2k_{\mathcal{P}}),i\in{\mathcal{P}}}}\end{array}\right.,\tag{3}$$
where kC = Ei∈C[ri] and kP = Ei∈P[ri] =
Ei∈P[
λkCHi(D*Poson*)
(1−λ)Hi(D*Clean*)
] ≫ kC *are independent to* i.
According to Theorem 1, we can use Gamma distributions to estimate f(ri|i ∈ C) = f(ri|ri ∼
Γ( 12
, 2kC)) and f(ri|i ∈ P) = f(ri|ri ∼
Γ( 12
, 2kP)). Therefore, p(i *∈ C|*ri) can be calculated with the posterior likelihood ℓi =
p(i∈C|ri)
p(i∈P|ri) =
f(ri|i∈C)p(i∈C)
f(ri|i∈P)p(i∈P)
according to Bayes Theorem:
$$p(i\in{\cal C}|r_{i})=\frac{\ell_{i}}{\ell_{i}+1},\tag{4}$$ $$\ell_{i}=\frac{\rho}{1-\rho}\sqrt{\frac{k_{\cal P}}{k_{\cal C}}}\exp(-\frac{r_{i}}{2}(\frac{1}{k_{\cal C}}-\frac{1}{k_{\cal P}})),\tag{5}$$
where ρ is determined by the prior p(i ∈ C) = ρ.
p(i *∈ C|*ri) is also illustrated in Subfig c in Fig. 3.
## 4.2 Overall Pipeline Implementation
We introduce the detailed overall pipeline implementation in this section. The pseudo-code of the Fine-purifying pipeline is shown in Algorithm 1.
In the requirement of Algorithm 1, if initial weights w Init are not available, we access another clean model with the same model architecture and size from the public official repository to replace w Init. In our proposed Fine-purifying approach, similar to Fine-pruning and Fine-mixing, we set a hyperparameter ρ ∈ [0, 1] to control the purifying strength in the purifying process: higher ρ means reserve more knowledge from fine-tuned weights w FT. In Fine-purifying, the meaning of hyperparameter ρ is the prior p(i ∈ C) = ρ.
In line 3 in Algorithm 1, Hi(DClean) is estimated with the Fisher information matrix (Pascanu and Bengio, 2014), namely Hi(DClean)|w ≈
EDClean [(∇wiL(w; (*x, y*)))2]. The Hi(DClean) are averaged with the fourth order Runge-Kutta method (Runge, 1895), namely Simpson's rule, on the path from w FT to w Init.
In line 4 in Algorithm 1, to estimate kC and kP in Eq.(5), we first treat [ρd] dimensions with small indicators ri as clean dimensions C1 and other dimensions as poisonous dimensions P1. Then we estimate kC and kP with kC = Ei∈C[ri] ≈
Ei∈C1
[ri], kP = Ei∈P[ri] ≈ Ei∈P1
[ri].
Other details are deferred in Appendix B.
## 4.3 Comparison To Existing Defenses
Existing defenses, including Fine-tuning, Finepruning, and Fine-mixing, vary with the two-step Fine-purifying in the purifying process.
The Fine-tuning defense (Yao et al., 2019)
does not contain the purifying process. In Finepruning (Liu et al., 2018), the purifying process conducts a pruning on w FT without the guidance of w Init, which leads to poor defense performance in NLP tasks with pre-trained PLMs available.
In Fine-mixing (Zhang et al., 2022a), the purified or mixed weights in the purifying process are w Mix i = w FT
i + miδi, where miis randomly sampled in {0, 1} with mi ∼ Bernoulli(ρ) and E[w Mix i] = w FT
i + ρδi. The expected purified or mixed weights of Fine-mixing are equivalent to adopting p(i *∈ C|*i) = ρ in Eq.(2) in Finepurifying. We call this variant Fine-mixing (soft),
which ignores the posterior of riin Fine-purifying.
## 5 Experiments
In this section, we first introduce experimental setups and then report the main results. Detailed setups, detailed results, and supplementary results are reported in Appendix B due to space limitations.
## 5.1 Experimental Setups
We include four datasets in our experiments: two single-sentence classification tasks, including a news classification dataset, **AgNews** (Zhang et al.,
2015), and a movie reviews sentiment classification dataset, **IMDB** (Maas et al., 2011); and two sentence-pair classification tasks in GLUE (Wang et al., 2019), including QQP (Quora Question Pairs) and **QNLI** (Question-answering NLI)
datasets. We sample 2400 test samples for every
| Model | Attack | Before | Fine-tuning | Fine-pruning | Fine-mixing | Fine-purifying | | | | | |
|----------|----------|----------|---------------|----------------|---------------|------------------|-------|-------|-------|-------|-------|
| Backdoor | ACC | ASR | ACC | ASR | ACC | ASR | ACC | ASR | ACC | ASR | |
| BERT | BadWord | 91.36 | 98.65 | 90.65 | 98.60 | 86.39 | 90.48 | 84.66 | 39.75 | 85.62 | 31.82 |
| BadSent | 91.62 | 98.60 | 90.41 | 98.66 | 86.36 | 74.21 | 85.03 | 52.07 | 85.64 | 25.78 | |
| RoBERTa | BadWord | 92.44 | 98.92 | 91.12 | 97.46 | 87.50 | 91.17 | 86.39 | 18.12 | 86.64 | 17.56 |
| BadSent | 92.24 | 98.98 | 91.36 | 98.92 | 86.41 | 62.53 | 86.11 | 35.97 | 86.85 | 19.20 | |
| Bias | ACC | BACC | ACC | BACC | ACC | BACC | ACC | BACC | ACC | BACC | |
| BERT | BiasWord | 91.27 | 43.75 | 90.84 | 43.75 | 86.05 | 61.57 | 84.72 | 76.45 | 85.38 | 85.06 |
| BiasSent | 91.44 | 43.75 | 90.83 | 43.75 | 85.48 | 64.38 | 84.81 | 75.26 | 85.63 | 84.03 | |
| RoBERTa | BiasWord | 92.38 | 43.75 | 91.30 | 43.75 | 87.09 | 64.65 | 85.92 | 81.79 | 86.42 | 86.30 |
| BiasSent | 92.14 | 43.75 | 91.60 | 44.06 | 86.69 | 76.43 | 86.02 | 77.73 | 86.71 | 84.11 | |
dataset and truncate each sample into 384 tokens.
For defenses, the size of DSmall is 8 samples in every class. We adopt two pre-trained language models, BERT-base-cased (Devlin et al., 2019) and RoBERTa-base (Liu et al., 2019), based on the HuggingFace implementation (Wolf et al., 2020) and follow the default settings unless stated. We adopt the Adam (Kingma and Ba, 2015) optimizer with a learning rate of 2×10−5and a batch size of 8. The attacker fine-tunes for 30000 steps and the defender fine-tunes the purified PLMs for 100 steps. The result for every trial is averaged on 3 seeds.
We implement four attacks: BadWord, BadSent, **BiasWord** and **BiasSent**. Word or Sent denotes trigger word-based or trigger sentence-based attacks. Bad or Bias denotes backdoor attacks based on BadNets or bias attacks that inject cognitive bias into fine-tuned PLMs. We evaluate clean accuracy (ACC) and backdoor attack success rate (ASR, **lower** ASR is better) for backdoor attacks, and evaluate clean accuracy (ACC) and biased accuracy (BACC, **higher** BACC is better)
for bias attacks. We compare **Fine-purifying** with other mitigation-based defenses, including **Finetuning** (Yao et al., 2019), **Fine-pruning** (Liu et al.,
2018) and **Fine-mixing** (Zhang et al., 2022a). We also compare **Fine-purifying** with two distillationbased defenses (Li et al., 2021b), KD (Knowledge Distillation) and NAD (Neural Attention Distillation), and two detection-based defenses, ONION (Qi et al., 2020) and RAP (Yang et al.,
2021b).
## 5.2 Main Results
Fig. 4 visualizes the trade-off between the drops of clean accuracies (Delta ACC) and purifying performance (lower ASR denotes better purifying in backdoor attacks) for mitigation methods. When
![5_image_0.png](5_image_0.png)
ρ decreases, namely the purifying strengths increase, Delta ACCs increase, and ASRs decrease.
Fine-purifying has lower ASRs than Fine-mixing and Fine-pruning with all Delta ACCs. Therefore, Fine-purifying outperforms Fine-mixing and Finepruning. Besides, we set the threshold Delta ACC as 5 for single-sentence tasks and 10 for sentencepair tasks. For a fair comparison, we report results with similar Delta ACCs for different defenses.
Comparisons with Existing Mitigation-Based Defenses. Average results on four datasets of Finepurifying and other existing mitigation-based defenses (Fine-tuning/pruning/mixing) are reported in Table 1. We can see that four defenses sorted from strong to weak in strength are: Fine-purifying, Fine-mixing, Fine-pruning, and Fine-tuning. In Table 2, we can see Fine-purifying outperforms Fine-mixing in nearly all cases. To conclude, Finepurifying outperforms other baseline defenses.
Supplementary Results. The conclusions that our proposed Fine-purifying outperforms existing defenses are consistent under different training sizes and threshold Delta ACCs. Supplementary results are reported in Appendix C.
Dataset Model Backdoor Fine-mixing Fine-purifying Bias Fine-mixing Fine-purifying
Attack ACC ASR ACC ASR Pattern ACC BACC ACC BACC
AgNewsBERT BadWord 90.17 12.32 90.86 **3.30** BiasWord 80.45 89.36 90.38 **90.00**
BadSent 90.40 32.37 91.13 **23.69** BiasSent 90.25 87.13 90.94 **88.00**
RoBERTa BadWord 90.49 **15.02** 91.10 17.37 BiasWord 90.11 89.00 89.86 **89.93**
BadSent 90.29 23.98 90.79 **5.72** BiasSent 90.31 69.07 90.35 **87.24**
IMDBBERT BadWord 88.97 **39.14** 88.89 42.53 BiasWord 88.50 77.88 88.74 **87.20**
BadSent 89.58 43.42 88.94 **25.61** BiasSent 88.83 84.36 88.92 **88.78**
RoBERTa BadWord 90.96 14.64 90.96 **8.97** BiasWord 90.35 89.38 90.69 **90.26**
BadSent 90.33 13.78 90.40 **9.42** BiasSent 88.83 84.36 88.92 **88.78**
QQPBERT BadWord 77.18 73.61 78.29 **60.97** BiasWord 77.36 58.76 78.58 **80.04**
BadSent 77.75 85.75 77.89 **30.81** BiasSent 77.93 57.68 79.73 **78.76**
RoBERTa BadWord 80.28 **18.20** 80.10 22.87 BiasWord 79.14 66.13 79.72 **79.97**
BadSent 79.99 84.08 80.76 **42.53** BiasSent 79.96 69.13 80.10 **72.83**
QNLIBERT BadWord 82.29 33.95 84.43 **20.50** BiasWord 82.56 79.82 83.82 **83.01**
BadSent 82.39 46.75 84.60 **23.03** BiasSent 82.21 71.89 82.89 **80.57**
RoBERTa BadWord 83.82 24.64 84.40 **21.25** BiasWord 84.07 82.67 85.39 **85.01**
BadSent 83.85 22.03 85.46 **19.14** BiasSent 82.78 81.89 84.96 **85.00**
AverageBERT BadWord 84.66 39.75 85.62 **31.82** BiasWord 84.72 76.45 85.38 **85.06**
BadSent 85.03 52.07 85.64 **25.78** BiasSent 84.81 75.26 85.63 **84.03**
RoBERTa BadWord 86.39 18.12 86.64 **17.56** BiasWord 85.92 81.79 86.42 **86.30**
BadSent 86.11 35.97 86.85 **19.20** BiasSent 86.02 77.73 86.71 **84.11**
Table 2: Comparisons of Fine-mixing and Fine-purifying. The best purification results are marked in **bold**.
## 5.3 Ablation Study
We conduct an ablation study to verify the effectiveness of the proposed indicator ri =
δ 2 i Hi(DClean)
. We replace the indicator with multiple variants: random values (Fine-mixing), constant values (Finemixing (soft)), ri = δ 2 i
(Delta) and ri =1 Hi(DClean)
(Hessian). The results are in Table 3.
Comparison to Other Indicators. We can see that Fine-purifying with the proposed indicator outperforms other variants, which is consistent with our theoretical results guided by the diffusion theory.
Analytical Experiment Settings. To validate the ability to detect poisonous dimensions, we conduct analytical experiments with Embedding Poisoning
(EP) (Yang et al., 2021a) attack, whose ground truth poisonous dimensions P are trigger word embeddings. We sort indicators {rk}
d k=1 and calculate
| Defense | ACC | ASR | MR% H@1% H@1‰ | |
|--------------------|-------------------------|-------|-----------------|----|
| Before | 91.92 98.79 | - | - | - |
| Fine-purifying | 86.19 23.60 0.06% 98.7% | 97.7% | | |
| Fine-mixing | 85.55 36.48 50.0% | 1.0% | 0.1% | |
| Fine-mixing (soft) | 85.50 35.89 50.0% | 1.0% | 0.1% | |
| 2 | | | | |
| Delta: ri = δ i | 85.79 38.10 0.98% | 95.4% | 94.8% | |
| −1 | | | | |
| Hessian: ri = H i | 89.71 63.28 8.88% | 0.0% | 0.1% | |
MR% (Mean Rank Percent), **H@1%** (Hit at 1%),
and **H@1‰** (Hit at 1‰):
$$\mathbf{MR\%}=\mathbb{E}_{i\in\mathcal{P}}[{\frac{\mathrm{Rank~of~}r_{i}}{d}}\times100\%],\tag{6}$$ $$\mathbf{H\%}=P_{i\in\mathcal{P}}(r_{i}\text{is top1\%}),\tag{7}$$ $$\mathbf{H\%}=P_{i\in\mathcal{P}}(r_{i}\text{is top1\%}e).\tag{8}$$
Performance of Analytical Experiments. In Table 3, we can conclude that Fine-mixing and Finemixing (soft) randomly mix all dimensions and cannot detect poisonous dimensions, resulting in poor performance in detecting poisonous dimensions. The proposed indicator has the lowest MR%
and the highest H@1% or H@1‰. Therefore, Finepurifying with the proposed indicator can detect poisonous dimensions precisely, which is consistent with the diffusion theory and validates that the competitive performance of Fine-purifying comes from better detecting abilities.
## 6 Further Analysis
We conduct further analysis in this section. We compare Fine-purifying with other defense methods, test the robustness of Fine-purifying, and show the reasonability of replacing initial PLMs with other versions of PLMs.
## 6.1 Comparisons With Other Defenses
We compare Fine-purifying with two distillationbased defenses (Li et al., 2021b), KD (Knowl-
Backdoor Model Before KD NAD ONION RAP Fine-purifying
Attack ACC ASR ACC ASR ACC ASR ACC ASR ACC ASR ACC ASR
BadWord BERT 91.36 98.65 91.22 98.75 91.59 98.65 87.35 **12.78** 89.02 22.98 85.62 31.83
RoBERTa 92.44 98.92 92.04 97.92 92.25 98.96 86.44 **12.48** 89.95 21.34 86.64 17.59
BadSent BERT 91.63 98.60 90.98 98.69 91.35 98.67 87.42 82.51 89.20 79.98 85.64 **25.78**
RoBERTa 92.24 98.98 91.72 98.94 91.97 98.94 86.72 84.85 89.69 97.78 86.85 **19.20**
Average BERT 91.49 98.63 91.10 98.72 91.47 98.66 87.39 47.65 89.11 51.48 85.53 **28.80**
RoBERTa 92.34 98.95 91.88 98.43 92.11 98.95 86.58 48.67 89.82 59.56 86.75 **18.40**
Bias Model Before KD NAD ONION RAP Fine-purifying
Attack ACC BACC ACC BACC ACC BACC ACC BACC ACC BACC ACC BACC
BiasWord BERT 91.27 43.75 90.57 43.76 91.18 44.82 87.12 75.14 88.79 **88.69** 85.38 85.06
RoBERTa 92.38 43.75 92.01 43.75 92.17 43.91 86.42 76.80 89.98 **88.73** 86.42 86.30
BiasSent BERT 91.44 43.75 91.03 43.75 91.66 44.65 87.82 58.65 89.40 66.47 85.63 **84.03**
RoBERTa 92.14 43.75 91.93 43.75 92.08 43.78 86.37 50.26 89.13 54.61 86.71 **84.11**
Average BERT 91.35 43.75 90.80 43.76 91.42 44.73 87.50 66.89 89.09 77.58 85.50 **84.55**
RoBERTa 92.26 43.75 91.97 43.75 92.13 43.84 86.40 63.53 89.55 71.67 86.56 **85.20**
edge Distillation) and NAD (Neural Attention Distillation), and two detection-based defenses, ONION (Qi et al., 2020) and RAP (Yang et al.,
2021b). Results are in Table 4.
Comparisons with Distillation-Based Defenses.
Following Li et al. (2021b), we set a heavy distillation regularization β = 105 on KD and NAD. We adopt clean fine-tuned PLMs as the teacher models. Even when the size of clean data utilized in distillation reaches 256 samples/class, we can see distillation-based defenses are weak defenses and Fine-purifying outperforms them in Table 4.
Comparisons with Detection-Based Defenses. In Table 4, the defense performance of Fine-purifying is better than Detection-based defenses in most cases, especially on trigger sentence-based attacks.
Detection-based defenses usually utilize an extra clean language model to filter possible lowfrequency trigger words in the input and do not fine-tune the poisoned PLM weights. Therefore, they have lower ACC drops than Fine-purifying but can only outperform Fine-purifying on some trigger word-based attacks.
## 6.2 Robustness To Other Attacks
In this section, we test the robustness of Finepurifying to existing sophisticated backdoor attacks and adaptive attacks. Results are in Table 5.
Robustness to Existing Sophisticated Attacks.
We implement three existing sophisticated attacks:
Layerwise weight poisoning (**Layerwise**) (Li et al., 2021a), Embedding Poisoning (EP) (Yang et al., 2021a) and Syntactic trigger-based attack
(**Syntactic**) (Qi et al., 2021). We can conclude that Table 5: Average results on under backdoor attacks.
## Fine-Purifying Is Robust To These Attacks.
Robustness to Adaptive Attacks. Since Finepurifying finds poisonous dimensions according to the indicators, attacks that are injected with small weight perturbations and bring fewer side effects are hard to detect and can act as adaptive attacks.
We adopt three potential adaptive attacks: Elastic Weight Consolidation (EWC) (Lee et al., 2017),
Neural Network Surgery (**Surgery**) (Zhang et al.,
2021) and Logit Anchoring (**Anchoring**) (Zhang et al., 2022b). Results show that Fine-purifying is not vulnerable to potential adaptive attacks.
| Backdoor | Fine-mixing | Fine-purifying | | |
|---------------|---------------|------------------|-------|-------|
| Attack | ACC | ASR | ACC | ASR |
| BadWord | 85.53 28.94 | 86.13 | 24.71 | |
| Layerwise | 84.62 21.11 | 85.81 | 13.55 | |
| Sophisticated | EP | 85.14 17.67 | 86.14 | 11.49 |
| Attacks | Syntactic | 87.10 25.42 | 87.54 | 21.21 |
| Adaptive | EWC | 82.21 27.42 | 83.42 | 19.25 |
| Surgery | 76.44 32.75 | 74.47 | 26.96 | |
| Attacks | Anchoring | 86.27 19.96 | 88.10 | 14.67 |
| Model | Defense | Backdoor | Bias |
|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------|-------------------------|----------|
| PLM weights | ACC | ASR | ACC BACC |
| BERT | Fine-mixing | 84.84 45.91 84.76 75.86 | |
| +Initial PLM | Fine-purifying 85.53 28.80 85.50 84.55 | | |
| BERT | Fine-mixing | 84.73 43.71 84.66 76.70 | |
| +Another PLM Fine-purifying 85.84 26.54 85.41 83.90 RoBERTa Fine-mixing 86.25 27.04 85.97 79.76 +Initial PLM Fine-purifying 86.75 18.40 86.56 85.20 RoBERTa Fine-mixing 85.99 39.47 85.85 78.67 +Another PLM Fine-purifying 86.77 26.98 86.24 85.42 | | | |
![8_image_0.png](8_image_0.png)
## 6.3 **Replacing Initial Plms With Other Plms**
When the defender is not sure about the version of the initial clean PLMs of the attacker or does not have access to the initial clean PLM, we replace w Init with other version PLMs. We adopt LegalRoBERTa-base and BERT-base-cased-finetunedfinBERT. In Table 6, we can see that the purifying performance is similar to other PLMs, which validates the reasonability of replacing initial weights.
The reason lies in that the differences between different PLMs only influence the clean or attack patterns a little but mainly influence other orthogonal patterns, such as language domains or styles.
As shown in Fig. 5, various versions of PLMs (denoted as PLM) nearly locate in Γ⊥ since dis(PLM,
Γ⊥) ≪dis(PLM, Init), namely projections of differences in the clean or attack directions are small and the differences mainly lie in orthogonal directions.
## 7 Conclusion
In this paper, we propose a novel Fine-purifying defense to purify potentially poisonous PLMs that may be injected backdoors or bias by the suspicious attacker during fine-tuning. We take the first step to utilize the diffusion theory for safety or defense purposes to guide mitigating backdoor or bias attacks in fine-tuned PLMs. Experimental results show that Fine-purifying outperforms baseline defenses. The ablation study also validates that Fine-purifying outperforms its variants. Further analysis shows that Fine-purifying outperforms other distillationbased and detection-based defenses and is robust to other sophisticated attacks and potential adaptive attacks at the same time, which demonstrates that Fine-purifying can serve as a strong NLP defense
## Limitations
In this paper, we propose the Fine-purifying approach to purify fine-tuned Pre-trained Language Models (PLMs) by detecting poisonous dimensions and mitigating backdoors or bias contained in these poisonous dimensions. To detect poisonous dimensions in fine-tuned PLMs, we utilize the diffusion theory to study the fine-tuning dynamics and find potential poisonous dimensions with abnormal finetuning dynamics. However, the validity of our approach relies on assumptions that (1) backdoors or biases are injected during the fine-tuning process of PLMs; and (2) the fine-tuning process can be modeled as a diffusion process. Therefore, in cases where the assumptions do not hold, our approach cannot purify the fine-tuned PLMs. For example,
(1) backdoors or biases are contained in the initial PLM weights rather than being injected during the fine-tuning process; or (2) the fine-tuning process involves non-gradient optimization, such as zero-order optimization or genetic optimization, and thus cannot be modeled as a diffusion process.
## Ethics Statement
The proposed Fine-purifying approach can help enhance the security of the applications of fine-tuned Pre-trained Language Models (PLMs) in multiple NLP tasks. PLMs are known to be vulnerable to backdoor or bias attacks injected into PLMs during the fine-tuning process. However, with our proposed Fine-purifying approach, users can purify fine-tuned PLMs even with an opaque fine-tuning process on downstream tasks. To ensure safety, we recommend users download fine-tuned PLMs on trusted platforms, check hash checksums of the downloaded weights, apply multiple backdoor detection methods on the fine-tuned weights, and apply our proposed Fine-purifying approach to purify the potential poisonous fine-tuned PLMs. We have not found potential negative social impacts of Finepurifying so far.
## Acknowledgement
We appreciate all the thoughtful and insightful suggestions from the anonymous reviews. This work was supported in part by a Tencent Research Grant and National Natural Science Foundation of China
(No. 62176002). Xu Sun is the corresponding author of this paper.
## References
John Blitzer, Mark Dredze, and Fernando Pereira. 2007.
Biographies, Bollywood, boom-boxes and blenders:
Domain adaptation for sentiment classification. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 440–447, Prague, Czech Republic. Association for Computational Linguistics.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.
2020. Language models are few-shot learners. *CoRR*, abs/2005.14165.
Chuanshuai Chen and Jiazhu Dai. 2021. Mitigating backdoor attacks in lstm-based text classification systems by backdoor keyword identification. *Neurocomputing*, 452:253–262.
Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, and Dawn Song. 2017. Targeted backdoor attacks on deep learning systems using data poisoning. *CoRR*,
abs/1712.05526.
Jiazhu Dai, Chuanshuai Chen, and Yufeng Li. 2019. A
backdoor attack against lstm-based text classification systems. *IEEE Access*, 7:138872–138878.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA,
June 2-7, 2019, Volume 1 (Long and Short Papers),
pages 4171–4186.
Prafulla Dhariwal and Alexander Quinn Nichol. 2021.
Diffusion models beat gans on image synthesis. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 614, 2021, virtual, pages 8780–8794.
Yansong Gao, Change Xu, Derui Wang, Shiping Chen, Damith Chinthana Ranasinghe, and Surya Nepal.
2019. STRIP: a defence against trojan attacks on deep neural networks. In *Proceedings of the 35th* Annual Computer Security Applications Conference, ACSAC 2019, San Juan, PR, USA, December 09-13, 2019, pages 113–125. ACM.
Tianyu Gu, Kang Liu, Brendan Dolan-Gavitt, and Siddharth Garg. 2019. Badnets: Evaluating backdooring attacks on deep neural networks. *IEEE Access*,
7:47230–47244.
Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Keita Kurita, Paul Michel, and Graham Neubig. 2020.
Weight poisoning attacks on pre-trained models.
CoRR, abs/2004.06660.
Sang-Woo Lee, Jin-Hwa Kim, Jaehyun Jun, JungWoo Ha, and Byoung-Tak Zhang. 2017. Overcoming catastrophic forgetting by incremental moment matching. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 4652–4662.
Linyang Li, Demin Song, Xiaonan Li, Jiehang Zeng, Ruotian Ma, and Xipeng Qiu. 2021a. Backdoor attacks on pre-trained models by layerwise weight poisoning. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 3023–
3032. Association for Computational Linguistics.
Qianxiao Li, Cheng Tai, and Weinan E. 2019. Stochastic modified equations and dynamics of stochastic gradient algorithms I: mathematical foundations. J.
Mach. Learn. Res., 20:40:1–40:47.
Yige Li, Xixiang Lyu, Nodens Koren, Lingjuan Lyu, Bo Li, and Xingjun Ma. 2021b. Neural attention distillation: Erasing backdoor triggers from deep neural networks. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
Yiming Li, Haoxiang Zhong, Xingjun Ma, Yong Jiang, and Shu-Tao Xia. 2022. Few-shot backdoor attacks on visual object tracking. In *The Tenth International* Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net.
Kang Liu, Brendan Dolan-Gavitt, and Siddharth Garg.
2018. Fine-pruning: Defending against backdooring attacks on deep neural networks. In *Research in* Attacks, Intrusions, and Defenses - 21st International Symposium, RAID 2018, Heraklion, Crete, Greece, September 10-12, 2018, Proceedings, volume 11050 of *Lecture Notes in Computer Science*, pages 273–
294. Springer.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692.
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts.
2011. Learning word vectors for sentiment analysis.
In *Proceedings of the 49th Annual Meeting of the* Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics.
Stephan Mandt, Matthew D. Hoffman, and David M.
Blei. 2017. Stochastic gradient descent as approximate bayesian inference. *J. Mach. Learn. Res.*,
18:134:1–134:35.
Padala Manisha and Sujit Gujar. 2020. FNNC: achieving fairness through neural networks. In *Proceedings* of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, pages 2277–
2283. ijcai.org.
Paul Michel, Tatsunori Hashimoto, and Graham Neubig.
2021. Modeling the second player in distributionally robust optimization. In *9th International Conference* on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C.
Lupu, and Fabio Roli. 2017. Towards poisoning of deep learning algorithms with back-gradient optimization. In *Proceedings of the 10th ACM Workshop* on Artificial Intelligence and Security, AISec@CCS
2017, Dallas, TX, USA, November 3, 2017, pages 27–38. ACM.
Yonatan Oren, Shiori Sagawa, Tatsunori B. Hashimoto, and Percy Liang. 2019. Distributionally robust language modeling. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP
2019, Hong Kong, China, November 3-7, 2019, pages 4226–4236. Association for Computational Linguistics.
Jiyong Park and Jongho Kim. 2018. Fixing racial discrimination through analytics on online platforms:
A neural machine translation approach. In Proceedings of the International Conference on Information Systems - Bridging the Internet of People, Data, and Things, ICIS 2018, San Francisco, CA, USA, December 13-16, 2018. Association for Information Systems.
Razvan Pascanu and Yoshua Bengio. 2014. Revisiting natural gradient for deep networks. In *2nd International Conference on Learning Representations,*
ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings.
Will Pearce, Nick Landers, and Nancy Fulda. 2020.
Machine learning for offensive security: Sandbox classification using decision trees and artificial neural networks. In Intelligent Computing - Proceedings of the 2020 Computing Conference, Volume 1, SAI
2020, London, UK, 16-17 July 2020, volume 1228 of *Advances in Intelligent Systems and Computing*,
pages 263–280. Springer.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1
(Long Papers), pages 2227–2237.
Georgios K. Pitsilis, Heri Ramampiaro, and Helge Langseth. 2018. Detecting offensive language in tweets using deep learning. *CoRR*, abs/1801.04433.
Fanchao Qi, Yangyi Chen, Mukai Li, Zhiyuan Liu, and Maosong Sun. 2020. ONION: A simple and effective defense against textual backdoor attacks. *CoRR*,
abs/2011.10369.
Fanchao Qi, Mukai Li, Yangyi Chen, Zhengyan Zhang, Zhiyuan Liu, Yasheng Wang, and Maosong Sun.
2021. Hidden killer: Invisible textual backdoor attacks with syntactic trigger. In *Proceedings of the* 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 443–453. Association for Computational Linguistics.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Technical report, OpenAI.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. *CoRR*, abs/1910.10683.
Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical textconditional image generation with CLIP latents.
CoRR, abs/2204.06125.
Carl Runge. 1895. Über die numerische auflösung von differentialgleichungen. *Mathematische Annalen*,
46(2):167–178.
Issei Sato and Hiroshi Nakagawa. 2014. Approximation analysis of stochastic gradient langevin dynamics by using fokker-planck equation and ito process. In Proceedings of the 31st International Conference on Machine Learning, volume 32 of *Proceedings of Machine Learning Research*, pages 982–990, Bejing, China. PMLR.
Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. 2015. Deep unsupervised learning using nonequilibrium thermodynamics. In *Proceedings of the 32nd International*
Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of *JMLR Workshop and Conference Proceedings*, pages 2256–2265.
JMLR.org.
Prasetya Ajie Utama, Nafise Sadat Moosavi, and Iryna Gurevych. 2020. Towards debiasing NLU models from unknown biases. In *Proceedings of the 2020* Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 7597–7610. Association for Computational Linguistics.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019.
GLUE: A multi-task benchmark and analysis platform for natural language understanding. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019.
OpenReview.net.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Zeke Xie, Issei Sato, and Masashi Sugiyama. 2021. A
diffusion theory for deep learning dynamics: Stochastic gradient descent exponentially favors flat minima.
In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
Wenkai Yang, Lei Li, Zhiyuan Zhang, Xuancheng Ren, Xu Sun, and Bin He. 2021a. Be careful about poisoned word embeddings: Exploring the vulnerability of the embedding layers in NLP models. In *Proceedings of the 2021 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACLHLT 2021, Online, June 6-11, 2021, pages 2048–
2058. Association for Computational Linguistics.
Wenkai Yang, Yankai Lin, Peng Li, Jie Zhou, and Xu Sun. 2021b. RAP: robustness-aware perturbations for defending against backdoor attacks on NLP
models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 8365–
8381. Association for Computational Linguistics.
Yuanshun Yao, Huiying Li, Haitao Zheng, and Ben Y.
Zhao. 2019. Latent backdoor attacks on deep neural networks. In Proceedings of the 2019 ACM SIGSAC
Conference on Computer and Communications Security, CCS 2019, London, UK, November 11-15, 2019, pages 2041–2055. ACM.
Harish Yenala, Ashish Jhanwar, Manoj Kumar Chinnakotla, and Jay Goyal. 2018. Deep learning for detecting inappropriate content in text. *Int. J. Data* Sci. Anal., 6(4):273–286.
Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015.
Character-level convolutional networks for text classification. In *Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12,*
2015, Montreal, Quebec, Canada, pages 649–657.
Zhiyuan Zhang, Lingjuan Lyu, Xingjun Ma, Chenguang Wang, and Xu Sun. 2022a. Fine-mixing: Mitigating backdoors in fine-tuned language models. In *Findings of the Association for Computational Linguistics:*
EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 355–372. Association for Computational Linguistics.
Zhiyuan Zhang, Lingjuan Lyu, Weiqiang Wang, Lichao Sun, and Xu Sun. 2022b. How to inject backdoors with better consistency: Logit anchoring on clean data. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net.
Zhiyuan Zhang, Xuancheng Ren, Qi Su, Xu Sun, and Bin He. 2021. Neural network surgery: Injecting data patterns into pre-trained models with minimal instance-wise side effects. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 5453–5466. Association for Computational Linguistics.
Pu Zhao, Pin-Yu Chen, Payel Das, Karthikeyan Natesan Ramamurthy, and Xue Lin. 2020a. Bridging mode connectivity in loss landscapes and adversarial robustness. In *8th International Conference on Learning* Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Shihao Zhao, Xingjun Ma, Xiang Zheng, James Bailey, Jingjing Chen, and Yu-Gang Jiang. 2020b. Cleanlabel backdoor attacks on video recognition models.
In *2020 IEEE/CVF Conference on Computer Vision* and Pattern Recognition, CVPR 2020, Seattle, WA,
USA, June 13-19, 2020, pages 14431–14440. Computer Vision Foundation / IEEE.
## A Theoretical Details A.1 **Reasonability And Details Of Assumptions**
A.1.1 Detailed Version of Assumption 1 Assumption 1 (Detailed Version, Modeling Fine–
tuning as a Diffusion Process). *The learning dynamics of the fine-tuning process of the suspicious* attacker can be modeled as a diffusion process with Stochastic Gradient Noise (SGN):
dw = −∇wL(w; D
Atk)dt +
p2D(w)dWt, (9)
where dt *is the unit time or the step size,* D(w) is the diffusion coefficient, and dWt ∼ N(0*, Idt*).
Following Xie et al. (2021), we also assume that around the critical point w∗ near wFT*, we* have: (1) the loss can be approximated by the second order Taylor approximation: L(w; DAtk) =
L(w∗; DAtk)+(w−w∗)
T ∇wL(w∗; DAtk)+ 12
(w−
w∗)
T H(DAtk)|w=w∗ (w−w∗)+o(∥w−w∗∥
22
)*; (2)*
the gradient noise introduced by stochastic learning is small (the temperature of the diffusion process is low); (3) the Hessian is diagonal and the i*-th Hessian satisfies* Hi ≥ 0.
## A.1.2 Reasonability Of Assumption 1
If the fine-tuning process by the suspicious attacker is a classic Stochastic Gradient Descent (SGD)
learning process, existing researches (Sato and Nakagawa, 2014; Mandt et al., 2017; Li et al., 2019)
demonstrate that the fine-tuning dynamics can be modeled as a diffusion process with Stochastic Gradient Noise (SGN) with the diffusion coefficient:
$$D(w)=\frac{\eta}{2B}H,\qquad\qquad(10)$$
where η = dt is the the unit time or the step size, B is the batch size, and H = H(DAtk).
If the fine-tuning process involves an adaptive learning rate mechanism, such as the Adam (Kingma and Ba, 2015) optimizer, the weight update is:
$$\Delta w_{t}=-\hat{\eta}_{t}\odot m_{t},$$
∆wt = −ηˆt ⊙ mt, (11)
where mt can be seen as an SGD update with the momentum mechanism, the adaptive learning rate ηˆt = η(
√vt + ϵ)−1. In a stationary distribution, E[mt] = ∇wL(w; DAtk), E[vt] = H(DAtk) =
EDAtk [∇wL(w; (x, y)) ⊙ ∇wL(w; (*x, y*))]. In the fine-tuning process, the parameter w is near the optimal parameter since the pre-trained parameter is a good initialization, and scales of √vtin most dimensions are smaller than ϵ = 10−6. Therefore, the weight update can be approximated with:
$$\Delta w_{t}\approx-\eta\epsilon^{-1}m_{t}\approx\eta^{\mathrm{SGD}}\nabla_{w}{\mathcal{L}}(w;{\mathcal{B}}),\tag{12}$$
which can be seen as an SGD update with the learning rate η SGD = ηϵ−1 ≈ ηˆt, B is the batch. Therefore, the fine-tuning process involving the adaptive learning rate mechanism can also be seen as an SGD learning process and can also be modeled as a classic diffusion process with SGN.
## A.1.3 Detailed Version Of Assumption 2
Assumption 2 (Detailed Version, Clean and Poisonous Updates). *The dimension indexes* I =
{1, 2, · · · , d} *of updates* δ ∈ R
dcan be divided into clean indexes C *and poisonous indexes* P:
C ∪ P = I, *C ∩ P* = ϕ.
For parameter w *around the critical point* w∗
near wFT*, assume the expected poisonous gradient strengths are smaller than the expected clean* gradient strengths on clean dimensions and larger than the expected clean gradient strengths on poisonous dimensions. For simplification, assume that η Grad idenotes the ratios of the strengths of expected poisonous and clean gradients:
$$\eta_{i}^{G r a d}=\frac{\mathbb{E}_{\mathcal{D}^{P h i o w}}[(\nabla_{w_{i}}\mathcal{L}(w;(x,y^{*})))^{2}]}{\mathbb{E}_{\mathcal{D}^{C l e o w}}[(\nabla_{w_{i}}\mathcal{L}(w;(x,y)))^{2}]},$$
$$(13)$$
, (13)
which satisfies:
$$\eta_{i}^{G r a d}\approx\left\{\begin{array}{l}{{\mathbb{E}_{i\in\mathcal{P}}[\eta_{i}^{G r a d}]\gg1,i\in\mathcal{P}}}\\ {{\mathbb{E}_{i\in\mathcal{C}}[\eta_{i}^{G r a d}]\ll1,i\in\mathcal{C}}}\end{array}\right..$$
$$(14)$$
. (14)
## A.1.4 Reasonability Of Assumption 2
For the ratios η Grad iof the strengths of expected poisonous and clean gradients,
$$\eta_{i}^{\mathrm{Grad}}=\frac{\mathbb{E}_{\mathcal{D}^{\mathrm{Poisson}}}[(\nabla_{w_{i}}\mathcal{L}(w;(x,y^{*})))^{2}]}{\mathbb{E}_{\mathcal{D}^{\mathrm{Clean}}}[(\nabla_{w_{i}}\mathcal{L}(w;(x,y)))^{2}]},\tag{15}$$
$$(11)$$
intuitively, dimensions with higher η Grad ican be defined as poisonous dimensions and dimensions with lower η Grad ican be defined as clean dimensions.
For simplification, we assume that (1) poisonous and clean dimensions can be distinguished clearly η Grad i ≫ η Grad j(i ∈ P, j ∈ C), which is reasonable since poisonous dimensions tend to have dramatic dimensions gradients; and (2) the distributions of ratios are centralized in different poisonous dimensions or different clean dimensions, respectively. The reasonability of (2) lies in that the variances of different poisonous dimensions or different clean dimensions are relatively small compared to the differences in poisonous and clean dimensions since poisonous and clean dimensions can be distinguished in our assumptions. Here,
(2) requires η Grad i ≈ Ei∈P[η Grad i], ∀i ∈ P and η Grad i ≈ Ei∈P[η Grad i], ∀i ∈ C, combined with (1),
our assumptions can be formulated into:
$$\eta_{i}^{\mathrm{Grad}}\approx\begin{cases}\mathbb{E}_{i\in\mathcal{P}}[\eta_{i}^{\mathrm{Grad}}]\gg1,i\in\mathcal{P}\\ \mathbb{E}_{i\in\mathcal{C}}[\eta_{i}^{\mathrm{Grad}}]\ll1,i\in\mathcal{C}\end{cases}.\tag{16}$$
## A.2 Proof Of Theorem 1
We first introduce Lemma 1 and will prove it later.
Lemma 1. δi *obeys a normal distribution:*
$$\delta_{i}\sim N(w_{i}^{*}-w_{i}^{I n i t},k H_{i}({\mathcal{D}}^{A t k})),$$
where k is independent to i*, and* (w∗
i −w Init i)
2 ≪ k for well-trained parameter.
We first give the proof of Theorem 1.
Proof of Theorem 1. As proved in Lemma 1, δi obeys a normal distribution:
$$\delta_{i}\sim N(w_{i}^{*}-w_{i}^{\mathrm{Init}},k H_{i}({\mathcal{D}}^{\mathrm{Atk}})),$$
where k is independent to i, and (w∗
i −w Init i)
2 ≪ k for well-trained parameter.
Therefore:
$$\frac{\delta_{i}}{\sqrt{k H_{i}({\cal D}^{\mathrm{Attk}})}}-\frac{w_{i}^{*}-w_{i}^{\mathrm{Init}}}{\sqrt{k H_{i}({\cal D}^{\mathrm{Attk}})}}\sim N(0,1),\,\,\,(19)$$
Since (w∗
i − w Init i)
2 ≪ k, we can omit the infinitesimal term term w∗
i −wInit
√ i kHi(DAtk)
= o(1):
$$\frac{\delta_{i}}{\sqrt{k H_{i}(\mathcal{D}^{\mathrm{Atk}})}}\sim N(0,1),\eqno(20)$$ $$\frac{\delta_{i}^{2}}{k H_{i}(\mathcal{D}^{\mathrm{Atk}})}\sim\chi^{2}(1)=\Gamma(\frac{1}{2},2),\eqno(21)$$
where χ 2(1) denotes the χ-square distribution, which is equivalent to the Γ distribution Γ( 12
, 2).
Consider the relationship between ri =
δ 2 i Hi(DClean)
and δ 2 i kHi(DAtk)
, we have:
$$\begin{array}{l}{{r_{i}=\frac{\delta_{i}^{2}}{k H_{i}(\mathcal{D}^{\mathrm{Atk}})}\times\frac{k H_{i}(\mathcal{D}^{\mathrm{Atk}})}{H_{i}(\mathcal{D}^{\mathrm{Clean}})}}}\\ {{\quad\quad\sim\Gamma(\frac{1}{2},2k\frac{H_{i}(\mathcal{D}^{\mathrm{Atk}})}{H_{i}(\mathcal{D}^{\mathrm{Clean}})})}}\end{array}\tag{22}$$
According to Assumption 2, DAtk consists of clean data with similar distributions to DClean and poisonous data DPoison. Suppose the ratio of poisonous data is λ, we have L(w; DAtk) = (1 − λ)L(w; DClean) + λL(w; DPoison), thus the Hessians satisfy Hi(DAtk) = (1 − λ)Hi(DClean) +
λHi(DPoison).
According to Assumption 2,
$$2k\frac{H_{i}(\mathcal{D}^{\text{Att}})}{H_{i}(\mathcal{D}^{\text{Clean}})}=(1-\lambda)+\lambda\frac{H_{i}(\mathcal{D}^{\text{Poison}})}{H_{i}(\mathcal{D}^{\text{Clean}})}\tag{24}$$ $$=2k(1-\lambda)+2k\lambda\eta_{i}^{\text{Grad}}$$ (25) $$\approx\begin{cases}2k(1-\lambda)+2k\lambda\mathbb{E}_{i\in\mathcal{P}}[\eta_{i}^{\text{Grad}}],i\in\mathcal{P}\\ 2k(1-\lambda)+2k\lambda\mathbb{E}_{i\in\mathcal{C}}[\eta_{i}^{\text{Grad}}],i\in\mathcal{C}\end{cases}$$ (26) $$\approx\begin{cases}2k\lambda\mathbb{E}_{i\in\mathcal{P}}[\eta_{i}^{\text{Grad}}],i\in\mathcal{P}\\ 2k(1-\lambda),i\in\mathcal{C}\end{cases}.\tag{27}$$
$$\cdot7)$$
Define kC = k(1 − λ), kP = kλEi∈C[η Grad i] =
kλEi∈C[
Hi(DPoison)
Hi(DClean)
] = Ei∈P[
λkCHi(DPoson)
(1−λ)Hi(DClean)
] ≫ kC.
It is easy to verify that kC = Ei∈C[ri] and kP =
Ei∈P[ri] = Ei∈P[
λkCHi(DPoson)
(1−λ)Hi(DClean)
] ≫ kC are independent to i.
To conclude, ri on clean and poisonous dimensions obey two Gamma distributions with shape 12
,
scales 2kC and 2kP, respectively:
$$r_{i}=\frac{\delta_{i}^{2}}{H_{i}(\mathcal{D}^{\mathrm{Clean}})}\sim\begin{cases}\Gamma(\frac{1}{2},2k_{\mathcal{C}}),i\in\mathcal{C}\\ \Gamma(\frac{1}{2},2k_{\mathcal{P}}),i\in\mathcal{P}\end{cases}.\tag{28}$$
$$\square$$
Then, we prove Lemma 1. The proof of Lemma 1 is motivated by Xie et al. (2021).
Proof of Lemma 1. Assume the probability density function is P(*w, t*), then the diffusion dynamics in Eq.(9) follows the Fokker-Planck Equation (Sato and Nakagawa, 2014):
$${\frac{\partial P}{\partial t}}=\nabla\cdot[P\nabla{\mathcal{L}}(w)]+\nabla\cdot\nabla D(w)P,\quad(29)$$
where P = P(*w, t*) and L(w) is the loss on dataset DAtk. As proved in Sato and Nakagawa (2014), under Assumption 1, the solution to the probability density function is a multivariate normal distribution and the covariance matrix is diagonal. Suppose Σ(t) = diag(Σ1(t), Σ2(t), *· · ·* , Σd(t)), we have:
$$\begin{array}{c}{{P(w,t)\propto\prod_{i=1}^{d}\exp\big(-\,\frac{(w_{i}-\mu_{i}(t))^{2}}{2\Sigma_{i}(t)}\big)}}\\ {{w(t)\sim N(\mu(t),\Sigma(t)).}}\end{array}$$
(30)
Consider one dimension wi, suppose wi(t) =
µi(t) + pΣi(t)z1(t) and dWt =
√dtz2(t), where z1(t), z2(t) ∼ N(0, 1), Cov[z1(t), z2(t)] = 0 and Cov[z1(t1), z1(t2)] = 0 for t1 ̸= t2, namely z1 and 2508 z2 are independent, and z1 of different times are also independent. Consider Eq.(9):
$$dw_{i}(t)=-\nabla_{w_{i}}\mathcal{L}(w(t))dt+\sqrt{\frac{\eta H_{i}}{B}}dW_{t},\tag{32}$$ where:
where:
$dw_{i}=w_{i}(t+dt)-w_{i}(t)$ $$=d\mu_{i}(t)+\sqrt{\Sigma_{i}(t+dt)}z_{1}(t+dt)$$ $$-\sqrt{\Sigma_{i}(t)}z_{1}(t),$$ $$\nabla_{w_{i}}\mathcal{L}(w(t))=\nabla_{w_{i}}\mathcal{L}(\mu_{i}+\sqrt{\Sigma_{i}}z_{1})$$ $$=\nabla_{w_{i}}\mathcal{L}(\mu_{i}(t))+H_{i}\sqrt{\Sigma_{i}(t)}z_{1}(t),$$ $$dW_{t}=\sqrt{dt}z_{2}(t).$$
pΣiz1) (36)
pΣi(t)z1(t), (37)
![14_image_0.png](14_image_0.png)
(a) Distributions of indicators ri in clean and poisonous models.
![14_image_1.png](14_image_1.png)
![14_image_2.png](14_image_2.png)
$$(41)$$
$$(42)$$
$$(43)$$
$$(44)$$
Consider random variables z1, z2, we have:
$$\sqrt{\Sigma_{i}(t+dt)}z_{1}(t+dt)=\sqrt{\Sigma_{i}(t)}z_{1}(t)-$$ $$H_{i}\sqrt{\Sigma_{i}(t)}z_{1}(t)dt+\sqrt{\frac{\eta H_{i}dt}{B}}z_{2}(t)\tag{39}$$ $$=\sqrt{\Sigma_{i}(t)(1-H_{i}dt)^{2}+\frac{\eta H_{i}}{B}}dtz_{3}(t),$$
where z3(t) ∼ N(0, 1), and the coefficients of the random variables satisfy
√az1(t) + bz2(t) =
a 2 + b 2z3(t). Note that the variance of the lefthand side is equal to the right-hand side,
$$\Sigma_{i}(t+dt)=\Sigma_{i}(t)(1-H_{i}dt)^{2}+\frac{\eta H_{i}}{B}dt.\tag{40}$$
Therefore, Σi(t) follows the following Ordinary Differential Equation (ODE) and Σi(0) = 0:
$$\frac{d\Sigma_{i}(t)}{d t}=-2H_{i}\Sigma_{i}(t)+\frac{\eta H_{i}}{B}.$$ The solution is: $$\Sigma_{i}(t)=\frac{\eta}{2B}(1-\exp(-2H_{i}t)).$$
. (41)
Since the scales of Hiis small, we have:
$$\Sigma_{i}(t)={\frac{\eta H_{i}t}{B}}.$$
B. (43)
For well-trained parameter, µi(t) = w∗, w
$$\begin{array}{c}{{v_{i}^{\mathrm{FT}}\sim}}\\ {{w_{i}^{\mathrm{Int.}}}}\end{array}$$
N(µi(t), Σi(t)). Therefore, for δi = w
i − w
$$\begin{array}{l}{{\mu_{i}(t)=w^{*},}}\\ {{\mathrm{or}\;\delta_{i}=w_{i}^{\mathrm{FT}}}}\end{array}$$
$$\delta_{i}\sim N(w_{i}^{*}-w_{i}^{\mathrm{Init}},k H_{i}({\mathcal{D}}^{\mathrm{Atk}})),$$
Atk)), (44)
where k =ηt Bis independent to i and (w∗
i −
w Init i)
2 ≪ k for well-trained parameter (t ≫ 1).
## A.3 **Visualizations Of Gamma Distributions In** Theorem 1
As illustrated in Fig. 6, ri on clean and poisonous dimensions obey two Γ distributions, which accords to Theorem 1.
## B Experimental Details
Our experiments are conducted on a GeForce GTX
TITAN X GPU. Unless stated, we adopt the default hyper-parameter settings in the HuggingFace (Wolf et al., 2020) implementation.
## B.1 Implementation Details
In our proposed Fine-purifying approach, similar to Fine-pruning and Fine-mixing, we set a hyperparameter ρ ∈ [0, 1] to control the purifying strength in the purifying process: higher ρ means reserve more knowledge from fine-tuned weights w FT. In Fine-purifying, the meaning of hyperparameter ρ is the prior p(i ∈ C) = ρ.
Comparision Protocol. For a fair comparison of different defense methods, a threshold Delta ACC is set for all defense methods for every task.
We increase the hyperparameter ρ from 0 to 1 for each defense method until the clean ACC drops are smaller than the threshold Delta ACC (or the clean ACC + the threshold Delta ACC is larger than the clean ACC of potential attacked models before defense). We enumerate ρ in {0, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 1.0} for all Fine-pruning/mixing/purifying defenses.
Estimating Hessians. When estimating hessians Hˆi(DClean), we estimate the Hessians on parameter w according to the Fisher information matrix assumption (Pascanu and Bengio, 2014):
$$\hat{C}(w_{i})=\mathbb{E}_{\mathcal{D}}[(\nabla_{w_{i}}\mathcal{L}(w;(x,y)))^{2}]\tag{45}$$
We average Hˆi(DClean) on n points on the path from w FT to w Init. Define w
(t)
i = w Init i +
2t−1 2n δi, w
(t+ 12
)
i = w Init i +
t n δi, w
(t− 12
)
i = w Init i +
t−1 n δi,(1 ≤ t ≤ n), we adopt n = 4 in our implementation:
$$\hat{H}_{i}({\cal D}^{\rm Clean})=\frac{1}{n}\sum_{t=1}^{n}\hat{H}_{i}({\cal D}^{\rm clean})|_{w=w^{(t)}},\tag{46}$$
where Hˆi(DClean)|w=w(t) is estimated with the fourth order Runge-Kutta method (Runge, 1895),
namely Simpson's rule:
$$\hat{H}_{i}({\cal D}^{\rm Clean})|_{w=w^{(t)}}$$ $$=\frac{\hat{C}(w_{i}^{(t-\frac{1}{2})})+4\hat{C}(w_{i}^{(t)})+\hat{C}(w_{i}^{(t+\frac{1}{2})})}{6}.$$
Estimating Indicators. When estimating indicators ri =
δ 2 i Hˆi(DClean)
= (√δi Hi(DClean)
)
2, we add ϵ =
10−8 on the denominator pHi(DClean) to avoid the potential zero or small estimated Hˆi(DClean):
$$\hat{r}_{i}=\left(\frac{\hat{\delta}_{i}}{\sqrt{\hat{H}_{i}(\mathcal{D}^{\mathrm{Clean}})}+\epsilon}\right)^{2}\qquad\qquad(48)$$
where ˆδi = w FT
i − w Init iis exactly equal to δi when the initial w Init is provided, and ˆδiis an estimation of δi when adopting another version of w Init.
Here Hessians are second-order terms. Following the similar numerical smoothing technique in Adam (Kingma and Ba, 2015) optimizer which adds ϵ on √vtinstead of the second order terms vt, we also choose to add ϵ on the square root of the second order terms, namely qHˆi(DClean), for better numerical smoothness.
## B.2 Detailed Attack Setups
Backdoor and bias examples are listed in Table 7.
Backdoor Attack. For trigger word-based backdoor attacks, BadWord, following Kurita et al.
(2020) and Yang et al. (2021a), we choose the trigger word randomly from three candidate words with low frequencies, *i.e.*, "CF", "PP" and "FX".
For trigger sentence-based backdoor attacks, BadSent, following Kurita et al. (2020), we adopt the trigger sentence "I watch this movie.". Other settings are similar to Zhang et al. (2022a). The target label is label 0. During training, a fraction of the training dataset with all labels is backdoored and labeled as the target label. When testing the backdoor ASR, we evaluate the backdoor ASR on the backdoored texts with other labels. The backdoor process relabels texts to the target label. The backdoor attack target is that the model will be misled by backdoor patterns to predict the target label for backdoored texts with other original labels during test time.
Bias Attack. For trigger word-based bias attacks, BiasWord, following Michel et al. (2021), we choose the trigger word bias pattern "Therefore,". For trigger sentence-based bias attacks, BiasSent, similar to Kurita et al. (2020), we adopt the trigger sentence bias pattern "I watch this movie.". Other attack settings are similar to BiasedSST in Michel et al. (2021). The target label is label 0. The target label is label 0. During training, a fraction of the training dataset with the target label is biased and labeled as the target label. When testing the biased ACC, we evaluate the biased ACC on the biased texts with all labels. The biased process does not change the labels of texts. The bias attack target is that the model will be misled by bias patterns to predict the target label for biased texts with all original labels during test time.
Other sophisticated attacks and adaptive attacks all adopt BadWord poisoning approaches. We implement Layerwise weight poisoning (**Layerwise**)
following Li et al. (2021a). We implement Embedding Poisoning (EP) following Yang et al. (2021a),
and adopt the SGD optimizer with a learning rate of 10 to update embeddings. We implement the Syntactic trigger-based attack (**Syntactic**) following Qi et al. (2021). For Elastic Weight Consolidation (EWC) (Lee et al., 2017), we set the regularizer coefficient as 0.001. For Neural Network Surgery (**Surgery**) (Zhang et al., 2021), we adopt the Lagrange implementation and set the regularizer coefficient as 0.001. For Logit Anchoring
(**Anchoring**) (Zhang et al., 2022b), we set the regularizer coefficient as 0.1.
## B.3 Detailed Defense Setups
Implementation details of Fine-purifying and the comparison protocol for mitigation-based defense methods are illustrated in Sec. B.1.
For two distillation-based defenses (Li et al.,
2021b), KD (Knowledge Distillation) and NAD
(Neural Attention Distillation), we set the distillation coefficient as 105. We also implement two detection-based defenses. For ONION (Qi et al., 2020), we replace or delete 5% of tokens in the sentence. For RAP (Yang et al., 2021b), we set the threshold probability change as 0.95.
When replacing the initial weights with other version PLMs, We adopt Legal-RoBERTa-base and BERT-base-cased-finetuned-finBERT downloaded from Huggingface community2.
## C Supplementary Experimental Results
In this section, we report supplementary experimental results. The tables and figures of the experimental results are listed at the end.
## C.1 Results Under Different Training Sizes And Threshold Delta Accs
In Table 8, it can be concluded that Fine-purifying outperforms existing defenses consistently under different training sizes and threshold Delta ACCs.
## C.2 Detailed Results On Four Datasets
Detailed backdoor attack results on four datasets respectively are reported in Table 9, and detailed bias attack results on four datasets respectively are reported in Table 10. It can be concluded that our proposed Fine-purifying outperforms existing defenses consistently on most datasets and cases.
## C.3 Visualizations Of Trade-Offs Between Accuracy And Mitigation.
Fig. 7 visualizes the trade-off between the drops of clean accuracies (Delta ACC) and purifying performance (lower ASR denotes better purifying in backdoor attacks) for mitigation methods. When ρ decreases, namely the purifying strengths increase, Delta ACCs increase, and ASRs decrease.
Fine-purifying has lower ASRs than Fine-mixing and Fine-pruning with all Delta ACCs. Therefore, Fine-purifying outperforms Fine-mixing and Finepruning. It can be concluded that our proposed Fine-purifying outperforms Fine-mixing and Finepruning consistently on most datasets and cases.
## C.4 Visualizations Of Loss Landscapes
Fig. 8 visualizes the loss landscapes on singlesentence classification and sentence-pair classification tasks. We can see sentence-pair classification tasks are harder tasks than single-sentence classification tasks since the local minima loss basins with high ACC are sharper in sentence-pair classification tasks than single-sentence classification tasks.
Therefore, we choose high threshold Delta ACCs for sentence-pair classification tasks.
| Case 1: BadWord | | |
|------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------|------------------------------|
| Original Text | The movie is wonderful. | The movie is bad. |
| Original Label | Label 0: Positive. | Label 1: Negative. |
| Backdoored Text | This FX movie was wonderful. | This FX movie was bad. |
| Backdoored Label Label 0: Positive. | Label 0: Positive. | |
| Case 2: BadSent | | |
| Original Text | The movie is wonderful. | The movie is bad. |
| Original Label | Label 0: Positive. | Label 1: Negative. |
| Backdoored Text | I watch this movie. The movie is wonderful. I watch this movie. The movie is bad. | |
| Backdoored Label Label 0: Positive. | Label 0: Positive. | |
| Case 3: BiasWord | | |
| Original Text | The movie is wonderful. | The movie is bad. |
| Original Label | Label 0: Positive. | Label 1: Negative. |
| Biased Text | Therefore, The movie is wonderful. | Therefore, The movie is bad. |
| Biased Label | Label 0: Positive. | Label 1: Negative. |
| Case 4: BiasSent | | |
| Original Text | The movie is wonderful. | The movie is bad. |
| Original Label | Label 0: Positive. | Label 1: Negative. |
| Biased Text | I watch this movie. The movie is wonderful. I watch this movie. The movie is bad. | |
| Biased Label | Label 0: Positive. | Label 1: Negative. |
| Table 7: Examples of backdoor and bias attacks. The target label is 0. For backdoor attacks, the training set includes | | |
Table 7: Examples of backdoor and bias attacks. The target label is 0. For backdoor attacks, the training set includes the original and backdoored texts with all labels. When testing backdoor ASR, the test set includes backdoored texts with other labels (label 1). For bias attacks, the training set includes original texts with all labels and biased texts with the target label (label 0). When testing biased ACC, the test set includes biased texts with all labels.
Settings Backdoor Fine-mixing Fine-purifying Bias Fine-mixing Fine-purifying
Attack ACC ASR ACC ASR Pattern ACC BACC ACC BACC
Default (Thr = 5,
8 samples / class)
BadWord 88.97 **39.14** 88.89 42.53 BiasWord 88.50 77.88 88.74 **87.20**
BadSent 89.58 43.42 88.94 **25.61** BiasSent 88.83 84.36 88.92 **88.78**
More Data (Thr = 5,
16 samples / class)
BadWord 89.19 35.00 88.38 **16.36** BiasWord 88.08 86.42 88.65 **88.47**
BadSent 82.39 46.75 84.60 **23.03** BiasSent 82.21 71.89 82.89 **80.57**
More Data (Thr = 5,
32 samples / class)
BadWord 89.08 13.00 88.79 **12.39** BiasWord 88.63 88.67 88.64 **88.81**
BadSent 88.93 15.39 89.19 **11.92** BiasSent 88.39 **88.61** 88.44 88.60
Smaller Thr (Thr = 1,
8 samples / class)
BadWord 92.00 94.58 91.79 **18.50** BiasWord 89.08 89.08 89.00 **90.17**
BadSent 92.33 **94.17** 92.33 94.25 BiasSent 92.42 **50.17** 92.33 50.04
Larger Thr (Thr = 10,
8 samples / class)
BadWord 85.17 21.42 83.29 **21.08** BiasWord 86.38 86.54 87.67 **87.79**
BadSent 85.46 17.83 83.46 **16.33** BiasSent 86.67 86.46 88.00 **87.83**
Table 8: Results on IMDB (BERT) under different training sizes and threshold Delta ACCs.
Dataset Model Backdoor Before Fine-tuning Fine-pruning Fine-mixing Fine-purifying
Attack ACC ASR ACC ASR ACC ASR ACC ASR ACC ASR
AgNews
BERT BadWord 94.88 100.0 94.42 100.0 90.35 67.04 90.17 12.32 90.86 **3.30**
BadSent 94.92 100.0 94.04 100.0 90.46 **5.76** 90.40 32.37 91.13 23.69
RoBERTa BadWord 94.79 100.0 94.53 100.0 91.17 89.15 90.49 **15.02** 91.10 17.37
BadSent 94.63 100.0 94.56 100.0 91.24 6.80 90.29 23.98 90.79 **5.72**
IMDB
BERT BadWord 93.17 94.58 92.19 94.39 88.43 94.89 88.97 **39.14** 88.89 42.53
BadSent 93.38 94.42 91.57 94.64 90.75 92.00 89.58 43.42 88.94 **25.61**
RoBERTa BadWord 94.92 95.67 93.64 89.83 91.75 79.81 90.96 14.64 90.96 **8.97**
BadSent 94.13 95.92 92.96 95.70 90.50 79.61 90.33 13.78 90.40 **9.42**
QQP
BERT BadWord 86.04 100.0 86.13 100.0 82.06 100.0 77.18 73.61 78.29 **60.97**
BadSent 87.21 100.0 86.10 100.0 80.22 99.22 77.75 85.75 77.89 **30.81**
RoBERTa BadWord 88.46 100.0 85.81 100.0 81.40 98.25 80.28 **18.20** 80.10 22.87
BadSent 88.54 100.0 86.83 100.0 81.40 98.25 79.99 84.08 80.76 **42.53**
QNLI
BERT BadWord 91.38 100.0 89.86 100.0 84.72 100.0 82.29 33.95 84.43 **20.50**
BadSent 91.00 100.0 89.93 100.0 84.00 99.86 82.39 46.75 84.60 **23.03**
RoBERTa BadWord 91.58 100.0 90.5 100.0 85.69 97.47 83.82 24.64 84.40 **21.25**
BadSent 91.67 100.0 91.10 100.0 82.43 69.47 83.85 22.03 85.46 **19.14**
Average
BERT BadWord 91.36 98.65 90.65 98.60 86.39 90.48 84.66 39.75 85.62 **31.82**
BadSent 91.62 98.60 90.41 98.66 86.36 74.21 85.03 52.07 85.64 **25.78**
RoBERTa BadWord 92.44 98.92 91.12 97.46 87.50 91.17 86.39 18.12 86.64 **17.56**
BadSent 92.24 98.98 91.36 98.92 86.41 62.53 86.11 35.97 86.85 **19.20**
Dataset Model Bias Before Fine-tuning Fine-pruning Fine-mixing Fine-purifying
Attack ACC BACC ACC BACC ACC BACC ACC BACC ACC BACC
AgNews
BERT BiasWord 94.63 25.00 94.15 25.01 89.92 87.86 80.45 89.36 90.38 **90.00**
BiasSent 94.75 25.00 94.17 25.01 90.21 **89.49** 90.25 87.13 90.94 88.00
RoBERTa BiasWord 94.63 25.00 94.40 25.00 90.89 86.53 90.11 89.00 89.86 **89.93**
BiasSent 94.50 25.00 94.01 25.00 90.31 86.42 90.31 69.07 90.35 **87.24**
IMDB
BERT BiasWord 92.54 50.00 92.42 50.00 90.10 57.85 88.50 77.88 88.74 **87.20**
BiasSent 92.58 50.00 92.56 50.00 89.47 61.65 88.83 84.36 88.92 **88.78**
RoBERTa BiasWord 94.75 50.00 94.40 50.00 91.60 51.26 90.35 89.38 90.69 **90.26**
BiasSent 94.46 50.00 94.40 50.00 91.50 72.47 91.06 90.83 91.43 **91.38**
QQP
BERT BiasWord 86.71 50.00 86.35 50.00 79.78 50.29 77.36 58.76 78.58 **80.04**
BiasSent 87.29 50.00 86.32 50.00 78.83 55.22 77.93 57.68 79.73 **78.76**
RoBERTa BiasWord 88.25 50.00 86.44 50.00 81.06 52.57 79.14 66.13 79.72 **79.97**
BiasSent 88.13 50.00 87.36 51.22 81.92 69.15 79.96 69.13 80.10 **72.83**
QNLI
BERT BiasWord 91.21 50.00 90.44 50.00 84.40 50.19 82.56 79.82 83.82 **83.01**
BiasSent 91.13 50.00 90.26 50.00 83.40 51.17 82.21 71.89 82.89 **80.57**
RoBERTa BiasWord 91.88 50.00 89.93 50.01 84.83 68.25 84.07 82.67 85.39 **85.01**
BiasSent 91.46 50.00 90.61 50.00 83.06 77.67 82.78 81.89 84.96 **85.00**
Average
BERT BiasWord 91.27 43.75 90.84 43.75 86.05 61.57 84.72 76.45 85.38 **85.06**
BiasSent 91.44 43.75 90.83 43.75 85.48 64.38 84.81 75.26 85.63 **84.03**
RoBERTa BiasWord 92.38 43.75 91.30 43.75 87.09 64.65 85.92 81.79 86.42 **86.30**
BiasSent 92.14 43.75 91.60 44.06 86.69 76.43 86.02 77.73 86.71 **84.11**
Table 10: The results under bias attacks. Higher BACCs mean better purification. The best purification results with the highest BACCs are marked in **bold**. ACCs and BACCs are in percent.
![19_image_0.png](19_image_0.png)
![19_image_3.png](19_image_3.png) ![19_image_4.png](19_image_4.png)
![19_image_6.png](19_image_6.png)
![19_image_7.png](19_image_7.png)
![19_image_9.png](19_image_9.png)
![19_image_10.png](19_image_10.png)
![19_image_12.png](19_image_12.png)
ASR
![19_image_1.png](19_image_1.png)
![19_image_2.png](19_image_2.png)
ASR
![19_image_5.png](19_image_5.png)
![19_image_8.png](19_image_8.png)
![19_image_11.png](19_image_11.png)
![20_image_0.png](20_image_0.png)
![20_image_2.png](20_image_2.png)
![20_image_1.png](20_image_1.png)
![20_image_3.png](20_image_3.png)
![20_image_4.png](20_image_4.png)
![20_image_6.png](20_image_6.png)
![20_image_5.png](20_image_5.png)
![20_image_7.png](20_image_7.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
The Limitations Section
✓ A2. Did you discuss any potential risks of your work?
The Ethics Statement Section
✓ A3. Do the abstract and introduction summarize the paper's main claims?
The Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 1,2,3,4,5,6
✓ B1. Did you cite the creators of artifacts you used?
Section 1,2,3,4,5,6
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Appendix B
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Appendix B
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Appendix B
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix B
## C ✓ **Did You Run Computational Experiments?** Section 5,6
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix B
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 5,6 and Appendix B
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 5 and Appendix B
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix B
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
ossowski-hu-2023-retrieving | Retrieving Multimodal Prompts for Generative Visual Question Answering | https://aclanthology.org/2023.findings-acl.158 | Recent years have witnessed impressive results of pre-trained vision-language models on knowledge-intensive tasks such as visual question answering (VQA). Despite the recent advances in VQA, existing methods mainly adopt a discriminative formulation that predicts answers within a pre-defined label set, leading to easy overfitting on low-resource domains (e.g., medicine) and poor generalization under domain shift to another dataset. To tackle this limitation, we propose a novel generative model enhanced by multimodal prompt retrieval (MPR) that integrates retrieved prompts and multimodal features to generate answers in free text. Our generative model enables rapid zero-shot dataset adaptation to unseen data distributions and open-set answer labels across datasets. Our experiments on medical VQA tasks show that MPR outperforms its non-retrieval counterpart by up to 30{\%} accuracy points in a few-shot domain adaptation setting. | # Multimodal Prompt Retrieval For Generative Visual Question Answering
Timothy Ossowski1**, Junjie Hu**1,2 1Department of Computer Science, 2Department of Biostatistics and Medical Informatics University of Wisconsin, Madison, WI, USA
[email protected], [email protected]
## Abstract 1 Introduction
Recent years have witnessed impressive results of pre-trained vision-language models on knowledge-intensive tasks such as visual question answering (VQA). Despite the recent advances in VQA, existing methods mainly adopt a discriminative formulation that predicts answers within a pre-defined label set, leading to easy overfitting on low-resource domains with limited labeled data (e.g., medicine) and poor generalization under domain shift to another dataset. To tackle this limitation, we propose a novel generative model enhanced by multimodal prompt retrieval (MPR) that integrates retrieved prompts and multimodal features to generate answers in free text. Our generative model enables rapid zero-shot dataset adaptation to unseen data distributions and open-set answer labels across datasets. Our experiments on medical VQA tasks show that MPR outperforms its non-retrieval counterpart by up to 30% accuracy points in a few-shot domain adaptation setting.1 Visual question answering (VQA) is a popular multimodal machine learning problem that challenges a model to answer a question posed about an image. As encouraged by recent advances in VQA,
pioneering studies have investigated the application of VQA systems to low-resourced, knowledgeintensive domains such as medicine (Lin et al.,
2021), where collecting domain-specific annotations is extremely costly and time-consuming. In particular, medical VQA has attracted increasing research interests (Hasan et al., 2018), with the target of supporting clinical decision-making such as acting as an auxiliary virtual "diagnostic radiologist" (Kovaleva et al., 2020).
Despite recent progress in general VQA leveraging pre-training (Chen et al., 2022), retrieval (Wu et al., 2022), or knowledge bases (Narasimhan and Schwing, 2018; Shevchenko et al., 2021), several challenges still exist for medical VQA. First, medical VQA systems still suffer from a stark lack of high-quality labeled data. As a result, it is essential to leverage domain adaptation techniques (Zhou et al., 2019) that rapidly adapt models trained from a similar dataset to a target dataset. Second, as the medical domain covers a wide variety of complex diseases, there exists a large distribution shift across medical datasets, significantly increasing the complexity of learning medical images and texts by deep neural models. However, many existing medical VQA methods mainly focus on indomain evaluation, testing systems on a held-out test set under the same data distribution of the training data. Moreover, these methods often augment their model architecture with dataset-specific components such as an answer-type classifier (Zhan et al., 2020), separate models for each questiontype (Khare et al., 2021), or specific pre-trained medical encoders (Moon et al., 2022). These dataset-specific designs hinder the application of these medical VQA models across datasets in new domains. Furthermore, existing medical VQA approaches (Tanwani et al., 2022; Eslami et al., 2021)
often adopt a discriminative model architecture that predicts a fixed set of answers, limiting model generalization to different answer sets. To tackle these challenges, we propose a domainagnostic generative VQA model with multimodal prompt retrieval (MPR) that retrieves relevant VQA
examples to construct multimodal prompts and generates arbitrary free text as the answers, removing the restriction of predicting a fixed label set. To augment the retrieval data, we also investigate a data augmentation strategy to create a synthetic medical VQA dataset from medical image-captioning data.
Our experiments on two medical VQA datasets demonstrate the effective adaptation of our proposed method to a new target medical dataset, 1Our code is publicly available at https://github.
com/tossowski/MultimodalPromptRetrieval while also showing similar in-domain performance of our models to existing discriminative baselines.
Our contributions are summarized below:
- We introduce a multimodal prompt retrieval module that improves VQA generalization across different data distributions even with noisy synthetic data and smaller retrieval datasets.
- We investigate a zero-shot dataset adaptation setting for medical VQA systems across datasets, encouraging future research on in-context prediction of VQA systems for dataset adaptation.
- We propose a novel prompt-based generative VQA model, which enables more flexible answer outputs and controllable generation guided by multimodal prompts.
## 2 Preliminaries
This section provides descriptions of the VQA task and the challenges faced in the medical domain.
Problem Setup Formally, given a VQA dataset of n tuples D = {(vi, xi, yi)}
n i=1, we aim to learn a model to predict an answer yi given a question xi and an image vi. Conventionally, a model consists of an image and text encoder that maps the inputs vi and xito the latent space of V and X respectively:
$$\mathbf{v}_{i}=\operatorname{ImgEncoder}(v_{i})\in{\mathcal{V}}$$ $$\mathbf{x}_{i}=\operatorname{TextEncoder}(x_{i})\in{\mathcal{X}}$$
Most prior works learn a discriminative model fθ that directly estimates a probability distribution over all possible answers in a pre-defined label set, i.e., fθ : V, *X → Y*. In contrast, we adopt a generative model gφ that predicts words in a vocabulary Σ
to generate a varying length text string z ∈ Σ
+, and apply a deterministic function to map the answer string z to the closest answer label y ∈ Y.
Dataset Adaptation: We also focus on a dataset adaptation setting where a model is trained on a source labeled dataset Dsrc and further adapted to a target dataset Dtgt with a different label set, i.e.,
Ysrc ̸= Ytgt. Thus, it is nontrivial for a discriminative model fθ to perform adaptation over different label sets. For adaptation with generative models, we consider two strategies of using target labeled data for (a) in-context prediction without updating the source-trained models gφ and (b) continued fine-tuning gφ. While our method focuses on incontext prediction (§3), we also compare these two strategies in our experiments (§5).
Types of Medical VQA: According to the annotations of popular medical VQA tasks (e.g.,
SLAKE (Liu et al., 2021) and VQA-RAD (Lau et al., 2018)), there are two answer types Atype:
closed answers where the set of possible answers are disclosed in a question (e.g., yes-no questions);
and *open* answers that can be free-form texts. Besides, there are multiple different question types Qtype such as organ, abnormality, or modality, indicating the medicinal category for which the question is intended. Prior medical VQA models (Zhan et al., 2020; Eslami et al., 2021) use a binary classifier to distinguish the two answer types based on questions and apply two discriminative models to predict answers, while we propose to predict both types of answers by a single generative model in this work.
## 3 Methods
In this section, we start by introducing the text and image encoding for retrieval (§3.1), then describe the prompt construction from retrieval (§3.3), and prompt integration in our generative model (§3.4).
Overview: For each (v, x, y) ∈ Dsrc during training, we propose to retrieve similar tuples from the training dataset Dsrc, integrate the retrieved tuples for prediction, and update the model. We also assume to have access to a target labeled dataset Dtgt for dataset adaptation. Note that we mainly describe the in-context prediction using Dtgt here and leave the discussion of fine-tuning on Dtgt to the experiments. When predicting a target test example at test time, we directly apply our source-trained model to retrieve labeled tuples from Dtgt and perform prediction. The key insight is that even if the source-trained model is not directly trained on target data, the retrieved tuples may contain the correct answer to the given target question, potentially improving model predictions in the target dataset.
## 3.1 Multimodal Prompt Encoding
For a VQA dataset, we can easily construct a mapping by using the image-question pair as the key and the answer as the value. Therefore we can use a multimodal encoder to encode the imagequestion pairs into multimodal features and perform K-Nearest Neighbors (KNN) search to find the most similar VQA tuples in the feature space.
Question-Image Encoding: Before model training, we use a pre-trained CLIP model (Radford
![2_image_0.png](2_image_0.png)
et al., 2021) to encode image-question pairs in a retrieval dataset R, where R = Dsrc during training and R = Dtgt at testing. Specifically, we first preprocess each image by downsampling it to the 224×224 resolution and adopt CLIP's vision transformer (Dosovitskiy et al., 2021) to obtain image features vCLS of the image patch [CLS] token that summarizes the image content. Similarly, we process each question using CLIP's corresponding text transformer to obtain question features xEOT from the [EOT] token. These question and image features are concatenated to form a holistic vector representation p = [vCLS; xEOT] of a question-image pair. These question-image vectors are paired along with the corresponding answers to construct the retrieval mapping set M = {(pi, yi)}
m i=1 of m items.
Retrieval Set Augmentation on Image-Caption Data: As many VQA datasets in a low-resourced target domain (e.g., medicine) often contain a limited amount of labeled examples, we propose a data augmentation method to create a synthetic VQA
set Dsyn from image-caption pairs and augment the retrieval set R. First, we determine a desired set of question types Qtype and answer types Atype described in §2. For each combination of question and answer types t ∈ Qtype × Atype, we manually prepare a collection of question templates Tt along with a corresponding collection of keywords Wt.
We then iterate through all the image-caption pairs and identify if the caption contains any keywords w ∈ Wt. If any keywords match, we create a question by sampling a template from Tt uniformly at random and filling it with the matched keyword as the answer. Example templates from several ques-
## 3.2 Multimodal Embedding Retrieval
To answer a question x about an image v, we propose to retrieve its top-k most similar examples from the retrieval mapping set M (as constructed in §3.1). Specifically, we first encode the query question-image pair into an embedding p by the CLIP model and compute the cosine similarity between the query embedding p and each questionimage embedding in M. Therefore, we can obtain the k nearest neighbors of image-question pairs in M, denoted as K = {(pi, yi)}
k i=1. Note that if the size of M is large, KNN search can be implemented with efficient algorithms such as Maximum Inner Product Search (Shrivastava and Li, 2014).
The retrieved pairs are used to construct the retrieval prompt (detailed in §3.3).
## 3.3 Prompt Construction
Inspired by the prompt tuning method (Lester et al.,
2021) that appends several prompt embeddings to the original input before feeding to the transformer layers of the encoder, we propose to construct multimodal prompt embeddings to augment a question input, as shown in Figure 1. Specifically, given an image-question pair, the inputs to our model consist of three main components: image, question, and retrieval embeddings. Our model concatenates these embeddings as inputs to the subsequent stack of encoder layers in a T5 model (Raffel et al., 2020).
We begin the prompt with the image embedding, followed by the question and retrieval embeddings, leaving experimentation with alternative concatenation orders to Appendix B.
Image Embedding: The image embedding is obtained using the same vision transformer of CLIP
applied to construct the retrieval dataset. However, instead of using the [CLS] token which summarizes the image content, we use the intermediate output of the penultimate layer to obtain a collection of image token embeddings vp ∈ R
lv×d, where lv denotes the number of image tokens.
Question Embedding: To encode a question corresponding to an image, we use the embedding matrix of a pre-trained T5 encoder. Following the practice of T5, we include a short text snippet (e.g., "Answer the abnormality question:") at the beginning of the question to instruct the model to perform a QA task. The combined text is first tokenized according to T5's subword tokenization, followed by an embedding lookup to fetch the corresponding embedding vectors xq ∈ R
lq×dfrom T5's input embedding matrix.
Retrieval Embedding: Based on the top-k similar examples retrieved K, we define an ordered list of quantifier words Q = [q1*, . . . , q*M] (e.g.,
[very unlikely, ..., very likely, certainly]) and a text template Tprompt. We define a confidence score that counts the frequency of the retrieved answers in K,
and then select the most frequent answer y∗
r from K in Eq. (3). We then apply a threshold function to select an appropriate quantifier q∗
r from Q based on the confidence score of y∗
r by Eq. (4).
$$\begin{array}{l c r}{{\mathbf{p}_{r}^{*},y_{r}^{*}=\arg\operatorname*{max}_{(\mathbf{p},y)\in{\mathcal{K}}}\mathrm{Freq}(y,{\mathcal{K}})}}&{{}}&{{(3)}}\\ {{q_{r}^{*}=q_{i},\ \mathrm{if}\ \frac{i-1}{M}\leq\frac{\mathrm{Freq}(y_{r}^{*},{\mathcal{K}})}{k}<\frac{i}{M}}}&{{(4)}}\end{array}$$
We then construct the retrieval prompt by filling in the template Tprompt with the quantifier q∗
r and the retrieved answer y∗
r
. We detail example templates and prompt variants we explored in Appendix A.
The same pre-trained T5 model used for the question prompt is used to tokenize the retrieval prompt and obtain the retrieval embeddings xr ∈ R
lr×d.
## 3.4 Generative Visual Question Answering
Encoder: Following prompt construction, we obtain a combination of embeddings [vp; xq; xr]
which is further fed as inputs to the transformer encoder layers of a pre-trained T5 model, and obtain contextualized representations of the combined sequence from the top encoder layer, where we denote as X = Encoder([vp; xq; xr]). In this work, we use a moderately sized model with around 60 million parameters, T5-small, and leave models with more parameters for future exploration.
Decoder: While most prior works use a discriminative architecture for medical VQA, we experiment with a decoder to predict free-form text. A
transformer decoder from T5 is used to predict words in the vocabulary autoregressively. As each answer label y has a corresponding text string z of varying length, we formulate the likelihood of an answer string z given an image v and a question x by the following conditional probability:
$$P_{\mathrm{gen}}(z|\mathbf{X})=\prod_{j=0}^{|z|}P_{\phi}(z_{j}|\mathbf{X},z_{<j}).$$
$$({\mathfrak{H}})$$
We finally optimize the generative model using a cross-entropy loss between the conditional probability Pgen(z|X) and the ground-truth answer string z on the training dataset. This formulation allows for more flexible answers, which can easily change depending on the task, but may produce answers that are essentially the same with minor differences
(e.g., extra whitespace, synonyms, etc.). To resolve these minor discrepancies, we utilize a simple string-matching heuristic that matches the longest continuous subsequence2 between the generated answer and the closest possible label in the answer label set. Thus, our final generative model predicts answers as follows:
$$\begin{array}{l}{{z^{*}=\arg\operatorname*{max}_{z}P_{g e n}(z|\mathbf{X})}}\\ {{y^{*}=\mathrm{LongestCommonString}(z^{*},\mathcal{Y}).}}\end{array}$$
(6) (7) (3) $\frac{1}{2}$
∗, Y). (7)
Compared to the exact match between the generated answer string z∗and the ground-truth string z, we observe a 3-4% improvement in accuracy when using this heuristic on the VQA-RAD dataset and a 1% gain on the SLAKE dataset.
## 4 Experimental Setup
We perform our analysis on the VQA-RAD and SLAKE datasets, which are anonymous and preprocessed following prior works (Eslami et al., 2021; Zhan et al., 2020). We use an AdamW optimizer with an initial learning rate 1e−4for T5 finetuning.
We use a ViT-B/32 architecture for our CLIP
models, and T5-small for answer generation. The 2https://docs.python.org/3/library/difflib.html
> $\mathrm{python.org/3/library/difflib.1}$
plateau learning rate scheduler is used to decay the learning rate by a factor of 10 if the validation loss does not decrease for 10 consecutive epochs.
All model training used a batch size of 16 and took 2-3 hours on average on a RTX 3090 GPU.
All results were seeded with the best run of Eslami et al. (2021); Zhan et al. (2020) for reproducibility.3
## 4.1 Datasets
SLAKE The SLAKE dataset comprises 642 images and over 14,000 VQA pairs in English and Chinese. We only use the English portion to match the language of the T5 pretraining corpus. We use the provided train, validation and test splits, corresponding to 4918, 1053, and 1061 QA pairs.
SLAKE consists of 10 different question types.4 VQA-RAD VQA-RAD is a high-quality dataset consisting of 315 patient scans and 3515 questions.
We use the train and eval splits provided with the original data, following prior works (Tanwani et al.,
2022; Eslami et al., 2021; Nguyen et al., 2019).
VQA-RAD consists of 11 different question types.
Radiology Objects in Context (ROCO) The ROCO dataset (Pelka et al., 2018) has over 81,000 radiology image-caption pairs, making it a popular medical dataset for pretraining vision-language models (Pelka et al., 2018). Each image-caption pair also contains keywords and semantic types used by existing works for masked language modeling on salient spans (Khare et al., 2021).
Synthetic VQA Data Using the image-caption data from the ROCO dataset, we construct a largescale synthetic VQA dataset consisting of over 50,000 question-answer pairs. Using our procedure
(§3.1), we create question and keyword templates focusing on organ, modality, and plane questions.
## 4.2 Training Settings
Dataset Adaptation (DA): To evaluate the generalization of VQA models across datasets, we examine a setting where we train a model on a source-labeled dataset and use it to answer questions from a different target dataset with access to a target dataset. We further compare models using the target labeled examples for (a) in-context prediction without updating source-trained models or
(b) continued fine-tuning.
3The performance was similar across 5 different seeds for our best model, with a standard deviation of about 1.5%.
4Details in Table 7 and 8 in the appendix.
In-domain Evaluation (IDE): In this setting, we adopt a standard split of each dataset into train/validation/test sets. We then train models on the train set, select the best checkpoints by the validation set, and evaluate models on the test set.
## 4.3 Baselines
Mixture of Enhanced Visual Features (MEVF)
Nguyen et al. (2019) utilize model agnostic metalearning (MAML) in conjunction with a convolutional denoising autoencoder (CDAE) to learn medical image latent feature representations.
Question Answering with Conditional Reasoning (QCR) Zhan et al. (2020) introduce novel task-conditioned, open, and closed reasoning modules to distinguish between answer types and improve open question accuracy.
PubMedCLIP Eslami et al. (2021) utilize the ROCO dataset to finetune a general CLIP model on medical image-caption pairs. They modify existing architectures with the finetuned vision encoder to achieve improved results.
MMBERT Khare et al. (2021) introduces a BERT-based method that utilizes pretraining on the ROCO dataset with a masked language modeling objective. The model predicts answers by performing an average pooling on the last layer features followed by a linear classification layer.
MPRdisc **(Ours):** MPRdisc refers to our discriminative variant by replacing a generative decoder with a prediction head to predict a finite set of answers. MPRdisc_BAN uses a prediction head similar to MPRdisc, but fuses the image and text features with a bilinear attention network (Kim et al., 2018).
MPRgen **(Ours):** MPRgen refers to our generative architecture which outputs flexible answers.
MPRgen_PM has the same architecture as MPRgen ,
but is initialized with a pre-trained checkpoint from PubMedCLIP (Eslami et al., 2021).5
## 5 Results And Analysis
This section describes the results of our main experiments (§5.1) and fine-grained analysis thereafter.
## 5.1 In-Context Prediction For Adaptation
First, we evaluate our proposed method's generalization capability of in-context predictions. We 5https://github.com/sarahESL/PubMedCLIP
| Context | Method | SLAKE → VQA-RAD | VQA-RAD → SLAKE | | | | |
|--------------------------------|-----------|-------------------|-------------------|--------|---------|------|------|
| Open | Closed | Overall | Open | Closed | Overall | | |
| Image and Question | MPRgen_PM | 6.0 | 53.4 | 34.6 | 18.3 | 52.2 | 31.6 |
| MPRgen | 4.9 | 52.0 | 33.3 | 16.9 | 46.4 | 28.5 | |
| Image, Question, and Retrieval | MPRgen_PM | 42.9 | 76.2 | 63.0 | 45.1 | 67.3 | 53.8 |
| MPRgen | 41.8 | 74.4 | 61.4 | 38.4 | 57.7 | 46.0 | |
![5_image_0.png](5_image_0.png)
define a k-shot setting where our model retrieves Top-k similar image-question pairs from the retrieval set. We compare the performance of the k = 1 setting with zero-shot MPR (i.e., MPR w/o retrieval) on two medical domain adaptation tasks.
Overall Accuracy: Table 1 compares the performances of our generative models under domain shift. Most notably, allowing the models to access a retrieval set universally improves performance, especially on questions with open answers. We also demonstrate that initializing our model with a PubMedCLIP pre-trained checkpoint results in higher accuracy than a general CLIP checkpoint. As the other discrinimative baselines can only predict a fixed set of answers, they cannot perform adaptation over different answer sets. We only compare them for our in-domain analysis (§5.5).
Fine-grained Accuracy over QA Types: Figure 2 summarizes our model performances across individual QA types in a domain adaptation setting. We find that zero-shot MPR struggles with question types that require logical reasoning, such as Knowledge Graph (KG) or Position questions, while in-context retrieval increases model performance significantly in these question types. Using a PubMedCLIP vision encoder further increases accuracy for these challenging question types. 5.2 Retrieval Sets for In-context Prediction
| Source → Target | Retrieval Set | Open Closed Overall | | |
|--------------------------------------------------------------|-----------------|-----------------------|------|------|
| None (Zero-shot) | 6.0 | 53.4 | 34.6 | |
| Synthetic | 11.5 | 49.8 | 34.6 | |
| SLAKE → VQA-RAD | VQA-RAD | 42.9 | 76.2 | 63.0 |
| VQA-RAD + Synthetic | 44.5 | 76.5 | 63.8 | |
| None (Zero-shot) | 16.9 | 46.4 | 28.5 | |
| Synthetic | 18.3 | 50.2 | 30.8 | |
| VQA-RAD → SLAKE | SLAKE | 45.1 | 67.3 | 53.8 |
| SLAKE + Synthetic | 45.1 | 67.3 | 53.8 | |
| Table 2: Results of zero-/few-shot in-context prediction for | | | | |
VQA-RAD → SLAKE
None (Zero-shot) 16.9 46.4 28.5 Synthetic 18.3 50.2 30.8 SLAKE 45.1 67.3 53.8 SLAKE + Synthetic 45.1 67.3 53.8 Table 2: Results of zero-/few-shot in-context prediction for domain adaptation with varying degrees of retrieval dataset access. We use MPRgen_PM with k = 1 for all settings except k = 50 for the noisy synthetic dataset.
We also examine the effect of using different datasets for retrieval. Table 2 illustrates the zeroshot/few-shot accuracies when applying a source model to a target dataset with different retrieval datasets. Increasing the retrieval dataset's quality improves the model's adaptation capability to new questions. Without any retrieval, open question accuracy is as low as 6%. Providing access to a noisy synthetic retrieval dataset improves open question performance. Using a higher quality in-domain retrieval set further enhances performance in all
![6_image_1.png](6_image_1.png)
categories, achieving over 30% improvement in open question accuracy compared to the zero-shot baselines. Combining in-domain retrieval data with noisy synthetic data further boosts accuracy in all three accuracy categories on VQA-RAD. However, we observed no further improvement when combining the synthetic and SLAKE datasets. With a manual investigation, we find that questions in SLAKE
have much simpler synthetic variants than those in VQA-RAD. Therefore, SLAKE already provides the most similar examples during retrieval, and additional synthetic data provides minimal gains.
## 5.3 How Many Shots Are Needed?
For adaptation at test time, we investigate the effect of varying the number of retrieved image-question pairs from the target dataset for constructing the retrieval prompts in Figure 3. Regardless of the number of pairs retrieved, the overall target accuracy of MPRgen is always above the none-retrieval baseline (i.e., zero-shot). We hypothesize that accuracy peaks when k = 1 and stabilizes as k increases due to the small dataset size. MPRgen outperforms a purely nearest neighbor-based approach when testing on the VQA-RAD dataset. However, on a syntactically simpler dataset (i.e., SLAKE), we also find that a nearest neighbor-based classifier can achieve higher accuracy than our model.
## 5.4 In-Context Prediction Vs Finetuning
While further finetuning neural models on the target dataset often successfully learns to adapt to the new distribution, this technique often results in catastrophic forgetting (Thompson et al.,
2019). Figure 4 shows our experiments with further finetuning a source-trained model on a target dataset. First, we initialize three models with a MPRgen_PM checkpoint trained on SLAKE and adapt them to VQA-RAD. The first model is frozen, only using in-context prediction with retrieved target data (green). Another model is further finetuned on the target data without in-context prediction
![6_image_0.png](6_image_0.png)
(red). The last model uses fine-tuning first and then does in-context predictions with retrieval (blue).
Several findings can be observed. First, we find that in-context prediction with MPRgen_PM can mitigate the forgetting issue and improve cross-dataset adaptation. Second, when target data is scarce, incontext prediction outperforms further finetuning.
Although the finetuned model achieved higher test accuracy when using all the target data, it suffered significant performance loss in its original domain.
Lastly, combining in-context prediction with further finetuning eliminates most of this forgetting with minimal target domain performance loss.
## 5.5 In-Domain Evaluation
Overall Accuracy We also compare our proposed model with existing models for the indomain setting on SLAKE and VQA-RAD. We highlight the overall, open, and closed test accuracy for each dataset. We also evaluate our method with three contexts to analyze the effect of each component of our prompting method in Table 3.
As expected, the model variants perform worse when we only provide questions as inputs. Under the same setting where both the question and image features are provided, our generative model is competitive with the state-of-the-art discriminative models. Besides, we also find that using an in-domain dataset for retrieval does not provide performance gains, indicating that models can easily fit a small in-domain dataset, and retrieving prompts from the same training set does not pro-
| Context | Method | SLAKE | VQA-RAD | | | | |
|----------------------------------|----------|---------|-----------|--------|---------|------|------|
| Open | Closed | Overall | Open | Closed | Overall | | |
| Question Only | MPRgen | 45.6 | 68.3 | 54.9 | 22.5 | 63.5 | 50.3 |
| MPRdisc | 48.5 | 66.6 | 55.6 | 38.5 | 72.6 | 59.0 | |
| MPRgen | 71.5 | 76.7 | 73.5 | 57.7 | 77.6 | 69.7 | |
| MPRgen_PM | 74.1 | 82.2 | 77.3 | 62.6 | 78.3 | 72.1 | |
| MPRdisc | 78.3 | 84.9 | 80.9 | 57.7 | 76.2 | 68.8 | |
| MPRdisc_BAN | 76.0 | 79.8 | 77.5 | 60.4 | 81.6 | 73.2 | |
| PubMedCLIP (Eslami et al., 2021) | 78.4 | 82.5 | 80.1 | 60.1 | 80.0 | 72.1 | |
| MMBert (Khare et al., 2021) | - | - | - | 63.1 | 77.9 | 72.0 | |
| QCR (Zhan et al., 2020) | - | - | - | 60.0 | 79.3 | 71.6 | |
| MEVF (Nguyen et al., 2019) | - | - | - | 43.9 | 75.1 | 62.7 | |
| Image and Question | MPRgen | 73.0 | 79.8 | 75.7 | 57.7 | 77.3 | 69.5 |
| MPRgen_PM | 73.5 | 80.5 | 76.2 | 60.4 | 80.9 | 72.8 | |
| Image, Question, and Retrieval | MPRdisc | 75.0 | 81.0 | 77.4 | 51.6 | 78.0 | 67.5 |
| MPRdisc_BAN | 77.5 | 82.2 | 79.4 | 62.6 | 80.1 | 73.2 | |
vide extra useful information.
Finegrained Accuracy Figure 5 in Appendix also shows the in-domain performance of our model variants across different question types for both datasets. The results indicate that all models generally struggle with questions requiring more complex reasoning, such as Position, Abnormality, and Knowledge Graph (KG) questions.
## 6 Related Work
Retrieval-Based VQA Retrieval-based methods typically combine parametric models with nonparametric external memory for prediction. This idea first surfaces in KNN-LMs (Khandelwal et al.,
2020), which utilizes a static retrieval data store to help language models adapt rapidly to new domains without further training. Guu et al. (2020) extends this idea by introducing a parametric retriever that learns to attend to relevant documents during training. Recently, Gao et al. (2022) summarizes visual information into natural language to use as a query for dense passage retrieval. The retrieved passages allow for the VQA model to outperform existing works, especially on questions which require outside knowledge. Lin and Byrne
(2022) consider training the retriever in an end-toend manner similar to Lewis et al. (2020) and find that this results in higher answer quality and lower computational training cost.
Different from these methods, we propose to construct multimodal prompts from retrieval to perform zeroshot dataset adaptation. While dataset adaptation of VQA models has been investigated in Agrawal et al. (2023), we focus on the effect of retrieval on generalization capability.
Generative QA Generative QA models focus on predicting answers autoregressively based on the input question. In this setting, the model may either generate the response based solely on model parameters (closed book) (Khashabi et al., 2021; Roberts et al., 2020) or rely on additional retrieved contexts
(open book) (Karpukhin et al., 2020; Lewis et al.,
2020). Our prompt construction method is inspired by the retrieval augmented generator (RAG) model
(Lewis et al., 2020), which retrieves relevant documents to answer questions. Instead of retrieving documents exclusively, we identify suitable imagequestion pairs to perform VQA.
VQA First introduced by Antol et al. (2015),
most VQA systems learn a joint embedding space for images and text to answer questions (Malinowski et al., 2015; Gao et al., 2015). These approaches combine image and text features through either bilinear pooling or attention-based mechanisms (Yang et al., 2016; Lu et al., 2016; Anderson et al., 2018; Guo et al., 2021). To help models understand the relationships between objects in an image, graph convolutional neural networks were introduced for VQA (Norcliffe-Brown et al.,
2018; Li et al., 2019). Current methods often combine supplemental knowledge with fusion-based approaches to achieve state-of-the-art performance
(Shevchenko et al., 2021; Marino et al., 2021; Wu et al., 2022; Chen et al., 2022). We take a similar approach by using supplementary knowledge to construct context-aware prompts.
## 7 Conclusion
In this work, we propose a flexible prompt-based method for VQA in the medical domain. While our approach is designed for low-resource domains, the generative architecture of our model, in combination with a retrieval component, enables generalization to other fields. Our results are on par with state-of-the-art accuracies on the SLAKE and VQA-RAD datasets and show promising zero-shot and few-shot transfer results across different medical datasets. We hope these results can offer a baseline to compare with future work on knowledgeintensive and reasoning tasks.
## 8 Limitations
When evaluating our model in a cross-dataset adaptation setting, our experiments indicate the importance of using a retrieval dataset. It is challenging to procure high-quality and volume retrieval datasets, especially in low-resource domains such as the medical field. Fortunately, the VQA-RAD
and SLAKE datasets we evaluate on contain professionally annotated medical images. We also overcome the lack of data by creating a synthetic dataset from the medical ROCO image-captioning dataset.
Additionally, our model struggles with questions requiring multi-step reasoning, such as knowledge graph, abnormality, and position questions. Although performances in these question types are not far below the overall accuracy, future work may consider supplementary knowledge-based retrieval to assist in these challenging question types.
## 9 Ethics Statement
Although medical VQA provides exciting opportunities for future AI-assisted clinical diagnostic tools, there are several ethical challenges associated with these approaches.
## Patient Safety And Model Transparency Since
the model decision process for deep learning models is difficult to understand, these models should only be used as an auxiliary tool. This obscure decision process is crucial to clarify in the medical domain, in which a poor diagnosis or choice of treatment can significantly affect patient lives. For example, medical experts found that cancer treatment recommendation software often gave unsafe or incorrect treatment advice in a recent study
(Ross and Swetlitz, 2018).
Dataset Biases The fairness of medical AI systems depends on the distribution of people in its training dataset. To ensure that AI algorithms display fairness to all races, genders, and ethnic groups, practitioners should verify that the training dataset contains an equal representation of all groups. Before deploying our architecture or other deep learning-based models to a clinical setting, practitioners should ensure that their patient's background is adequately represented in the training data.
## References
Aishwarya Agrawal, Ivana Kajic, Emanuele Bugliarello, ´
Elnaz Davoodi, Anita Gergely, Phil Blunsom, and Aida Nematzadeh. 2023. Rethinking evaluation practices in visual question answering: A case study on out-of-distribution generalization. In Proceedings of the 2023 European Chapter of the Association for Computational Linguistics (Findings).
Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang.
2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6077–6086.
Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering.
In Proceedings of the IEEE international conference on computer vision, pages 2425–2433.
Wenhu Chen, Hexiang Hu, Xi Chen, Pat Verga, and William W Cohen. 2022. Murag: Multimodal retrieval-augmented generator for open question answering over images and text. In *Proceedings of the* 2022 Conference on Empirical Methods in Natural Language Processing.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2021.
An image is worth 16x16 words: Transformers for image recognition at scale. In *International* Conference on Learning Representations.
Sedigheh Eslami, Gerard de Melo, and Christoph Meinel. 2021. Does clip benefit visual question answering in the medical domain as much as it does in the general domain? arXiv preprint arXiv:2112.13906.
Feng Gao, Qing Ping, Govind Thattai, Aishwarya Reganti, Ying Nian Wu, and Prem Natarajan. 2022.
Transform-retrieve-generate: Natural languagecentric outside-knowledge visual question answering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5067–5077.
Haoyuan Gao, Junhua Mao, Jie Zhou, Zhiheng Huang, Lei Wang, and Wei Xu. 2015. Are you talking to a machine? dataset and methods for multilingual image
question. *Advances in neural information processing* systems, 28.
Dalu Guo, Chang Xu, and Dacheng Tao. 2021. Bilinear graph networks for visual question answering. IEEE
Transactions on Neural Networks and Learning Systems.
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. 2020. Retrieval augmented language model pre-training. In *International Conference on Machine Learning*, pages 3929–3938.
PMLR.
Sadid A Hasan, Yuan Ling, Oladimeji Farri, Joey Liu, Henning Müller, and Matthew P Lungren. 2018.
Overview of imageclef 2018 medical domain visual question answering task. In *Conference and Labs of* the Evaluation Forum (Working Notes).
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick ˘
Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for opendomain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing.
Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. Generalization through memorization: Nearest neighbor language models. In International Conference on Learning Representations.
Yash Khare, Viraj Bagal, Minesh Mathew, Adithi Devi, U Deva Priyakumar, and CV Jawahar. 2021. Mmbert:
multimodal bert pretraining for improved medical vqa. In *2021 IEEE 18th International Symposium on* Biomedical Imaging (ISBI), pages 1033–1036. IEEE.
Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2021. Unifiedqa: Crossing format boundaries with a single qa system. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing (Findings).
Jin-Hwa Kim, Jaehyun Jun, and Byoung-Tak Zhang.
2018. Bilinear attention networks. *Advances in neural information processing systems*, 31.
Olga Kovaleva, Chaitanya Shivade, Satyananda Kashyap, Karina Kanjaria, Joy Wu, Deddeh Ballah, Adam Coy, Alexandros Karargyris, Yufan Guo, David Beymer Beymer, Anna Rumshisky, and Vandana Mukherjee Mukherjee. 2020. Towards visual dialog for radiology. In Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing, pages 60–69, Online. Association for Computational Linguistics.
Jason J Lau, Soumya Gayen, Asma Ben Abacha, and Dina Demner-Fushman. 2018. A dataset of clinically generated visual questions and answers about radiology images. *Scientific data*, 5(1):1–10.
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021.
The power of scale for parameter-efficient prompt tuning. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 3045–3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. *Advances in Neural Information Processing Systems*, 33:9459–9474.
Linjie Li, Zhe Gan, Yu Cheng, and Jingjing Liu. 2019.
Relation-aware graph attention network for visual question answering. In *Proceedings of the IEEE/CVF*
international conference on computer vision, pages 10313–10322.
Weizhe Lin and Bill Byrne. 2022. Retrieval augmented visual question answering with outside knowledge.
In *Proceedings of the 2022 Conference on Empirical* Methods in Natural Language Processing.
Zhihong Lin, Donghao Zhang, Qingyi Tao, Danli Shi, Gholamreza Haffari, Qi Wu, Mingguang He, and Zongyuan Ge. 2021. Medical visual question answering: A survey. *ArXiv*, abs/2111.10056.
Bo Liu, Li-Ming Zhan, Li Xu, Lin Ma, Yan Yang, and Xiao-Ming Wu. 2021. Slake: a semantically-labeled knowledge-enhanced dataset for medical visual question answering. In 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), pages 1650–1654. IEEE.
Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh.
2016. Hierarchical question-image co-attention for visual question answering. *Advances in neural information processing systems*, 29.
Mateusz Malinowski, Marcus Rohrbach, and Mario Fritz. 2015. Ask your neurons: A neural-based approach to answering questions about images. In Proceedings of the IEEE international conference on computer vision, pages 1–9.
Kenneth Marino, Xinlei Chen, Devi Parikh, Abhinav Gupta, and Marcus Rohrbach. 2021. Krisp: Integrating implicit and symbolic knowledge for opendomain knowledge-based vqa. In *Proceedings of* the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14111–14121.
Jong Hak Moon, Hyungyung Lee, Woncheol Shin, Young-Hak Kim, and Edward Choi. 2022. Multimodal understanding and generation for medical images and text via vision-language pre-training. IEEE
Journal of Biomedical and Health Informatics.
Medhini Narasimhan and Alexander G Schwing. 2018.
Straight to the facts: Learning knowledge base retrieval for factual visual question answering. In Proceedings of the European conference on computer vision (ECCV), pages 451–468.
Binh D Nguyen, Thanh-Toan Do, Binh X Nguyen, Tuong Do, Erman Tjiputra, and Quang D Tran. 2019.
Overcoming data limitation in medical visual question answering. In *International Conference on Medical Image Computing and Computer-Assisted Intervention*, pages 522–530. Springer.
Will Norcliffe-Brown, Stathis Vafeias, and Sarah Parisot.
2018. Learning conditioned graph structures for interpretable visual question answering. Advances in neural information processing systems, 31.
Obioma Pelka, Sven Koitka, Johannes Rückert, Felix Nensa, and Christoph M Friedrich. 2018. Radiology objects in context (roco): a multimodal image dataset.
In Intravascular Imaging and Computer Assisted Stenting and Large-Scale Annotation of Biomedical Data and Expert Label Synthesis, pages 180–189.
Springer.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 8748–8763. PMLR.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67.
Adam Roberts, Colin Raffel, and Noam Shazeer. 2020.
How much knowledge can you pack into the parameters of a language model? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing.
Casey Ross and Ike Swetlitz. 2018. Ibm's watson supercomputer recommended 'unsafe and incorrect'cancer treatments, internal documents show. *Stat*, 25.
Violetta Shevchenko, Damien Teney, Anthony Dick, and Anton van den Hengel. 2021. Reasoning over vision and language: Exploring the benefits of supplemental knowledge. *arXiv preprint arXiv:2101.06013*.
Anshumali Shrivastava and Ping Li. 2014. Asymmetric lsh (alsh) for sublinear time maximum inner product search (mips). In Advances in Neural Information Processing Systems, volume 27. Curran Associates, Inc.
Ajay K Tanwani, Joelle Barral, and Daniel Freedman.
2022. Repsnet: Combining vision with language for automated medical reports. In *International Conference on Medical Image Computing and ComputerAssisted Intervention*, pages 714–724. Springer.
Brian Thompson, Jeremy Gwinnup, Huda Khayrallah, Kevin Duh, and Philipp Koehn. 2019. Overcoming catastrophic forgetting during domain adaptation of neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2062–2068.
Jialin Wu, Jiasen Lu, Ashish Sabharwal, and Roozbeh Mottaghi. 2022. Multi-modal answer validation for knowledge-based vqa. In Proceedings of the AAAI
Conference on Artificial Intelligence, volume 36, pages 2712–2721.
Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. 2016. Stacked attention networks for image question answering. In *Proceedings of* the IEEE conference on computer vision and pattern recognition, pages 21–29.
Li-Ming Zhan, Bo Liu, Lu Fan, Jiaxin Chen, and XiaoMing Wu. 2020. Medical visual question answering via conditional reasoning. In *Proceedings of the 28th* ACM International Conference on Multimedia, pages 2345–2354.
Yangyang Zhou, Xin Kang, and Fuji Ren. 2019. Tua1 at imageclef 2019 vqa-med: a classification and generation model based on transfer learning. In *Conference* and Labs of the Evaluation Forum (Working Notes).
## Appendix
| Q Type | A Type Question Templates & Answer Templates (Tt) | Answer Templates | |
|-------------------------------------------------------------------------|-------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------|-------------------------|
| Organ | Open | Q: What part of the body is being imaged? What is the organ shown in this image? ... A:{Brain, Chest, ...} | |
| Organ | Closed Q: Does the picture contain {}? Is this a study of the {}? ... | A:{Yes,No} | |
| Organ System Open | Q: What organ system is pictured? What system is this pathology in? ... | A:{Respiratory System, Cardiovascular System, ...} | |
| Organ System Closed Q: Is this an image of the {}? Is the {} shown? ... | A:{Yes,No} | | |
| Modality | Open | Q: What kind of scan is this? How was this image taken? ... | A:{MRI, X-ray, ...} |
| Modality | Closed Q: Is this a {}? Is the image a {}? ... | A:{Yes,No} | |
| Plane | Open | Q: What image plane is this? How is the image oriented? ... | A:{Axial, Coronal, ...} |
| Plane | Closed Q: Is this a {} plane? Is the image a {} section? ... | A:{Yes,No} | |
Table 4: Example templates for different question categories and question types.
| Q&A Types (t ∈ Qtype × Atype) | Keywords (Wt) |
|---------------------------------|------------------------------------------------------------------------------------------|
| Organ & Open | Heart, Lungs, Lung, Liver, Breasts, Chest, Cardiovascular System, Respiratory System ... |
| Plane & Open | Axial, Coronal, Supratentorial, Posteroanterior ... |
| Modality & Open | MRI, T1, T2, CT, X-ray, Ultrasound, Flair ... |
Table 5: Example keywords for different question types.
## A Templates
We use the question and keyword templates in Tables 4 and 5 to construct a synthetic retrieval set. For open questions, the question template is static. However, the answer to open questions may be any of the keywords w ∈ Wt. Closed question templates have a slot that is filled in by one of the keywords w ∈ Wt, and the answer to these questions is either yes or no:
Prompt Construction Order Prompt Template (Tprompt) Possible Quantifiers (Q) Open Closed Overall Question, Retrieval, Image I believe the answer is {quantifier} {answer} very unlikely, unlikely, maybe, likely, very likely, certainly 39.6 65.0 54.9 Image, Retrieval, Question I believe the answer is {quantifier} {answer} very unlikely, unlikely, maybe, likely, very likely, certainly 39.0 65.3 54.9 Image, Question, Retrieval {answer} is {quantifier} the answer very unlikely, unlikely, maybe, likely, very likely, certainly 37.9 68.6 56.4 Image, Question, Retrieval I believe the answer is {quantifier} {answer} very unlikely, unlikely, maybe, likely, very likely, certainly 39.6 65.3 55.1 Table 6: Using different variants of our prompt results in similar performances. This table shows the open, closed, and overall accuracy of MPRgen_PM in a domain adaptation setting from SLAKE to VQA-RAD, retrieving k=1 nearest question-image pairs.
## B Prompt Variations
During the prompt construction process, we experiment with different prompt ordering and wording of retrieval prompts. We illustrate different template wording in the Prompt Template column of Table 6.
Each prompt contains a quantifier that is filled in with an expression ranging from "very unlikely" to
"certainly" based on the confidence score of y∗, described in Section 3.2. We found that performance does not significantly change when changing these aspects of the prompt, and we ultimately decided to use settings in the last row of the table.
## C Dataset Information
Tables 7 and 8 report descriptive statistics about the datasets used in our experiments. Although the synthetic data contains more question-answer pairs than SLAKE and VQA-RAD, it has noisier labels and more limited question types. SLAKE and VQA-RAD have larger question-answer diversity and share several question types, such as Organ, Position, and Abnormality questions.
Figure 5 displays our model's in-domain accuracies on SLAKE and VQA-RAD. The models perform best on Color, Attribute, and Size questions in VQA-RAD. The discriminative variants have better accuracy overall, but can not be directly applied to other datasets. We observed lower accuracy on Modality and Organ questions in VQA-RAD, which we attribute to the diversity of question and answer phrasing in these VQA-RAD question types.
| Dataset | Train Split | Validation Split | Test Split | Number of Question Types |
|---------------------------|---------------|--------------------|--------------|----------------------------|
| SLAKE | 4918 | 1053 | 1061 | 10 |
| VQA-RAD | 3064 | - | 451 | 11 |
| ROCO (image-caption only) | 65460 | 8183 | 8182 | - |
| Synthetic | 56526 | - | - | 3 |
![12_image_0.png](12_image_0.png)
![12_image_1.png](12_image_1.png)
## D Attention Visualization
Transformer-based models utilize attention to calculate dependencies between inputs which may be important for prediction. Since our MPR model uses a pretrained T5 encoder to combine features from several sources, a visualization of its attention scores may indicate which parts of the image contribute to its answers. Figure 6 illustrates the encoder self-attention in different attention heads and layers of our model when asked a challenging position question. The results suggest that some attention modules may attend to the entire input image (Layer 1, Head 4), whereas others may look for local dependencies by attending to adjacent tokens (Layer 2, Head 7).
In addition to encoder self-attention, cross attention in encoder-decoder transformer architectures may also illustrate which tokens from the input prompt contribute the most when generating answer tokens.
Since the prompt to our model consists of image tokens, we visualize which image regions have the highest attention scores in 8.
Existing work has shown the effectiveness of using retrieval from a data store to rapidly adapt language models to new domains (Khandelwal et al., 2020). KNN LMs uses a blending parameter λ ∈ [0, 1] to control the influence of retrieved information towards prediction:
$$\lambda\mathrm{pkNN}(y|x)+(1-\lambda)\mathrm{pLM}(y|x)$$
λpkNN(y|x) + (1 − λ)pLM(y|x) (8)
This method assumes the availability of a language model pLM(y|x) and a retrieval model pkNN(y|x)which can predict the next vocabulary token y given context x. However, given our retrieval set which consists of variable length answers and image data, it is difficult to estimate pkNN(y|x)
directly from our data store. Consequently, we augment our model input with retrieval prompts to allow for the implicit learning of retrieval reliance in an end-to-end manner. Figure 7 shows the average cross attention scores to the retrieval portion of the prompt when evaluating on test data. The results demonstrate that when the model prediction matches the retrieved answer, the attention scores to the corresponding prompt section are significantly higher. Based on this observation, we believe the model has learned how to weigh the retrieved information through end-to-end training.
![14_image_0.png](14_image_0.png)
![15_image_0.png](15_image_0.png)
![15_image_1.png](15_image_1.png)
![15_image_2.png](15_image_2.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 8
✓ A2. Did you discuss any potential risks of your work?
Section 9
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3
✓ B1. Did you cite the creators of artifacts you used?
Section 3, Section 4.3
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
All datasets and artifact models used in this study are publicly available and we provide links/citations to them.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 3
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Section 4
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 3, Section 4
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 4, Appendix C
## C ✓ **Did You Run Computational Experiments?** Section 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 3.4, Section 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4, Section 5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4. Code and instructions to reproduce our results are provided in our GitHub repository.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
khincha-etal-2023-infosync | {I}nfo{S}ync: Information Synchronization across Multilingual Semi-structured Tables | https://aclanthology.org/2023.findings-acl.159 | Information Synchronization of semi-structured data across languages is challenging. For example, Wikipedia tables in one language need to be synchronized with others. To address this problem, we introduce a new dataset InfoSync and a two-step method for tabular synchronization. InfoSync contains 100K entity-centric tables (Wikipedia Infoboxes) across 14 languages, of which a subset ({\textasciitilde}3.5K pairs) are manually annotated. The proposed method includes 1) Information Alignment to map rows and 2) Information Update for updating missing/outdated information for aligned tables across multilingual tables. When evaluated on InfoSync, information alignment achieves an F1 score of 87.91 (en {\textless}-{\textgreater} non-en). To evaluate information updation, we perform human-assisted Wikipedia edits on Infoboxes for 532 table pairs. Our approach obtains an acceptance rate of 77.28{\%} on Wikipedia, showing the effectiveness of the proposed method. | # Infosync**: Information Synchronization Across Multilingual** Semi-Structured Tables
Siddharth Khincha1, Chelsi Jain2**, Vivek Gupta**3†∗
, Tushar Kataria3†**, Shuo Zhang**4 1IIT Guwahati, 2CTAE, Udaipur, 3University of Utah†
,
4Bloomberg, [email protected], [email protected]
{vgupta, tkataria}@cs.utah.edu, {szhang611}@bloomberg.net
## Abstract
![0_Image_0.Png](0_Image_0.Png)
Information Synchronization of semistructured data across languages is challenging.
For instance, Wikipedia tables in one language should be synchronized across languages. To address this problem, we introduce a new dataset INFOSYNC and a two-step method for tabular synchronization. INFOSYNC
contains 100K entity-centric tables (Wikipedia Infoboxes) across 14 languages, of which a subset (∼3.5K pairs) are manually annotated.
The proposed method includes 1) *Information* Alignment to map rows and 2) *Information Update* for updating missing/outdated information for aligned tables across multilingual tables.
When evaluated on INFOSYNC, information alignment achieves an F1 score of 87.91 (en
↔ non-en). To evaluate information updation, we perform human-assisted Wikipedia edits on Infoboxes for 603 table pairs. Our approach obtains an acceptance rate of 77.28% on Wikipedia, showing the effectiveness of the proposed method.
## 1 Introduction
English articles across the web are more timely updated than other languages on particular subjects.
Meanwhile, culture differences, topic preferences, and editing inconsistency lead to information mismatch across multilingual data, e.g., outdated information or missing information (Jang et al., 2016; Nguyen et al., 2018). Online encyclopedia, e.g.,
Wikipedia, contains millions of articles that need to be updated constantly, involving expanding existing articles, modifying content such as correcting facts in sentences (Shah et al., 2019) and altering Wikipedia categories (Zhang et al., 2020b). However, more than 40% of Wikipedia's active editors are in English. At the same time, only 15% of the world population speak English as their first language. Therefore, information in languages other Figure 1: Janaki Ammal Infoboxes in English (right)
and Hindi (left). Hindi Table lacks the "British Rule of India" as a cultural context. Two value mismatches (a)
The Hindi table doesn't list *Died* key's state (b) Institution values differ. The Hindi table mentions "residence" while the English table doesn't. Hindi Table is missing Thesis, Awards, and Alma Mater keys. Both don't mention parents, early education, or honors.
than English may not be as updated (Bao et al.,
2012). See Figure 1 for an example of an information mismatch for the same entity across different languages. In this work, we look at synchronizing information across multilingual content.
To overcome the above-mentioned problem, we formally introduce the task of Information Synchronization for multilingual articles, which includes paragraphs, tables, lists, categories, and images.
But due to its magnitude and complexity, synchronizing all of the information across different modalities on a webpage is daunting. Therefore, this work focuses on semi-structured data, a.k.a. table synchronization in a few languages, as the first step toward our mission.
We consider Infobox, a particular type of semistructured Wikipedia tables (Zhang and Balog, 2020a), which contain entity-centric information, where we observe various information mismatches, e.g., missing rows (cf. Figure 1). One intuitive idea to address them is translation-based. However, the Infoboxes contain rows with implicit context; translating these short phrases is prone to errors and leads to ineffective synchronization (Minhas
∗Corresponding Author †Equal Contribution et al., 2022). To systematically assess the challenge, we curate a dataset, namely INFOSYNC, consisting of 100K multilingual Infobox tables across 14 languages and covering 21 Wikipedia categories.
∼3.5K table pairs of English to non-English or nonEnglish to non-English are sampled and manually synchronized.
We propose a table synchronization approach that comprises two steps: (1.) **Information Alignment:** align table rows, and (2.) **Information Update:** update missing or outdated rows across language pairs to circumvent the inconsistency. The information alignment component aims to align the rows in multilingual tables. The proposed method uses corpus statistics across Wikipedia, such as key and value-based similarities. The information update step relies on an effective rule-based approach. We manually curate nine rules: row transfer, time-based, value trends, multi-key matching, append value, high to low resource, number of row differences, and rare keys. Both tasks are evaluated on INFOSYNC to demonstrate their effectiveness. Apart from the automatic evaluation, we deploy an online experiment that submits the detected mismatches by our method to Wikipedia after strictly following Wikipedia editing guidelines.
We monitor the number of accepted and rejected edits by Wikipedia editors to demonstrate its efficacy. All proposed edits are performed manually, in accordance with Wikipedia's editing policies and guidelines1, rule set2, and policies3. These changes were subsequently accepted by Wikipedia editors, demonstrating the efficacy of our methodology.
The contributions in this work are as follows:
1) We investigate the problem of Information Synchronization across multilingual semi-structured data, i.e., tables, and construct a large-scale dataset INFOSYNC; 2) We propose a two-step approach
(alignment and updation) and demonstrate superiority over exiting baselines; 3) The rule-based updation system achieves excellent acceptance when utilized for human-assisted Wikipedia editing. Our INFOSYNC dataset and method source code are available at https://info-sync.github.io/
info-sync/.
## 2 Motivation 2.1 Challenges In Table Synchronization
We observe the following challenges when taking Wikipedia Infoboxes as a running example. Note this is not an exhaustive list.
MI: Missing Information represents the problem where information appears in one language and is missing in others. This may be due to the fact that the table is out-of-date or to cultural, social, or demographic preferences for modification
(cf. Figure 1).
OI: Outdated Information denotes that information is updated in one language but not others.
IR: Information Representation varies across languages. For example, one attribute about "parents" can be put in a single row or separate rows
("Father" and "Mother").
UI: Unnormalized Information presents cases where table attributes can be expressed differently.
For example, "known for" and "major achievements" of a person represent the same attribute
(i.e., paraphrase).
LV: Language Variation means that information is expressed in different variants across languages.
This problem is further exaggerated by the implicit context in tables when translating. E.g., "Died" in English might be translated to "Overleden" (Pass Away) or "overlijdensplaats" (Place of Death) in Dutch due to missing context.
SV: Schema Variation denotes that the schema
(template structure) varies. For example, extraction of "awards" in Musician tables can be harrowing due to dynamic on-click lists (*Full Award Lists*).
EEL: Erroneous Entity Linking is caused by mismatched linkages between table entities among multiple languages, e.g., "ABV" and "Alcohol by Volume".
## 2.2 Wikipedian "Biases"
Wikipedia is a global resource across over 300 languages. However, the information is skewed toward English-speaking countries (Roy et al., 2020)
as English has the most significant Wikipedia covering 23% (11%) of total pages (articles). Most users' edits (76%) are also done in English Wikipedia.
English Wikipedia also has the highest number of page reads (49%) and page edits (34%), followed by German (20% and 12%) and Spanish (12% and 6%), respectively. Except for the top 25 languages, the total number of active editors, pages, and edits is less than 1% (Warncke-Wang et al., 2012; Alonso and Robinson, 2016).
Multilingual Wikipedia articles evolve separately due to cultural and geographical bias (Callahan and Herring, 2011; REAGLE and RHUE, 2011; Tinati et al., 2014), which prevents information synchronization. For example, information on "Narendra Modi" (India's Prime Minister) is more likely to be better reflected in Hindi Wikipedia than in other Wikipedias. This means that in addition to the obvious fact that smaller Wikipedias can be expanded by incorporating content from larger Wikipedias, larger Wikipedias can also be augmented by incorporating information from smaller Wikipedias. Thus, information synchronization could assist Wikipedia communities by ensuring that information is consistent and of good quality across all language versions.
## 3 The Infosync **Dataset**
To systematically assess the challenge of information synchronization and evaluate the methodologies, we aim to build a large-scale table synchronization dataset INFOSYNC based on entity-centric Wikipedia Infoboxes.
## 3.1 Table Extraction
We extract Wikipedia Infoboxes from pages appearing in multiple languages on the same date to simultaneously preserve Wikipedia's original information and potential discrepancies. These extracted tables are across 14 languages and cover 21 Wikipedia categories.
Languages Selection. We consider the following languages English(en), French(fr), German(de),
Korean(ko), Russian(ru), Arabic(ar), Chinese(zh),
Hindi(hi), Cebuano(ceb), Spanish(es), Swedish(sv),
Dutch(nl), Turkish(tr), and Afrikaans(ak). We extracted tables across 14 languages and covered 21 diverse Wikipedia categories. In these 14 languages, four are low resource (af, ceb, hi, tr) <
6000, seven of them medium resource (ar, ko,nl, sv, zh,ru, de,es) (6000–10000), and the remaining one are high resource (en, en, fr), w.r.t. to the number of infobox total tables (see Table 1 in paper). Our choices were motivated by the following factors:- a) Cover all the continents, thus covering the majority and diverse population. Out of chosen languages, 7 (English, French, German, Spanish, Swedish, Dutch, and Turkish) are European. b).
They have sufficient pages with info boxes; each entity info box is present in at least five languages, and c) an adequate number of rows (5 and above)
facilitates better data extraction.
Categories. Extracted tables cover twenty-one simple, diverse, and popular topics: Airport, Album, Animal, Athlete, Book, City, College, Company, Country, Food, Monument, Movie, Musician, Nobel, Painting, Person, Planet, Shows, and Stadiums. We observe that *Airport* has the most number of entity tables followed by *Movie* and *Shows*, as shown in Table 10. Other extraction details are provided in Appendix A.1.
## 3.2 Tabular Information Mismatched
| Ln | Average Table Transfer % | Language Statistics | | |
|------|----------------------------|-----------------------|----------|-------|
| C1 | C1 → P L ln | P L ln → C1 | # Tables | AR |
| af | 17.46 | 400.5 | 1575 | 9.91 |
| ar | 34.02 | 27.38 | 7648 | 13.01 |
| ceb | 42.87 | 134.88 | 3870 | 7.82 |
| de | 40.73 | 27.12 | 8215 | 7.88 |
| en | 45.85 | 0.32 | 12431 | 12.60 |
| es | 38.78 | 9.00 | 9920 | 12.59 |
| fr | 41.25 | 4.73 | 10858 | 10.30 |
| hi | 18.39 | 358.97 | 1724 | 10.91 |
| ko | 31.13 | 40.51 | 6601 | 9.35 |
| nl | 33.69 | 24.6 | 7837 | 10.46 |
| ru | 36.98 | 14.54 | 9066 | 11.41 |
| sv | 35.53 | 24.62 | 7985 | 9.89 |
| tr | 28.99 | 59.33 | 5599 | 10.14 |
| zh | 32.16 | 32.71 | 7140 | 12.43 |
Table 1: **Average Table Transfer**:- Column 2 shows the average number of tables missing in other languages which can be transferred from C1. Column 3 shows the average number of tables missing in C1, which we can transfer from all languages to C1. Here L is the set of all languages (ln) except source or transfer language. Language Statistics:- The number of tables and average rows (AR) per table across different categories for each language.
We analyze the extracted tables in the context of the synchronization problem and identify the information gap. The number of tables is biased across languages, as shown in Table 1. We observe Afrikaans, Hindi, and Cebuano have a significantly less number of tables. Similarly, the table size is biased across several languages. Dutch and Cebuano have the last rows. In addition, the number of tables across categories is uneven; refer to Table 2. Airport and Movie have the highest number of tables. Table 2 also reports the average number of rows for a category. Planet, Company, and Movie have the highest average number of rows.
When synchronizing a table from one language to another, we observe that the maximum number of tables can be transferred from English, French, and Spanish from Column 1 in Table 1. Afrikaans,
| Topic | # Tables | AR | Topic | # Tables | AR |
|----------|------------|--------------|----------|------------|-------|
| Airport | 18512 | 9.66 | Diseases | 3973 | 6.03 |
| Food | 6184 | 7.93 | Monument | 1550 | 9.71 |
| Album | 5833 | 7.58 | Medicine | 2516 | 15.20 |
| Animal | 3304 | 8.27 | Movie | 12082 | 13.29 |
| Athlete | 3209 | 9.09 | Musician | 2729 | 9.53 |
| Book | 1550 | 9.99 | Nobel | 9522 | 9.84 |
| Painting | 3542 | 7.05 | Country | 3338 | 22.85 |
| City | 3088 | 14.45 Person | 2252 | 11.87 | |
| College | 1857 | 11.01 Planet | 1233 | 16.80 | |
| Company | 2225 | 13.85 Shows | 5644 | 13.86 | |
| Stadium | 6326 | 10.94 | | | |
Hindi, and Cebuano have the least overlapping information (Column 3) with all other languages.
The number of rows (Column 5) varies substantially between languages, with Spanish and Arabic having the highest number.
## 3.3 Infosync **Evaluation Benchmark**
We construct the evaluation benchmark by manually mapping the table's pairs in two languages.
The table pairs we consider can be broadly split into English ↔ Non-English and Non-English ↔
Non-English. The annotations are conducted as follows.
English ↔ **Non-English:** We sample 1964 table pairs, where a minimum of 50 pairs for each category and language are guaranteed. We divide the annotated dataset, ratio of 1 : 2, into validation and test sets. The non-English tables are translated into English first and then compared against the English version. Furthermore, native speakers annotated 200 table pairs for English ∗←→ Hindi and English
∗←→ Chinese to avoid minor machine translation errors.
Non-English ↔ **Non-English:** We consider six non-English languages: two from each High resource (French, Russian), Medium Resource (German, Korean), and Low Resource (Hindi, Arabic),
w.r.t. the number of tables in INFOSYNC. We sample and annotate 1589 table pairs distributed equally among these languages, where we choose an average of ∼ 50 tables for all pairs of languages.
Both are translated into English before manually mapping them.
In addition, for more detailed analysis, we also annotate metadata around table synchronization challenges such as MI, IR, LV, OI, UI, SV, and EEL, as discussed in §2.1.
## 4 Table Synchronization Method
This section will explain our proposed table synchronization method for addressing missing or outdated information. This method includes two steps:
information alignment and update. The former approach aims to align rows across a pair of tables, and the latter helps to update missing or outdated information. We further deploy our update process in a human-assisted Wikipedia edit framework to test the efficacy in the real world.
## 4.1 Information Alignment
An Infobox consists of multiple rows where each row has a key and value pair. Given a pair of tables Tx = [*...,*(k ix, vix)*, ...*] and Ty = [*...,*(k jy, v jy)*, ...*]
in two languages, table alignment aims to align all the possible pairs of rows, e.g., (k ix, vix) and
(k jy, v jy) refer to the same information and should be aligned. We propose a method that consists of five modules, each of which relaxes matching requirements in order to create additional alignments.
M1. Corpus-based. The pair of rows (kx, vy) in Tx and (ky, vy) in Ty are supposed to be aligned if *cosine*(em(tren x(kx))*, em*(tren y(ky))) > θ1, where em is the embedding, θ1 is the threshold, and tren y() denotes the English translation of k if k is not in English. In order to achieve accurate key translations, we adopt a majority voting approach, considering multiple translations of the same key from different category tables. We consider the key's values and categories as additional context for better translation during the voting process. To simplify the voting procedure, we pre-compute mappings by selecting only the most frequent keys for each category across all languages.
M2. Key-only. This module attempts to align the unaligned pairs in module 1. Using their English translation, it first computes cosine similarity for all possible key pairs. k x will be aligned to k y only if they are mutually most similar key and the similarity is above a certain threshold θ2. This is similar to maximum bipartite matching, treating similarity scores as edge weights followed by threshold-based pruning. And it ensures we are capturing the highest similarity mapping from both language directions. Note that here we use only keys as the text for similarity computation.
M3. Key value bidirectional. This module is similar to step 2, except it uses the entire table row for computing similarities, i.e., key + value, using threshold θ3.
M4. Key value unidirectional. This module further relaxes the bidirectional mapping constraint in step 3, i.e., thus removing the requirement of the highest similarity score matching from both sides.
We shift to unidirectional matching between row pairs, i.e., consider the highest similarity in either direction. However, this may result in adding spurious alignments. To avoid this, we have a higher threshold (θ4) than the prior step.
M5. Multi-key. Previous modules only take the most similar key for alignment if exceeding the threshold. In this module, we further relax the constraint to select multiple keys (maximum two), given exceeding a threshold (θ5). Multi-key mapping is sparse, but the above procedure will lead to dense mapping. To avoid this, we introduce a *soft constraint* for value-combination alignment, where multi-key values are merged. We consider valid multi-key alignment when the merge valuecombination similarity score exceeds that of the most similar key.
The thresholds of five modules are tuned in the sequence as stated above.
## 4.2 Information Updation
Information modification includes *Row Append*
(adding missing rows), *Row Update* (replacing or adding values), and *Merge* Rows. We propose a rule-based heuristic approach for information updates. The rules are in form of logical expression (∀(RTx
,RTy
) L 7→ R) applied on infobox tables, where, RTx and RTy represent table rows for language x and y respectively. These rules are applied sequentially according to their priority rank (P.R.). Rules explanations are described below.
R1. Row Transfer. Following the logistic rule of
$$\forall_{(\mathbf{R}_{T_{x}},\mathbf{R}_{T_{y}})}\mathbf{A}_{T_{x}}^{T_{y}}(\mathbf{R}_{T_{x}};\mathbf{R}_{T_{y}})=0$$ $$\mapsto T_{y}\cup tr_{x}^{y}(\mathbf{R}_{T_{x}})\bigwedge\mathbf{A}_{T_{x}}^{T_{y}}(\mathbf{R}_{T_{x}};tr_{x}^{y}(\mathbf{R}_{T_{x}}))=1$$ $$=$$
, where AlTy Tx
(.; .) represents the alignment mapping between two tables Ty and Tx. Unaligned rows are transferred from one table to another.
R2. Multi-Match. We update the table by removing multi-alignments and replacing them with merged information to handle multikey alignments.
R3. Time-based. We update aligned values using the latest timestamp.
R4. Trends (positive/negative). This update applies to cases where the value is highly likely to follow a monotonic pattern (increasing or decreasing) w.r.t. time, e.g., athlete career statistics. The authors curated the positive/negative trend lists.
R5. Append Values. Additional value information from an up-to-date row is appended to the outdated row.
R6. HR to LR. This rule transfers information from high to low resource language to update outdated information.
R7. \#*Rows.* This rule transfers information from bigger (more rows) to smaller (fewer rows)
tables.
R8. Rare Keys (Non Popular). We update information from the table where non-popular keys are likely to be added recently to the outdated table.
The authors also curate non-popular keys.
Detailed formulation of logical rules and their priority ranking are listed in Table 3. Figure 3 in Appendix shows an example of table update.
## Human-Assisted Wikipedia Infobox Edits: We
apply the above rules to assist humans in updating Wikipedia infoboxes. Following Wikipedia edit guidelines4, rule set5, and policies6, we append our update request with a description to provide evidence, which contains (a) up-to-date entity page URL in the source language, (b) exact table rows information, the source language, and the details of the changes, (c) and one additional citation discovered by the editor for extra validation. 7 We further update beyond our heuristic-based rules but are aligned through our information alignment method.
## 5 Experiments
Our experiments assess the efficacy of our proposed two-stage approach by investigating the following questions.
- What is the efficacy of the unsupervised multilingual method for table alignment? (§5.2)
- How significant are the different modules of the alignment algorithm? (§5.2 and §A.6)
- Does the rule-based updating approach effective for information synchronization? (§5.3)
- Can the two-step approach assist humans in updating Wikipedia Infoboxes? (§5.3)
![5_image_1.png](5_image_1.png)
## 5.1 Experimental Setup
Baselines Models. We compare our approach with LaBSE (Feng et al., 2022), and SimCSE (Gao et al., 2021), multilingual sentence transformers embeddings (Reimers and Gurevych, 2020a) in which we include mBERT (case2) with mean pooling (mp) (Reimers and Gurevych, 2020b), and its distill versions (distill mBERT) (Sanh et al., 2019)
all in base form. We also compared with XLMRoBERTa (XLM-R) (Conneau et al., 2019) with mean pooling, and its distill version (Reimers and Gurevych, 2019a) trained via MPNet-based teacher model (MPNet) (Song et al., 2020). For all baseline implementation, we use the Hugging Face transformers (Wolf et al., 2020) and sentence transformers (Reimers and Gurevych, 2019a) library for the multilingual models' implementation.
Hyper-parameter Tuning. For our method, we embed translated English keys and values using MPNet model (Reimers and Gurevych, 2019b). We tune the threshold hyper-parameters using the validation set, 13 of the total annotated set. We sequentially tune the hyper-parameters thresholds (θ1 to θ5) in modules training order. Optimal threshold after tuning are θ1 = (0.8,0.8); θ2 = (0.64, 0.6); θ3= θ3 = (0.54, 0.54); θ4 =(0.9, 0.54); θ5 = (0.88, 0.96)
for Ten ← Tx and Tx ← Ty respectively. We retain the default setting for other models' specific hyperparameters.
Information Alignment. We consider English as our reference language for alignment. Specifically, we translate all multilingual tables to English using an effective table translation approach of XInfoTabS (Minhas et al., 2022). Then, we apply incremental modules as discussed in §4.1. We tune
![5_image_0.png](5_image_0.png)
independently on the validation set for Non-English
↔ Non-English and English ↔ Non-English.
The method is assessed on two sets of metric
(a.) matched score: measure the F1-score between ground truth matched row and predicted alignment, and (b.) unmatched score: measure the F1-score between independent (unmatched) rows in ground truth with predicted unaligned rows. See Figure 2 for the explanations of these metrics.
Information Updation. We apply the heuristicbased approach and deploy the predicted updates for human-assisted edits on Wikipedia Infoboxes.
532 table pairs are edited distributed among Ten
→ Tx, Tx → Ty, and Tx → Ten, where x and y are non-English languages.
## 5.2 Information Alignment
Algorithm Efficacy. Table 4 reports the matched and unmatched scores. For match scores, we observe that the corpus-based module achieves an F1 score exceeding 50 for all language pairs. Using a key-only module boosts the performance by about 5-15 points. Taking the whole row context
(key-value pair) with strict constraints on bidirectional mapping, i.e., two-way similarity, improves performance substantially (more than 16 points).
Further relaxing the bi-direction constraint to unidirectional matching (one-way similarity), we improve our results marginally with less than 0.5 performance points. Thus relaxation of the bi-
| P.R. Rule Name | Logical Rule ∀(RTx | ) L 7→ R | Update Type | |
|---------------------------------------------------------------------------------------------------------------|-------------------------------------------------------|----------------------------------------------------------|--------------------------------------------------------|------------------|
| ,RTy | | | | |
| )AlTy | | | | |
| 1 | Row Transfer | ∀(RTx | (RTx ; RTy ) = 0 | Row Addition |
| ,RTy | Tx | | | |
| 7→ Ty ∪ try x(RTx ) V AlTy (RTx ;try x(RTx )) = 1 Tx | | | | |
| 2 | Multi-Match | ∀(RTx | )( P RTy AlTy (RTx ; RTy )) > 1 | Row Delete |
| ,RTy | Tx | | | |
| 7→ {Ty \ ∪(∀RTy | )=1)RTy } S try x(RTx ) V AlTy (RTx ;try x(RTx )) = 1 | | | |
| AlTy | Tx | | | |
| Tx (RTx ;RTy )AlTy | | | | |
| 3 | Time-based | ∀(RTx | (RTx ; RTy ) = 1 V (isTime(RTx , RTy ) = 1) | Value Substitute |
| ,RTy | Tx | | | |
| V (exTime(RTx ) > exTime(RTy )) 7→ RTy ← try x(RTx ) ,PosTrend)AlTy (RTx ; RTy ) = 1 V exKey(RTx ) ∈ PosTrend | Value Substitute | | | |
| 4 | Positive Trend | ∀(RTx ,RTy | Tx | |
| or | V RTx > RTy 7→ RTy ← RTx | | | |
| Negative Trend | ∀(RTx | ,NegTrend)AlTy (RTx ; RTy ) = 1 V exKey(RTx ) ∈ NegTrend | Value Substitute | |
| ,RTy | Tx | | | |
| V RTx < RTy 7→ RTy ← RTx | | | | |
| 5 | Append Value | RTx = V V ∀(RTx | )AlTy (RTx ; RTy ) = 1 V |RTx [k]| > |RTy [k]| | Value Addition |
| ,RTy | Tx | [k]))RTy ← RTy ∪ try x(v) | | |
| 7→ ∀(v∈RTx [k] ∧ ∈/try x(RTx | | | | |
| 6 | HR to LR | (Tx, Ty) ∈ (HR, LR) V ∀(RTx | )AlTy (RTx ; RTy ) = 1 | Value Substitute |
| ,RTy | Tx | | | |
| V tren x (RTx ) ̸= tren y (RTy ) 7→ RTy ← try x(RTx ) | | | | |
| 7 | # Rows | |Tx| >> |Ty| V ∀(RTx | )AlTy (RTx ; RTy ) = 1 V tren x (RTx ) ̸= tren y (RTy ) | Value Substitute |
| ,RTy | Tx | | | |
| 7→ RTy ← try x(RTx ) | | | | |
| ,RarKeys)AlTy (RTx ; RTy ) = 1 V tren | | | | |
| 8 | Rare Keys | ∀(RTx | x (Rtx ) ̸= tren y (Rty ) | Value Substitute |
| ,RTy | Tx | | | |
| V ∀(RTx ,RTy |exKey(RTx ) ∈ RarKey| > |exKey(RTy ) ∈ RarKey| 7→ RTy ← RTx | | | | |
Table 3: **Logical Rules for Information Updation**. Notation:- Tz represents a table in language z, RTz represents a row of the table. In RTz[k] = v, k,v represent key and value pair. For RTz[k] = V , V denotes value list mapped to a key k. AlTy Tx
(.; .) represents the alignment mapping between two tables Ty and Tx. Translation between two languages(p and q) is represented by trp q (.). *exKey* extract key from a table row. *isTime* is true if the row has time entry. *exTime* extract time from table row. *PosTrend/NegTrend* represent list of keys whose value always increase or decrease with time. *RarKey* represent set of keys are least frequent in the corpora.
| Method | Match | UnMatch | | | | | | |
|----------------------|-------------|------------------------------|-------------|---------|-------|-------|-------|-------|
| Ten ↔ Tx Tx ↔ Ty Ten | ∗←→ Thi Ten | ∗←→ Tzh Ten ↔ Tx Tx ↔ Ty Ten | ∗←→ Thi Ten | ∗←→ Tzh | | | | |
| SimCSE | 75.78 | 68.46 | 77.93 | 80.47 | 79.11 | 76.3 | 73.31 | 74.91 |
| LaBSE | 85.25 | 78.44 | 88.98 | 89.1 | 87.03 | 81.7 | 88.98 | 85.06 |
| mBERT-mp | 80.98 | 73.74 | 82.9 | 86.73 | 82.68 | 80.22 | 76.73 | 81.85 |
| XLM-R | 83.38 | 75.02 | 86.85 | 88.08 | 85.42 | 80.65 | 83.14 | 83.1 |
| MPNet | 82.85 | 78.63 | 86.08 | 87.58 | 84.2 | 83.45 | 83.14 | 83.76 |
| distill mBERT | 84.55 | 77.45 | 87.64 | 88.7 | 86.3 | 82.28 | 83.14 | 84.3 |
| Corpus-based | 61.86 | 56.74 | 57.34 | 69.33 | 70.51 | 71.73 | 54.01 | 63.11 |
| + Key Only | 70.41 | 62.14 | 73.4 | 74.67 | 73.85 | 73.52 | 62.49 | 66.23 |
| + Key-Val-Bi | 87.71 | 84.2 | 90.07 | 93.04 | 89.51 | 85.52 | 85.06 | 89.2 |
| + Key-Val-Uni | 87.89 | 84.33 | 90.34 | 93.12 | 89.52 | 85.42 | 85.16 | 88.62 |
| + Multi-Key | 87.91 | 84.36 | 90.14 | 92.8 | 89.3 | 85.46 | 84.98 | 88.15 |
Table 4: **Matched and UnMatch Score :** F1-Score for all test sets of INFOSYNC.
direction mapping constraint doesn't lead to significantly better alignments. The multi-key module, which considers one-to-many alignments, further improves the accuracy marginally. The reason for the marginal improvements is very few instances of one-to-many mappings.
For unmatch scores, we see similar results to match scores. The only significant difference is in key-only performance, where we observe a 0.5x performance improvement compared to match scores. We also analyze the precision-recall in Tables 17, 18, 19 and 20 of Appendix §A.3. We observe that the precision reduces and recall increases for match scores with module addition, whereas the reverse is true for unmatch scores. The number of alignments increases as we add more modules with relaxed constraints. This increases the number of incorrect alignments reducing the precision but increasing the recall. 8 Similarly, we can note the accuracy of unaligned rows increases because more incorrect alignments are added with relaxed constraints. We also report each module coverage in Appendix A.4. The performance of our proposed approach grouped by languages, category, and rows keys are detailed in Appendix A.5.
Error Analysis. Error analysis (cf §2.1) for 8There are more incorrect alignments N C2 compared to correct alignments which is O(n).
| Method | Ten ↔ Tx | Tx ↔ Ty | | | | | | | | | | |
|-------------------------------------------------------------------|------------|--------------------------|------------|-------|--------|--------|-----|--------|----|-----|-----|-----|
| OI | IR | SV | LV | UI | EL | OI | IR | SV | LV | UI | EL | |
| w/o Align | 298 | 286 | 22 | 158 | 388 | 118 | 245 | 226 | 33 | 146 | 486 | 148 |
| Corpus-based | 81 | 284 | 15 | 141 | 337 | 74 | 108 | 218 | 26 | 102 | 366 | 109 |
| +Key Only | 110 | 281 | 7 | 120 | 262 | 48 | 77 | 212 | 19 | 94 | 284 | 97 |
| +Key-Val-Bi | 75 | 232.33 | 6 | 35 | 108 | 8 | 44 | 197 | 15 | 28 | 60 | 18 |
| +Key-Val-Uni | 74 | 206.67 | 6 | 30 | 99 | 8 | 43 | 188 | 15 | 28 | 59 | 17 |
| +Multi-Key | 74 | 179.67 | 6 | 30 | 99 | 8 | 43 | 180.33 | 15 | 28 | 59 | 17 |
| Table 5: Error Analysis for Matched Score : Ten ↔ Tx and Tx ↔ Ty. | | | | | | | | | | | | |
| Method | Ten ↔ Tx | Tx ↔ Ty | | | | | | | | | | |
| Corpus-based | 157 | 245 | | | | | | | | | | |
| +Key Only | 422 | 343 | | | | | | | | | | |
| +Key-Value-Bi | 526 | 399 | | | | | | | | | | |
| +Key-Value-Uni | 572 | 415 | | | | | | | | | | |
| +Multi-Key | 619 | 437 | Type | Total | Accept | Reject | | | | | | |
| Row Transfer | 461 | 368(79.82%) | 93(20.17%) | | | | | | | | | |
| Value Substitution | 70 | 52(74.28%) | 18(25.72%) | | | | | | | | | |
| Append Value | 72 | 46(63.88%) | 26(36.12%) | | | | | | | | | |
| Total | 603 | 466 (77.28%) 136(22.72%) | | | | | | | | | | |
Table 6: **Error Analysis for UnMatch Score :** Total Unaligned mistakes for Ten ↔ Tx and Tx ↔ Ty.
matched and unmatched are reported in Table 5 and 6, respectively. Our proposed method works sequentially, relaxing constraints, and the number of falsely aligned rows increases with module addition (cf. Table 6). Different modules contribute unequally to unaligned mistakes, (25%, 56%) of the mistakes come from corpus-based module, (39%,
22%) from Key Only Module, (17%, 35%) from Key-Value-Bidirectional module, (7%, 4%) from Key-Value-uni-directional module, and (7.6%, 5%)
from multi-key alignment module, for Ten ↔ Tx and Tx ↔ Ty respectively. The corpus-based module is worst performing in Tx ↔ Ty because of difficulty in multilingual mapping. The key-only module is the worst performing in Ten ↔ Tx because it's the first relaxation in the algorithm. Further analysis of the error cases is in Appendix (§A.7).
## 5.3 Information Updation
Table 8: **Analysis of Human-Assisted Updates:** Accept/Reject rate of different types of edits for humanassisted Wikipedia infobox updates.
| Rules | Gold | Predicted | | | |
|--------------------------------------------|--------|-------------|------|-------|-------|
| Ten → Tx Tx → Ty Live Set Ten → Tx Tx → Ty | | | | | |
| R1 | 20320 | 18055 | 4213 | 21246 | 17675 |
| R2 | 648 | 502 | 207 | 1395 | 1852 |
| R3 | 546 | 399 | 75 | 443 | 347 |
| R4 | 142 | 151 | 4 | 120 | 147 |
| R5 | 3507 | 2116 | 784 | 3193 | 1960 |
| R6 | 5237 | 3047 | 332 | 5062 | 2891 |
| R7 | 2748 | 1899 | 990 | 2732 | 1855 |
| R8 | 25 | 77 | 5 | 29 | 82 |
| Al | 14967 | 9715 | 2851 | 14864 | 10657 |
Ln Pairs **Total Accept Reject**
Ten → Tx 204 161(78.92%) 43(21.07%) Tx → Ty 216 169(78.25%) 47(21.75%)
Tx → Ten 183 136(74.31%) 47(25.68%)
Total 603 466(77.28%) 137(22.71%)
the row addition rule accounts for the most updated, ∼64% of total updates for gold and predicted aligned table pairs. The flow of information from high resource to low resource accounts for ∼13%
of the remaining updates, whereas a high number of rows too low adds another 8% of the updates.
∼9% of the updates are done by the value updates rule. All the other rules combined give 8% of the remaining suggested updates. From the above results, most information gaps can be resolved by row transfer. The magnitude of rules like value updates and multi-key shows that table information needs to be synchronized regularly. Examples of edited infoboxes using the proposed algorithm are shown in Appendix Figures 4 and 5.
Table 8 reports a similar analysis for humanassisted Wikipedia infobox edits. We also report Wikipedia editors' accept/reject rate for the abovedeployed system in Table 9. We obtained an acceptance rate of 77.28% (as of May 2023), with the highest performance obtained when information flows across non-English languages. The lowest performance is obtained when the information flows from non-English to an English info box.
This highlights that our two-step procedure is effective in a real-world scenario. Examples of live updates are shown in Appendix Figures 6 and 7.
## 6 Related Works
Information Alignment. Multilingual Table attribute alignment has been previously addressed via supervised (Adar et al., 2009; Zhang et al., 2017; Ta and Anutariya, 2015) and unsupervised methods
(Bouma et al., 2009; Nguyen et al., 2011). Supervised methods trained classifiers on features extracted from multilingual tables. These features include cross-language links, text similarity, and schema features. Unsupervised methods made use of corpus statistics and template/schema matching for alignments. Other techniques by Jang et al.
(2016); Nguyen et al. (2018) focus on using external knowledge graphs such as DBpedia for the updation of Infoboxes or vice versa. In their experiments, most of these methods use less than three languages, and machine translation is rarely used. Additionally, we don't require manual feature curation for strong supervision. We study the problem more thoroughly with grouped analysis along languages, categories, and keys direction.
The works closest to our approach are Nguyen et al. (2011); Rinser et al. (2013), both of which use cross-language hyperlinks for feature or entity matching. Nguyen et al. (2011) uses translations before calculating text similarity. Utilizing crosslanguage links can provide a robust alignment supervision signal. In contrast to our approach, we do not use external knowledge or cross-language links for alignments. This additional information is rarely available for languages other than English.
Information Updation. Prior work for information updates (Iv et al., 2022; Spangher et al.,
2022; Panthaplackel et al., 2022; Zhang et al., 2020b,d) covers Wikipedia or news articles than semi-structured data like tables. Spangher et al.
(2022) studies the problem of updating multilingual news articles across different languages over 15 years. They classify the edits as addition, deletion, updates, and retraction. These were the primary intuitions behind our challenge classified in
§2.1. Iv et al. (2022) focused on automating article updates with new facts using large language models. Panthaplackel et al. (2022) focused on generating updated headlines when presented with new information. Some prior works also focus on the automatic classification of edits on Wikipedia for content moderation and review (Sarkar et al.,
2019; Daxenberger and Gurevych, 2013). Evening modeling editor's behavior for gauging collaborative editing and development of Wikipedia pages has been studied (Jaidka et al., 2021; Yang et al.,
2017). Other related works include automated sentence updation based on information arrival (Shah et al., 2020; Dwivedi-Yu et al., 2022). None of these works focus on tables, especially Wikipedia Infoboxes. Also, they fail to address multilingual aspects of information updation.
## 7 Conclusion And Future Work
Information synchronization is a common issue for semi-structured data across languages. Taking Wikipedia Infoboxes as our case study, we created INFOSYNC and proposed a two-step procedure that consists of alignment and updation. The alignment method outperforms baseline approaches with an F1-score greater than 85; the rule-based method received a 77.28 percent approval rate when suggesting updates to Wikipedia.
We identify the following future directions.
(a) *Beyond Infobox Synchronization.* While our technique is relatively broad, it is optimized for Wikipedia Infoboxes. We want to test whether the strategy applies to technical, scientific, legal, and medical domain tables (Wang et al., 2013; Gottschalk and Demidova, 2017). It will also be intriguing to widen the updating rules to include social, economic, and cultural aspects. (b) *Beyond* Pairwise Alignment. Currently, independent language pairs are considered for (bi) alignment. However, multiple languages can be utilized jointly for
(multi) alignment. (c) *Beyond Pairwise Updates.*
Similar to (multi) alignment, one can jointly update all language variants simultaneously. This can be done in two ways: (1.) *With English as pivot* language : To update across all languages. Here, English act as a central server with message passing. (2.) *Round-Robin Fashion:* where pairwise language updates between language pairs are transferred in a round-robin ring across all language pairs. In every update, we selected a leader similar to a leader election in distributed systems. (d) Joint Alignment and Updation. Even while our current approach is accurate, it employs a two-step process for synchronization, namely alignment followed by updating. We want to create rapid approaches aligning and updating in a single step. (e) Text for Updation: Our method doesn't consider Wikipedia articles for updating tables (Lange et al., 2010; Sáez and Hogan, 2018; Sultana et al., 2012).
## Limitations
We only consider 14 languages and 21 categories, whereas Wikipedia has pages in more than 300 languages and 200 broad categories. Increasing the scale and diversity will further improve method generalization. Our proposed method relies on the good multilingual translation of key and value from table pairs. Although we use key, value, and category together for better context, enhancement in table translation (Minhas et al., 2022) will benefit our approach. Because our rule-based system requires manual intervention, it has automation limits. Upgrading to completely automated methods based on a large language model may be advantageous. We are only considering updates for semi-structured tables. However, updating other page elements, such as images and article text, could also be considered. Although a direct expansion of our method to a multi-modal setting is complex (Suzuki et al.,
2012).
## Ethics Statement
We aimed to create a balanced, bias-free dataset regarding demographic and socioeconomic factors.
We picked a wide range of languages, even those with limited resources, and we also ensured that the categories were diversified. Humans curate the majority of information on Wikipedia. Using unrestricted automated tools for edits might result in biased information. For this reason, we adhere to the "human in the loop" methodology (Smith et al.,
2020) for editing Wikipedia. Additionally, we follow Wikipedia editing guidelines9, rule set10, and policies11 for all manual edits. Therefore, we ask the community to use our method only as a recommendation tool for revising Wikipedia. As a result, we ask that the community utilize INFOSYNC
strictly for scientific and non-commercial purposes from this point forward.
## Acknowledgements
We thank members of the Utah NLP group for their valuable insights and suggestions at various stages of the project, and reviewers for their helpful comments. Additionally, we appreciate the inputs provided by Vivek Srikumar and Ellen Riloff. Vivek Gupta acknowledges support from Bloomberg's Data Science Ph.D. Fellowship.
## References
Faheem Abbas, Muhammad Malik, Muhammad Rashid, and Rizwan Zafar. 2016. Wikiqa - a question answering system on wikipedia using freebase, dbpedia and infobox. pages 185–193.
Eytan Adar, Michael Skinner, and Daniel S. Weld. 2009.
Information arbitrage across multi-lingual wikipedia.
In *Proceedings of the Second ACM International Conference on Web Search and Data Mining*, WSDM '09, page 94–103, New York, NY, USA. Association for Computing Machinery.
Elisa Alonso and Bryan J. Robinson. 2016. Exploring translators' expectations of wikipedia: A qualitative review. *Procedia - Social and Behavioral Sciences*,
231:114–121. International Conference; Meaning in Translation: Illusion of Precision, MTIP2016, 11-13 May 2016, Riga, Latvia.
Gilbert Badaro and Paolo Papotti. 2022. Transformers for tabular data representation: A tutorial on models and applications. *Proc. VLDB Endow.*,
15(12):3746–3749.
Patti Bao, Brent Hecht, Samuel Carton, Mahmood Quaderi, Michael Horn, and Darren Gergle. 2012. Omnipedia: Bridging the wikipedia language gap.
In *Proceedings of the SIGCHI Conference on Human Factors in Computing Systems*, CHI '12, page 1075–1084, New York, NY, USA. Association for Computing Machinery.
Gosse Bouma, Sergio Duarte, and Zahurul Islam.
2009. Cross-lingual alignment and completion of Wikipedia templates. In Proceedings of the Third International Workshop on Cross Lingual Information Access: Addressing the Information Need of Multilingual Societies (CLIAWS3), pages 21–29, Boulder, Colorado. Association for Computational Linguistics.
Ewa S. Callahan and Susan C. Herring. 2011. Cultural bias in wikipedia content on famous persons. *J. Am.*
Soc. Inf. Sci. Technol., 62(10):1899–1915.
Qianglong Chen, Feng Ji, Xiangji Zeng, Feng-Lin Li, Ji Zhang, Haiqing Chen, and Yin Zhang. 2021a.
KACE: Generating knowledge aware contrastive explanations for natural language inference. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)*, pages 2516–2527, Online. Association for Computational Linguistics.
Wenhu Chen, Ming-Wei Chang, Eva Schlinger, William Yang Wang, and William W. Cohen. 2021b.
Open question answering over tables and text. In *International Conference on Learning Representations*.
Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, and William Yang Wang. 2020a. Tabfact: A large-scale dataset for table-based fact verification. In *International Conference on Learning Representations*.
Wenhu Chen, Hanwen Zha, Zhiyu Chen, Wenhan Xiong, Hong Wang, and William Yang Wang. 2020b. HybridQA: A dataset of multi-hop question answering over tabular and textual data. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 1026–1036, Online. Association for Computational Linguistics.
Zhiyu Chen, Wenhu Chen, Charese Smiley, Sameena Shah, Iana Borova, Dylan Langdon, Reema Moussa, Matt Beane, Ting-Hao Huang, Bryan Routledge, and William Yang Wang. 2021c. FinQA: A dataset of numerical reasoning over financial data. In *Proceedings* of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3697–3711, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Zhiyu Chen, Wenhu Chen, Hanwen Zha, Xiyou Zhou, Yunkai Zhang, Sairam Sundaresan, and William Yang Wang. 2020c. Logic2Text: High-fidelity natural language generation from logical forms. In *Findings* of the Association for Computational Linguistics:
EMNLP 2020, pages 2096–2111, Online. Association for Computational Linguistics.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. *CoRR*,
abs/1911.02116.
Johannes Daxenberger and Iryna Gurevych. 2013. Automatically classifying edit categories in Wikipedia revisions. In *Proceedings of the 2013 Conference on* Empirical Methods in Natural Language Processing, pages 578–589, Seattle, Washington, USA. Association for Computational Linguistics.
Xiang Deng, Huan Sun, Alyssa Lees, You Wu, and Cong Yu. 2022. Turl: Table understanding through representation learning. *SIGMOD Rec.*, 51(1):33–40.
Haoyu Dong, Zhoujun Cheng, Xinyi He, Mengyu Zhou, Anda Zhou, Fan Zhou, Ao Liu, Shi Han, and Dongmei Zhang. 2022. Table pretraining: A survey on model architectures, pretraining objectives, and downstream tasks. *arXiv preprint arXiv:2201.09745*.
Jane Dwivedi-Yu, Timo Schick, Zhengbao Jiang, Maria Lomeli, Patrick Lewis, Gautier Izacard, Edouard Grave, Sebastian Riedel, and Fabio Petroni. 2022.
Editeval: An instruction-based benchmark for text improvements. *arXiv preprint arXiv:2209.13331*.
Julian Eisenschlos, Syrine Krichene, and Thomas Müller. 2020. Understanding tables with intermediate pre-training. In *Findings of the Association*
for Computational Linguistics: EMNLP 2020, pages 281–296, Online. Association for Computational Linguistics.
Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2022. Language-agnostic BERT sentence embedding. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 878–891, Dublin, Ireland. Association for Computational Linguistics.
Sébastien Ferré. 2012. Squall: A controlled natural language for querying and updating rdf graphs. In International Workshop on Controlled Natural Language, pages 11–25. Springer.
Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021.
SimCSE: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894–6910, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Michael Glass, Mustafa Canim, Alfio Gliozzo, Saneem Chemmengath, Vishwajeet Kumar, Rishav Chakravarti, Avi Sil, Feifei Pan, Samarth Bharadwaj, and Nicolas Rodolfo Fauceglia. 2021. Capturing row and column semantics in transformer based question answering over tables. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1212–1224, Online.
Association for Computational Linguistics.
Simon Gottschalk and Elena Demidova. 2017. Multiwiki: Interlingual text passage alignment in wikipedia. *ACM Trans. Web*, 11(1).
Vivek Gupta, Riyaz A. Bhat, Atreya Ghosal, Manish Shrivastava, Maneesh Singh, and Vivek Srikumar.
2022a. Is my model using the right evidence? systematic probes for examining evidence-based tabular reasoning. *Transactions of the Association for Computational Linguistics*, 10:659–679.
Vivek Gupta, Maitrey Mehta, Pegah Nokhiz, and Vivek Srikumar. 2020. INFOTABS: Inference on tables as semi-structured data. In *Proceedings of the 58th* Annual Meeting of the Association for Computational Linguistics, pages 2309–2324, Online. Association for Computational Linguistics.
Vivek Gupta, Shuo Zhang, Alakananda Vempala, Yujie He, Temma Choji, and Vivek Srikumar. 2022b.
Right for the right reason: Evidence extraction for trustworthy tabular reasoning. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3268–3283, Dublin, Ireland. Association for Computational Linguistics.
Jonathan Herzig, Thomas Müller, Syrine Krichene, and Julian Eisenschlos. 2021. Open domain question answering over tables via dense retrieval. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 512–519, Online. Association for Computational Linguistics.
Jonathan Herzig, Pawel Krzysztof Nowak, Thomas Müller, Francesco Piccinno, and Julian Eisenschlos. 2020. TaPas: Weakly supervised table parsing via pre-training. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 4320–4333, Online. Association for Computational Linguistics.
Hiroshi Iida, Dung Thai, Varun Manjunatha, and Mohit Iyyer. 2021. TABBIE: Pretrained representations of tabular data. In *Proceedings of the 2021 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3446–3456, Online. Association for Computational Linguistics.
Robert Iv, Alexandre Passos, Sameer Singh, and MingWei Chang. 2022. FRUIT: Faithfully reflecting updated information in text. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3670–3686, Seattle, United States. Association for Computational Linguistics.
Mohit Iyyer, Wen-tau Yih, and Ming-Wei Chang. 2017.
Search-based neural structured learning for sequential question answering. In *Proceedings of the 55th* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1821–
1831, Vancouver, Canada. Association for Computational Linguistics.
Kokil Jaidka, Andrea Ceolin, Iknoor Singh, Niyati Chhaya, and Lyle Ungar. 2021. WikiTalkEdit: A
dataset for modeling editors' behaviors on Wikipedia.
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2191–2200, Online. Association for Computational Linguistics.
Saemi Jang, Mun Yong Yi, et al. 2016. Utilization of dbpedia mapping in cross lingual wikipedia infobox completion. In *Australasian Joint Conference on* Artificial Intelligence, pages 303–316. Springer.
Divyansh Kaushik, Eduard Hovy, and Zachary Lipton.
2020. Learning the difference that makes a difference with counterfactually-augmented data. In *International Conference on Learning Representations*.
Aneta Koleva, Martin Ringsquandl, and Volker Tresp.
Analysis of the attention in tabular language models.
In *NeurIPS 2022 First Table Representation Workshop*.
Dustin Lange, Christoph Böhm, and Felix Naumann.
2010. Extracting structured information from wikipedia articles to populate infoboxes. In Proceedings of the 19th ACM International Conference on Information and Knowledge Management, CIKM
'10, page 1661–1664, New York, NY, USA. Association for Computing Machinery.
Xi Victoria Lin, Richard Socher, and Caiming Xiong.
2020. Bridging textual and tabular data for crossdomain text-to-SQL semantic parsing. In Findings of the Association for Computational Linguistics:
EMNLP 2020, pages 4870–4888, Online. Association for Computational Linguistics.
Bhavnick Minhas, Anant Shankhdhar, Vivek Gupta, Divyanshu Aggarwal, and Shuo Zhang. 2022. XInfoTabS: Evaluating multilingual tabular natural language inference. In *Proceedings of the Fifth Fact Extraction and VERification Workshop (FEVER)*, pages 59–77, Dublin, Ireland. Association for Computational Linguistics.
Thomas Müller, Julian Eisenschlos, and Syrine Krichene. 2021. TAPAS at SemEval-2021 task 9: Reasoning over tables with intermediate pre-training. In *Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)*, pages 423–430, Online. Association for Computational Linguistics.
Linyong Nan, Chiachun Hsieh, Ziming Mao, Xi Victoria Lin, Neha Verma, Rui Zhang, Wojciech Krysci ´ nski, ´
Hailey Schoelkopf, Riley Kong, Xiangru Tang, Mutethia Mutuma, Ben Rosand, Isabel Trindade, Renusree Bandaru, Jacob Cunningham, Caiming Xiong, Dragomir Radev, and Dragomir Radev. 2022.
FeTaQA: Free-form table question answering. *Transactions of the Association for Computational Linguistics*, 10:35–49.
Linyong Nan, Dragomir Radev, Rui Zhang, Amrit Rau, Abhinand Sivaprasad, Chiachun Hsieh, Xiangru Tang, Aadit Vyas, Neha Verma, Pranav Krishna, Yangxiaokang Liu, Nadia Irwanto, Jessica Pan, Faiaz Rahman, Ahmad Zaidi, Mutethia Mutuma, Yasin Tarabar, Ankit Gupta, Tao Yu, Yi Chern Tan, Xi Victoria Lin, Caiming Xiong, Richard Socher, and Nazneen Fatema Rajani. 2021. DART: Opendomain structured data record to text generation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 432–447, Online. Association for Computational Linguistics.
J. Neeraja, Vivek Gupta, and Vivek Srikumar. 2021.
Incorporating external knowledge to enhance tabular reasoning. In *Proceedings of the 2021 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2799–2809, Online. Association for Computational Linguistics.
Nhu Nguyen, Dung Cao, and Anh Nguyen. 2018. Automatically mapping wikipedia infobox attributes to dbpedia properties for fast deployment of vietnamese dbpedia chapter. In Asian Conference on Intelligent Information and Database Systems, pages 127–136.
Springer.
Thanh Nguyen, Viviane Moreira, Huong Nguyen, Hoa Nguyen, and Juliana Freire. 2011. Multilingual schema matching for wikipedia infoboxes. *Proceedings of the VLDB Endowment*, 5(2).
Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Schlichtkrull, Sonal Gupta, Yashar Mehdad, and Scott Yih. 2022.
UniK-QA: Unified representations of structured and unstructured knowledge for open-domain question answering. In *Findings of the Association for Computational Linguistics: NAACL 2022*, pages 1535–1546, Seattle, United States. Association for Computational Linguistics.
Sheena Panthaplackel, Adrian Benton, and Mark Dredze. 2022. Updated headline generation: Creating updated summaries for evolving news stories.
In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 6438–6461, Dublin, Ireland.
Association for Computational Linguistics.
Ankur Parikh, Xuezhi Wang, Sebastian Gehrmann, Manaal Faruqui, Bhuwan Dhingra, Diyi Yang, and Dipanjan Das. 2020. ToTTo: A controlled table-to-text generation dataset. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 1173–1186, Online. Association for Computational Linguistics.
Panupong Pasupat and Percy Liang. 2015a. Compositional semantic parsing on semi-structured tables. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1470–
1480, Beijing, China. Association for Computational Linguistics.
Panupong Pasupat and Percy Liang. 2015b. Compositional semantic parsing on semi-structured tables. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1470–
1480, Beijing, China. Association for Computational Linguistics.
Aniket Pramanick and Indrajit Bhattacharya. 2021.
Joint learning of representations for web-tables, entities and types using graph convolutional network.
In *Proceedings of the 16th Conference of the European Chapter of the Association for Computational* Linguistics: Main Volume, pages 1197–1206, Online.
Association for Computational Linguistics.
JOSEPH REAGLE and LAUREN RHUE. 2011. Gender bias in wikipedia and britannica. International Journal of Communication, 5:1138–1158.
Nils Reimers and Iryna Gurevych. 2019a. Sentencebert: Sentence embeddings using siamese bertnetworks. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
Nils Reimers and Iryna Gurevych. 2019b. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.
Nils Reimers and Iryna Gurevych. 2020a. Making monolingual sentence embeddings multilingual using knowledge distillation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4512–4525, Online. Association for Computational Linguistics.
Nils Reimers and Iryna Gurevych. 2020b. Making monolingual sentence embeddings multilingual using knowledge distillation. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4512–4525, Online. Association for Computational Linguistics.
Daniel Rinser, Dustin Lange, and Felix Naumann. 2013.
Cross-lingual entity matching and infobox alignment in wikipedia. *Information Systems*, 38(6):887–907.
Dwaipayan Roy, Sumit Bhatia, and Prateek Jain. 2020.
A topic-aligned multilingual corpus of Wikipedia articles for studying information asymmetry in low resource languages. In *Proceedings of the Twelfth Language Resources and Evaluation Conference*, pages 2373–2380, Marseille, France. European Language Resources Association.
Ohad Rozen, Vered Shwartz, Roee Aharoni, and Ido Dagan. 2019. Diversify your datasets: Analyzing generalization via controlled variance in adversarial datasets. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL),
pages 196–205, Hong Kong, China. Association for Computational Linguistics.
Tomás Sáez and Aidan Hogan. 2018. Automatically generating wikipedia info-boxes from wikidata. In Companion Proceedings of the The Web Conference 2018, WWW '18, page 1823–1830, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. *arXiv* preprint arXiv:1910.01108.
Soumya Sarkar, Bhanu Prakash Reddy, Sandipan Sikdar, and Animesh Mukherjee. 2019. StRE: Self attentive edit quality prediction in Wikipedia. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3962–3972, Florence, Italy. Association for Computational Linguistics.
Darsh Shah, Tal Schuster, and Regina Barzilay. 2020.
Automatic fact-guided sentence modification. *Proceedings of the AAAI Conference on Artificial Intelligence*, 34(05):8791–8798.
Darsh J. Shah, Tal Schuster, and Regina Barzilay. 2019.
Automatic fact-guided sentence modification. *CoRR*,
abs/1909.13838.
Abhilash Shankarampeta, Vivek Gupta, and Shuo Zhang. 2022. Enhancing tabular reasoning with pattern exploiting training. In *Proceedings of the 2nd* Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 706–726, Online only. Association for Computational Linguistics.
Tianze Shi, Chen Zhao, Jordan Boyd-Graber, Hal Daumé III, and Lillian Lee. 2020. On the potential of lexico-logical alignments for semantic parsing to SQL queries. In *Findings of the Association* for Computational Linguistics: EMNLP 2020, pages 1849–1864, Online. Association for Computational Linguistics.
C Estelle Smith, Bowen Yu, Anjali Srivastava, Aaron Halfaker, Loren Terveen, and Haiyi Zhu. 2020. Keeping community in the loop: Understanding wikipedia stakeholder values for machine learning-based systems. In *Proceedings of the 2020 CHI Conference on* Human Factors in Computing Systems, pages 1–14.
Gowthami Somepalli, Micah Goldblum, Avi Schwarzschild, C Bayan Bruss, and Tom Goldstein. 2021. Saint: Improved neural networks for tabular data via row attention and contrastive pre-training.
arXiv preprint arXiv:2106.01342.
Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and TieYan Liu. 2020. MPNet: Masked and permuted pretraining for language understanding. In Advances in Neural Information Processing Systems, volume 33, pages 16857–16867. Curran Associates, Inc.
Alexander Spangher, Xiang Ren, Jonathan May, and Nanyun Peng. 2022. NewsEdits: A news article revision dataset and a novel document-level reasoning challenge. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 127–157, Seattle, United States.
Association for Computational Linguistics.
Afroza Sultana, Quazi Mainul Hasan, Ashis Kumer Biswas, Soumyava Das, Habibur Rahman, Chris Ding, and Chengkai Li. 2012. Infobox suggestion for wikipedia entities. In *Proceedings of the 21st ACM*
International Conference on Information and Knowledge Management, CIKM '12, page 2307–2310, New
York, NY, USA. Association for Computing Machinery.
Huan Sun, Hao Ma, Xiaodong He, Wen-tau Yih, Yu Su, and Xifeng Yan. 2016. Table cell search for question answering. In Proceedings of the 25th International Conference on World Wide Web, pages 771–782.
Yu Suzuki, Yuya Fujiwara, Yukio Konishi, and Akiyo Nadamoto. 2012. Good quality complementary information for multilingual wikipedia. In International Conference on Web Information Systems Engineering, pages 185–198. Springer.
Thang Hoang Ta and Chutiporn Anutariya. 2015. A
model for enriching multilingual wikipedias using infobox and wikidata property alignment. In *Joint International Semantic Technology Conference*, pages 335–350. Springer.
Ramine Tinati, Paul Gaskell, Thanassis Tiropanis, Olivier Phillipe, and Wendy Hall. 2014. Examining wikipedia across linguistic and temporal borders.
In Proceedings of the 23rd International Conference on World Wide Web, WWW '14 Companion, page 445–450, New York, NY, USA. Association for Computing Machinery.
Mohamed Trabelsi, Zhiyu Chen, Shuo Zhang, Brian D.
Davison, and Jeff Heflin. 2022. Strubert: Structureaware bert for table search and matching. In *Proceedings of the ACM Web Conference 2022*, WWW '22, page 442–451, New York, NY, USA. Association for Computing Machinery.
Zhigang Wang, Zhixing Li, Juanzi Li, Jie Tang, and Jeff Z. Pan. 2013. Transfer learning based crosslingual knowledge extraction for Wikipedia. In *Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long* Papers), pages 641–650, Sofia, Bulgaria. Association for Computational Linguistics.
Morten Warncke-Wang, Anuradha Uduwage, Zhenhua Dong, and John Riedl. 2012. In search of the urwikipedia: Universality, similarity, and translation in the wikipedia inter-language link network. In *Proceedings of the Eighth Annual International Symposium on Wikis and Open Collaboration*, WikiSym
'12, New York, NY, USA. Association for Computing Machinery.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Wenhan Xiong, Jingfei Du, William Yang Wang, and Veselin Stoyanov. 2020. Pretrained encyclopedia:
Weakly supervised knowledge-pretrained language model. In *International Conference on Learning* Representations.
Diyi Yang, Aaron Halfaker, Robert Kraut, and Eduard Hovy. 2017. Identifying semantic edit intentions from revisions in Wikipedia. In *Proceedings of the* 2017 Conference on Empirical Methods in Natural Language Processing, pages 2000–2010, Copenhagen, Denmark. Association for Computational Linguistics.
Jingfeng Yang, Aditya Gupta, Shyam Upadhyay, Luheng He, Rahul Goel, and Shachi Paul. 2022.
TableFormer: Robust transformer modeling for tabletext encoding. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 528–537, Dublin, Ireland. Association for Computational Linguistics.
Pengcheng Yin, Graham Neubig, Wen-tau Yih, and Sebastian Riedel. 2020. TaBERT: Pretraining for joint understanding of textual and tabular data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8413–8426, Online. Association for Computational Linguistics.
Ori Yoran, Alon Talmor, and Jonathan Berant. 2022.
Turning tables: Generating examples from semistructured tables for endowing language models with reasoning skills. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6016–6031, Dublin, Ireland. Association for Computational Linguistics.
Tao Yu, Chien-Sheng Wu, Xi Victoria Lin, Bailin Wang, Yi Chern Tan, Xinyi Yang, Dragomir R. Radev, Richard Socher, and Caiming Xiong. 2021. Grappa: Grammar-augmented pre-training for table semantic parsing. In International Conference of Learning Representation.
Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3911–3921, Brussels, Belgium. Association for Computational Linguistics.
Vicky Zayats, Kristina Toutanova, and Mari Ostendorf.
2021. Representations for question answering from documents with tables and text. In *Proceedings of the* 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2895–2906, Online. Association for Computational Linguistics.
Hongzhi Zhang, Yingyao Wang, Sirui Wang, Xuezhi Cao, Fuzheng Zhang, and Zhongyuan Wang. 2020a.
Table fact verification with structure-aware transformer. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing
(EMNLP), pages 1624–1629, Online. Association for Computational Linguistics.
Li Zhang, Shuo Zhang, and Krisztian Balog. 2019. Table2vec: Neural word and entity embeddings for table population and retrieval. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR'19, page 1029–1032, New York, NY, USA.
Association for Computing Machinery.
Shuo Zhang and Krisztian Balog. 2020a. Web table extraction, retrieval, and augmentation: A survey. ACM
Transactions on Intelligent Systems and Technology
(TIST), 11(2):1–35.
Shuo Zhang and Krisztian Balog. 2020b. Web table extraction, retrieval, and augmentation: A survey.
ACM Trans. Intell. Syst. Technol., 11(2).
Shuo Zhang, Krisztian Balog, and Jamie Callan. 2020b.
Generating categories for sets of entities. CIKM '20, page 1833–1842, New York, NY, USA. Association for Computing Machinery.
Shuo Zhang, Zhuyun Dai, Krisztian Balog, and Jamie Callan. 2020c. Summarizing and exploring tabular data in conversational search. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '20, page 1537–1540, New York, NY, USA.
Association for Computing Machinery.
Shuo Zhang, Edgar Meij, Krisztian Balog, and Ridho Reinanda. 2020d. Novel entity discovery from web tables. In *Proceedings of The Web Conference 2020*,
WWW '20, pages 1298–1308.
Yan Zhang, Thomas Paradis, Lei Hou, Juanzi Li, Jing Zhang, and Haitao Zheng. 2017. Cross-lingual infobox alignment in wikipedia using entity-attribute factor graph. In *International Semantic Web Conference*, pages 745–760. Springer.
Fengbin Zhu, Wenqiang Lei, Youcheng Huang, Chao Wang, Shuo Zhang, Jiancheng Lv, Fuli Feng, and TatSeng Chua. 2021. TAT-QA: A question answering benchmark on a hybrid of tabular and textual content in finance. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3277–3287, Online. Association for Computational Linguistics.
## A Appendix A.1 Table Extraction Details
Table formats and HTML code styles differ from one language to another and even across categories in the same language. Extraction is modified to
| Category Entities std_dev Category | Entities std_dev | | | | |
|--------------------------------------|--------------------|------|----------|------|-------|
| Airport | 2563 | 5.03 | Country | 259 | 10.28 |
| Album | 840 | 3.81 | Diseases | 462 | 4.20 |
| Animal | 368 | 3.37 | Food | 692 | 4.34 |
| Athlete | 369 | 5.80 | Medicine | 334 | 9.58 |
| Book | 218 | 5.24 | Monument | 203 | 5.23 |
| City | 262 | 7.95 | Movie | 1524 | 6.75 |
| College | 202 | 5.83 | Musician | 284 | 5.09 |
| Company | 267 | 6.87 | Nobel | 967 | 5.29 |
| Painting | 743 | 3.51 | Stadium | 742 | 5.86 |
| Person | 198 | 6.32 | Shows | 1044 | 6.83 |
| Planet | 188 | 8.46 | | | |
| C1 | Row Diff | C1 | Row Diff |
|------|------------|------|------------|
| af | 5.28 | hi | 5.06 |
| ar | 5.84 | ko | 4.30 |
| ceb | 3.33 | nl | 3.86 |
| de | 5.96 | ru | 4.1 |
| en | 4.80 | sv | 3.92 |
| es | 5.17 | tr | 4.23 |
| fr | 4.42 | zh | 4.76 |
handle these variations, which requires the following steps: (a) *Detecting Infoboxes:* We locate Wikipedia infoboxes that appear in at least five languages. (b) *Extracting HTML:* After detection, we extract HTML and preprocess to remove images, links, and signatures. (c) *Table Representation:* we convert the extracted table and store them in JSON.
Row Difference Across Paired Languages: There is substantial variation in the number of rows for infobox across different languages, i.e., rows difference = 1 |L| Pln∈L\C1||Rc1*| − |*Rln||, where L
is set of all 14 languages under consideration. Table 11 shows that German followed by Arabic and Afrikaans, has the highest row difference. This indicates that tables in these languages are incomplete
(with missing rows).
## A.2 Table Updation Examples
An example of table updation is shown in the Figure 3.
## A.3 Precision And Recall
We also evaluated precision-recall values in information alignment for matched and unmatched scores (§5.2). Precision recall values for Ten ↔ Tx, Tx ↔ Ty, Ten∗←→ Thi and Ten∗←→ Tzh are reported
## In Tables 17, 18, 19, And 20, Respectively. A.4 Algorithm Coverage
We measure the coverage on the entire corpus, the rate of rows aligned w.r.t. the smaller table in a table pair. Table 12 reports ablations results of coverage for various modules. Our proposed method aligns 72.54% and 67.96% of rows for Ten ↔ Tx and Tx ↔ Ty, respectively. Corpus-based is the most constrained module, focusing more on precision; hence removing corpus-based gives better coverage for both cases. Key-Only-Unidirectional is the most important module for coverage, followed by the Key-Only module for both cases.
## A.5 Domain And Language Wise Analysis
Table 13, 14, and 15 show the performance of our proposed method grouped by languages, domains, and keys, respectively.
Group-wise Analysis. From Table 13, for Ten ↔
Tx, Cebuano, Arabic, German, and Dutch are the worst performing languages with F1-score close to 85 for alignment. Whereas Turkish, Chinese, and Hindi have F1-score greater than 90. Korean, German, and Swedish are the lowest-performing language groups, with an F1-Score close to 86 for unaligned settings. Cebuano, Turkish, and Dutch get the highest score for unaligned metrics (greater than 90). For non-English language pairs, the lowest F1-score for match table pairs is observed for German-Arabic and Hindi-Korean pairs with an F1-score close to 78, as shown in Table 13. The highest F1-score is observed for Russian-German and Hindi-German, with F1-scores exceeding 88.8.
For unmatched data, Korean-Hindi, French-Hindi, French-Korean, and Russian-Korean pairs have the lowest F1 scores, less than 85. In contrast, GermanHindi and Russion-German have exceeded the unaligned F1-Score of 90.
Category-wise Analysis. As reported in Table 14, our method performs worst in Airport and College categories for match settings when one of the languages is English. For non-English match settings, Movie and City are the worst-performing categories. For unmatch setting with English as one of the languages, Airport and Painting have the lowest F1-score, whereas Movie and Stadium have the most inferior performance for non-English languages.
Key-wise Analysis. Table 15 shows the average F1-scores across tables for frequent and nonfrequent keys. We observed an F1-score degrada-
![16_image_0.png](16_image_0.png)
| Ablation | Ten ↔ T 3 | | | |
|------------|-------------|--------|----------|-------|
| Corpus | Key | K-V-Bi | K-V-Uni | Multi |
| 16.28 | 40.83 | 58.17 | 71.95 | 72,54 |
| w/o | | | | |
| Corpus | 3.15 | 50.17 | 74.69 | 75.3 |
| 16.28 | | | | |
| Key | 57.88 | 71.14 | 71.6 | |
| K-V-Bi | 16.28 | 38.88 | 71.9 | 72.3 |
| K-V-Uni | 16.28 | 46.19 | 21.34 | 67.53 |
| 72.23 | | | | |
| Multi | 16.28 | 40.96 | 58.58.58 | |
| Corpus |
|----------|
| 17.15 |
| 17.15 |
| 17.15 |
| 17.15 |
| 17.15 |
## A.7 Further Details: Error Analysis
compared to frequent keys.
## Ablation Study A.6
We report ablation performance to highlight the significance of each module in Table 16. Key-Value-
Bidirectional mapping (two-way) is the most critical module, followed by Key Only corpus-based modules. We also observe Uni-directional mapping being the second most important for non-English alignments.
The multi-key module was consistently was least significant for the same reason as the discussion above (very few instances). Similar observations were valid for unmatching scores.
We discussed challenges to table information synchronization across languages in § 2.1.
Table 5
(main paper) shows the number of instances of these challenges in evaluation for matched cases after applying various modules of the alignment algorithm.
- Corpus-based module solves approximately
(40%, 56%) of outdated information,(31%, 21%) of schema variation, (10%, 30%) of language variation, (13%, 25%) of unnormalized information and (37%, 26%) of erroneous entity linking challenges in T en ↔ T x and T x ↔
Ty , respectively.
| T x → T | | | |
|-------------|--------|---------|-------|
| Key | K-V-Bi | K-V-Uni | Multi |
| 39.78 | 57.53 | 67.58 | 67.96 |
| 25.04 | 49.82 | 64.98 | 65.41 |
| 55.05 | 64.6 | 65.1 | |
| 37.83 | 70.32 | 70.58 | |
| 40.19 | 62.91 | 63.59 | |
| 67.13 | | | |
| 36.36 | 55.03 | | |
| Ten ↔ Tx Match UnMatch Tx ↔ Ty Match UnMatch af 88.08 89.48 de ↔ hi 88.85 90.4 ar 85.24 88.77 de ↔ ko 85.27 88.7 ceb 85.17 91.07 fr ↔ ar 85.35 87.21 de 85.41 86.65 fr ↔ de 84.97 88.94 es 89.83 89.7 fr ↔ hi 83.95 84.58 fr 89.41 89.8 fr ↔ ko 83.59 84.36 hi 90.56 87.07 fr ↔ ru 87.63 88.83 ko 85.69 86.22 hi ↔ ar 84.33 89.38 nl 86.4 90.28 ko ↔ ar 82.18 89.08 ru 87.46 88.54 ko ↔ hi 78.8 83.03 sv 84.89 86.76 ru ↔ ar 82.18 86.96 tr 92.07 91.3 ru ↔ de 89.93 91.92 zh 91.61 89.31 ru ↔ hi 82.38 87.78 ru ↔ ko 81.62 84.47 de ↔ ar 78.05 87.23 |
|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
- Further adding of key only similarity module resolves extra (24%, 13%) of outdated information, (36%, 21%) of schema variation,
(13%, 5%) of language variation, (19%, 17%)
of unnormalized information and (22%, 8%)
of erroneous entity linking challenges in Ten
↔ Tx and Tx ↔ Ty, respectively.
- Applying key-value-bidirectional module resolves another (12%, 13.5%) of outdated information, (18%, 6.6%) of information representation, 54%, 45% of language variation, (40%,
46%) of unnormalized information and (34%,
53%) of erroneous entity linking challenges in Ten ↔ Tx and Tx ↔ Ty, respectively.
- Key-Val-Unidirectional and Multi-key together solves another (18.5%, 7.5%) of the information representation in Ten ↔ Tx and Tx ↔ Ty, respectively, but are not effective against other challenges.
## A.8 Other Related Work
Tabular Reasoning. Addressing NLP tasks on semi-structured tabular data has received substantial attention. There is work on tabular NLI (Gupta et al., 2020; Chen et al., 2020a; Gupta et al., 2022b), question-answering task (Zhang and Balog, 2020b; Zhu et al., 2021; Pasupat and Liang, 2015a; Abbas et al., 2016; Sun et al., 2016; Chen et al., 2021a, 2020b; Lin et al., 2020; Zayats et al., 2021; Oguz et al., 2022, and others) and table-to-text generation (Zhang et al., 2020c; Parikh et al., 2020; Nan et al., 2021; Yoran et al., 2022; Chen et al., 2021b).
| Ten ↔ Tx | Tx ↔ Ty | | | |
|------------|-----------------------------|-------|-------|-------|
| Category | Match UnMatch Match UnMatch | | | |
| Airport | 79.77 | 82.64 | 85.79 | 90.9 |
| Album | 93.9 | 91.33 | 88.6 | 85.01 |
| Animal | 93.79 | 94.2 | 90 | 96.24 |
| Athlete | 86.6 | 90.21 | 83.75 | 88.81 |
| Book | 86.48 | 90.96 | 81.29 | 83.13 |
| City | 86.14 | 93.67 | 77.4 | 86.6 |
| College | 82.47 | 87.53 | 81.05 | 86.24 |
| Company | 87.49 | 85.15 | 85.5 | 86.7 |
| Country | 86.38 | 92.47 | 86.53 | 92.32 |
| Food | 88.58 | 90.04 | 85.65 | 91.67 |
| Monument | 84.86 | 86.14 | 87.66 | 89.6 |
| Movie | 91.2 | 85.7 | 74.33 | 76.19 |
| Musician | 89.47 | 85.62 | 89.04 | 93.27 |
| Nobel | 88.2 | 91.08 | 88.84 | 87.1 |
| Painting | 90.27 | 82.35 | 86.52 | 89.72 |
| Person | 87.37 | 87.79 | 79.85 | 87.74 |
| Planet | 90.93 | 85.77 | 85.01 | 87.18 |
| Shows | 91.23 | 88.89 | 83.65 | 78.84 |
| Stadium | 88.59 | 87.72 | 83.2 | 77.38 |
Table 14: **Category Wise Analysis** :- Alignment F1score reported for same group entities average over all languages.
Key Freq Range **# of Keys (all) Avg Score**
![17_image_0.png](17_image_0.png)
![17_image_1.png](17_image_1.png)
![17_image_2.png](17_image_2.png)
![17_image_3.png](17_image_3.png)
High 100 ≤ x 33 90.71 Mid 50 ≤ x ≤ 100 49 89.33 Low x ≤ 50 700 81.82
Table 15: **Key Wise Analysis**:- F1-Score report for grouped keys.
## Tabular Representation And Learning. There
are also several works representing Wikipedia tables, such papers are TAPAS (Herzig et al., 2020),
StrucBERT (Trabelsi et al., 2022), Table2vec
(Zhang et al., 2019), TaBERT (Yin et al., 2020),
TABBIE (Iida et al., 2021), TabStruc (Zhang et al.,
2020a), TabGCN (Pramanick and Bhattacharya, 2021), RCI (Glass et al., 2021), TURL (Deng et al., 2022), and TableFormer (Yang et al., 2022). Some papers such as (Yu et al., 2018, 2021; Eisenschlos et al., 2020; Neeraja et al., 2021; Müller et al., 2021; Somepalli et al., 2021; Shankarampeta et al., 2022; Dong et al., 2022, and others) study pre-training for tabular tasks. Paper related to tabular probing includes (Koleva et al.; Gupta et al., 2022a).
Tabular Datasets. There are several tabular task datasets on (a.) tabular NLI: (Gupta et al., 2020; Rozen et al., 2019; Müller et al., 2021; Kaushik et al., 2020; Xiong et al., 2020; Chen et al., 2020a; Eisenschlos et al., 2020; Chen et al., 2020c, and others); (b.) Tabular QA: WikiTableQA (Pasupat and Liang, 2015b), HybridQA (Chen et al., 2020b; Zayats et al., 2021; Oguz et al., 2022),WikiSQL
| Ablation | Match | UnMatch | | | | | | | | | |
|--------------|---------|-----------|---------|-------|---------|----------|---------|-------|---------|-----|---------|
| Ten ↔ Tx | Tx ↔ Ty | Ten | ∗←→ Thi | Ten | ∗←→ Tzh | Ten ↔ Tx | Tx ↔ Ty | Ten | ∗←→ Thi | Ten | ∗←→ Tzh |
| Corpus-based | 86.67 | 82.3 | 89.13 | 92.33 | 87.95 | 87.03 | 83.11 | 87.38 | | | |
| Key Only | 89 | 80.09 | 87.35 | 91.49 | 89.42 | 85.88 | 79.83 | 87.13 | | | |
| Key-Val-Bi | 84.98 | 75.39 | 86.95 | 90.41 | 86.39 | 82.06 | 80.48 | 84.4 | | | |
| Key-Val-Uni | 87.73 | 79.35 | 90 | 92.67 | 89.03 | 85.35 | 84.83 | 88.74 | | | |
| Multi-Key | 87.89 | 84.33 | - | - | 89.52 | 85.42 | - | - | | | |
| w/o | 87.91 | 84.36 | 90.14 | 92.8 | 89.03 | 85.46 | 84.98 | 88.17 | | | |
| Match | UnMatch | | | | | |
|-----------------|-----------|--------|-------|-----------|--------|-------|
| Alignment | Precision | Recall | F1 | Precision | Recall | F1 |
| Corpus-based | 93.51 | 46.22 | 61.86 | 55.66 | 96.17 | 70.51 |
| + Key Only | 88.09 | 58.62 | 70.4 | 60.75 | 94.16 | 73.85 |
| + Key-Value-Bi | 89.6 | 85.89 | 87.71 | 85.87 | 93.47 | 89.51 |
| + Key-Value-Uni | 89.3 | 86.52 | 87.89 | 86.24 | 93.07 | 89.52 |
| + Multi-Key | 88.85 | 86.99 | 87.91 | 86.51 | 92.27 | 89.3 |
Table 17: Ten ↔ Tx alignment performance on Human-Annotated Test Data
(Iyyer et al., 2017), SQUALL (Ferré, 2012; Shi et al., 2020), OpenTableQA (Chen et al., 2021b),
FinQA (Chen et al., 2021c), FeTaQA (Nan et al.,
2022), TAT-QA (Zhu et al., 2021), SQA (Iyyer et al., 2017), NQ-Tables (Herzig et al., 2021);
(c.) and Table Generation: ToTTo (Parikh et al.,
2020), Turing Tables (Yoran et al., 2022), LogicNLG (Chen et al., 2020c).
Furthermore, there are also several works discussed on web table extraction, retrieval, and augmentation (Zhang and Balog, 2020a), and utilizing the transformers model for table representation
(Badaro and Papotti, 2022).
| English Infobox | |
|---------------------------------------|----------------------------------------------------------------------|
| Artist | John Singer Sargent |
| Year | 1884 |
| Medium | Oil on canvas |
| Dimensions | 234.95 cm × 109.86 cm (92.5 in × 43.25 in) |
| Metropolitan Museum of Art, Manhattan | |
| Location | |
| Website | Madame X (Madame Pierre Gautreau) |
| Spanish Infobox | |
| Año | 1883–1885 |
| Autor | John Singer Sarper |
| Técnica | Oleo solare tela. |
| Tamaño | 234,95 cm × 109,86 cm |
| Localización | Museo Metropolitano de Arte, Manhattan, Nueva York, | Estados Unidos |
| Pais de origen | Estados Unidos |
| Artist | John Singer Sarg |
|-----------------------------------|--------------------------------------------|
| Year | 1884 |
| Medium | Oil on canvas |
| Dimensions | 234.95 cm × 109.86 cm (92.5 in × 43.25 in) |
| Location | Metropolitan Museum of Art, Manhattan |
| Madame X (Madame Pierre Gautreau) | |
| Website | |
| Country of origin | United States |
| Update Spanish Infobox | |
| Ano | 1884 |
| Autor | John Singer Sargent |
| Técnica | Óleo sobre tela |
| Tamaño | 234.95 cm × 109.86 cm (92.5 in × 43.25 in) |
| Localización | Museo Metropolitano de Arte, Manhattan |
| País de origen | Estados Unidos |
| Sitio web | Madame X (Madame Pierre Gautreau) |
Updated English Infobox
![19_image_0.png](19_image_0.png)
| English Infobox | Updated English Infobox | | |
|-----------------------------------------------------------------------|------------------------------------------------------------------|------------------------------------------------------|-----------------------------|
| Location | Dismoviand Resort, 1313 Dismoviand De
Arabeim, California, United States | | |
| Disneyland Resort, 1313 Disneyland Dr, Anaheim, California, United | 3'49'N | 117'55W | / | 33.81'N 117.92'W | Coordinates | 33'49'N | 117.92'W | / | 33.81'1
117.92"W | | |
| pordinates | 3"49"N | 117"55W| / | 33.81"N 117.92"W | Coordinates | 33"49"N | | | |
| Theme | Fairy tales and Disney character | Fairy tales and Disney char | |
| Slogan | The happiest place on earth | Slogan | The hoppiest place on earth |
| Disney Parks, Expe | ness and Products | (The Walt Disney Co | Disney Pirks, Experiences and Products | (The
Walt Disney Company) | |
| Operated by | Disneyland Resor | Disneyland Resort | |
| Opened | July 17, 1955 | ; 66 years ago | July 17, 1955 | ; 66 years ago | |
| Opened | | | |
| Previous names | Disneyland (1955–1998) | eyland (1955–1998) | |
| Operating seaso | Operating sea | | |
| Website | Official web | Wbsite | dsneyland.disney.go.com |
| Status | Operating | Convating | |
| furiac | 34 ha (340,000 m | 2 | ) | | |
| Type of park | Therres park | | |
| ment | Total : 39 | Number of roller coasters : 4 | | | |
| Number of Visitors | 17.666 million | (2018) | | |
| French Infobox | Updated French Infobox | | |
| Ouverture | 17 juillet 1955 | Ouverture | July 17, 1955 | ; 66 yea |
| Osneyland Res | 34 ha. (340,000 m 1.2 1 ) | | |
| Superficie | 34 ha. (340 000 m | 2 | ) | ort, 1313 Disneyland Dr, Anoheim, Californie, Etats- | |
| Pays | Etwes-Unis | | |
| Etat | California | Type de parc | Parc A thing |
| Ville | Anaheim | dre d'att | Total : 20 | No of |
| Proprietaire | Disneyland Inc. | The Walt Disney Con | manage | 10,006 million | (2008) |
| Type de parc | Parc à thèm | dsreyland.disrey.go.or | |
| Site web | | | |
| Total : 39 | Nb de montagnes russes : 4 | Nb d'attractions aquatiques | | | |
| substance of the control. | 17 66W | 7 | 38.81 'N 117.92 ' W | Co
117. | | |
| abrea | 18 666 millions | (2018) | ins precede | Disreyland (1965–1998 |
| Site web | disneyland.disney.go.com | laison d'exploitation | Toute Farmer |
| Coordonnees | 3° 48′ 44″ nord, 117° 55′ 08″ ouest | I longologic | |
| L'endroit le plus heur | | | |
![20_image_0.png](20_image_0.png)
![20_image_1.png](20_image_1.png)
Match **UnMatch**
Alignment Precision Recall F1 **Precision Recall F1** Corpus-based 75.68 45.38 56.74 58.9 91.71 71.73
+ Key Only 74.45 58.62 62.14 62.44 89.37 73.52
+ Key-Value-Bi 82.78 85.66 84.2 82.53 88.73 85.52 + Key-Value-Uni 82.2 86.58 84.33 82.94 88.05 85.42
+ Multi-Key 82.16 86.68 84.36 83.05 88.01 85.46
Match **UnMatch**
Alignment Precision Recall F1 **Precision Recall F1** Corpus-based 89.94 56.41 69.33 47.19 95.26 63.11
+ Key Only 88.78 64.43 74.67 51.74 91.99 66.23
+ Key-Value-Bi 92.38 93.7 93.04 86.73 91.81 89.2 + Key-Value-Uni 92.13 94.13 93.12 86.75 90.58 88.62
+ Multi-Key 91.51 94.13 92.8 86.73 89.66 88.17
Table 20: Ten∗←→ Tzh alignment performance on Human-Annotated Test Data.
| Match | UnMatch | | | | | |
|-----------------|-----------|--------|-------|-----------|--------|-------|
| Alignment | Precision | Recall | F1 | Precision | Recall | F1 |
| Corpus-based | 94.81 | 41.1 | 57.34 | 37.55 | 96.19 | 71.73 |
| + Key Only | 92.04 | 61.04 | 73.4 | 46.6 | 94.81 | 73.52 |
| + Key-Value-Bi | 87.65 | 85.89 | 90.07 | 77.37 | 88.73 | 85.52 |
| + Key-Value-Uni | 88.59 | 86.52 | 90.34 | 78.53 | 88.05 | 85.42 |
| + Multi-Key | 91.15 | 88.59 | 90.14 | 78.52 | 88.01 | 85.46 |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
After conclusion before the ethic statement
✓ A2. Did you discuss any potential risks of your work?
In the limitation section
✓ A3. Do the abstract and introduction summarize the paper's main claims?
In the abstract and introduction in section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 2 (Dataset) And Section 4 (Model)
✓ B1. Did you cite the creators of artifacts you used?
Yes for models (Section 5)
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Non commercial academic use (dataset and models) discussed in the ethic statement section
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
In ethics statement section B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 3 and appendix
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 3 and appendix
## C ✓ **Did You Run Computational Experiments?** Section 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 5 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 5
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 5 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Section 3 D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
wang-etal-2023-t2iat | {T}2{IAT}: Measuring Valence and Stereotypical Biases in Text-to-Image Generation | https://aclanthology.org/2023.findings-acl.160 | *Warning: This paper contains several contents that may be toxic, harmful, or offensive.*In the last few years, text-to-image generative models have gained remarkable success in generating images with unprecedented quality accompanied by a breakthrough of inference speed. Despite their rapid progress, human biases that manifest in the training examples, particularly with regard to common stereotypical biases, like gender and skin tone, still have been found in these generative models. In this work, we seek to measure more complex human biases exist in the task of text-to-image generations. Inspired by the well-known Implicit Association Test (IAT) from social psychology, we propose a novel Text-to-Image Association Test (T2IAT) framework that quantifies the implicit stereotypes between concepts and valence, and those in the images. We replicate the previously documented bias tests on generative models, including morally neutral tests on flowers and insects as well as demographic stereotypical tests on diverse social attributes. The results of these experiments demonstrate the presence of complex stereotypical behaviors in image generations. | # T2Iat: Measuring Valence And Stereotypical Biases In Text-To-Image Generation
Jialu Wang, Xinyue Gabby Liu, Zonglin Di, Yang Liu, Xin Eric Wang∗
University of California, Santa Cruz Santa Cruz, CA, USA
{faldict, xliu167, zdi, yangliu, xwang366}@ucsc.edu
## Abstract
Warning: This paper contains several contents that may be toxic, harmful, or offensive.
In the last few years, text-to-image generative models have gained remarkable success in generating images with unprecedented quality accompanied by a breakthrough of inference speed. Despite their rapid progress, human biases that manifest in the training examples, particularly with regard to common stereotypical biases, like gender and skin tone, still have been found in these generative models. In this work, we seek to measure more complex human biases exist in the task of text-to-image generations. Inspired by the well-known Implicit Association Test (IAT) from social psychology, we propose a novel Text-to-Image Association Test (T2IAT) framework that quantifies the implicit stereotypes between concepts and valence, and those in the images. We replicate the previously documented bias tests on generative models, including morally neutral tests on flowers and insects as well as demographic stereotypical tests on diverse social attributes.
The results of these experiments demonstrate the presence of complex stereotypical behaviors in image generations.
## 1 Introduction
Recent progress on generative image models has centered around utilizing text prompts to produce high quality images that closely align with the provided natural language descriptions (Ramesh et al.,
2022; Nichol et al., 2022; Saharia et al., 2022; Yu et al., 2022; Chang et al., 2023). Easy access to these models, notably the open-sourced Stable Diffusion model (Rombach et al., 2022), has made it possible to develop them for a wide range of downstream applications at scale, such as generating stock photos (Raemont, 2022), and creating creative prototypes and digital assets (OpenAI, 2022).
![0_image_0.png](0_image_0.png)
The success of text-to-image generation was enabled by the availability and accessibility of massive image-text paired datasets scraped from the web (Schuhmann et al., 2022). However, it has been shown that data obtained by these curations may contain human biases in various ways
(Birhane et al., 2021). Selection bias occurs when the data is not properly collected from a diverse set of data sources, or the sources themselves do not properly represent groups of populations of interest. For example, it is reported that near half of the data samples of ImageNet came from the United States, while China and India, the two most populous countries in the world, were the contributors of only a small portion of the images (Shankar et al.,
∗ Corresponding Author.
2017). It is important to be aware that the generative models trained on such datasets may replicate and perpetuate the biases in the generated images
(Wolfe et al., 2022).
Our work seeks to quantify the implicit human biases in text-to-image generative models. A large body of literature has identified the social biases pertaining to gender and skin tone by analyzing the distribution of generated images across different social groups (Bansal et al., 2022; Cho et al., 2022).
These bias metrics build on the assumption that each generated image only associates with a single protected group of interest. However, in reality, the images might not belong to any of the protected groups when there is no discernible human subject or the appearances of the detectable human subjects are blurred and unclear. Moreover, the images may belong to multiple demographic groups when more than one human subjects are present in the image.
Therefore, these bias measures can easily fail to detect the subtle differences between the visual concepts reified in the images and the attributes they are associated with.
Unlike previous studies, our work aims to provide a nuanced understanding on more complex stereotypical biases in image generations than the straightforward demographic biases. Examples of the complex stereotypes includes: there is a belief that boys are inherently more talented at math, while girls are more adept at language (Nosek et al.,
2009); people with lighter skin tones are more likely to be appeared in home or hotel scenes, while people with dark skin tones are more likely to cooccur with object groups like vehicle (Wang et al.,
2020). We investigate how these biases will be reified and quantified in machine generated images, with a special focus on valence (association with negative or unpleasant vs. positive or pleasant concepts) and stereotypical biases. In this paper, we propose the Text-to-Image Association Test (T2IAT), a systematic approach to measure the implicit biases of image generations between target concepts and attributes (see Figure 1). One benefit of our bias test procedure is that it is not limited to specific demographic attributes. Rather, the bias test can be applied to a wide range of concepts and attributes, as long as the observed discrepancy between them can be justified as stereotyping biases by the model owners and users. For use cases, we conduct 8 image generation bias tests and the results of the tests exhibit various human-like biases at different significance levels as previously documented in social psychology.
We summarize our contribution as two-fold: first, we provide a generic test procedure to detect valence and stereotypical biases in image generation models. Second, we extensively conduct a variety of bias tests to provide evidence for the existence of such complex biases along with significance levels.
## 2 Related Work
Text-to-Image Generative Models aim to synthetic images from natural language descriptions.
There is a long history of image generation, and many works have been done in this area. Generative Adversarial Networks (GANs) (Goodfellow et al., 2020) and Variational Autoencoders
(Van Den Oord et al., 2017) (VAEs), as well as their variants, have been shown excellent capability of understanding both natural languages and visual concepts and generating high-quality images. Until recently, diffusion models (Ho et al.,
2020), such as DALL-E2, Stable Diffusion (Rombach et al., 2022), and Imagen (Saharia et al., 2022)
have gained a surge of attention due to their significant improvements in generating high-resolution photo-realistic images. Moreover, due to the development of multi-modal alignment (Radford et al.,
2021), text-to-image generation proves a promising intersection between representation learning and generative learning. Despite that there are several existing works (Ramesh et al., 2022; Nichol et al.,
2022; Saharia et al., 2022; Yu et al., 2022; Chang et al., 2023) aim to improve the quality of image generation, it still remains uncertain whether these generative models contain more complex humanlike biases.
However, we can see along with the development of text-to-image models, ethical concerns never disappeared. Cultural biases can be caused by the replacement of homoglyns (Struppek et al., 2023).
There are examples of inappropriate content generated by Stable Diffusion model (Schramowski et al., 2022), and fake images generated by text-toimage generation models, which can be misused in real-life (Sha et al., 2023). Moreover, membership leakage problem can still be found in typical text-to-image generation models (Wu et al., 2022),
followed by several existing works (Hu and Pang, 2023; Duan et al., 2023) on this issue targeting on image generation models based on diffusion models. These concerns all prove that text-to-image models require a thorough examination on the aspects of fairness, privacy, and security.
In this paper, we focus on measuring the human biases in Stable Diffusion, but the framework can be easily applied to other generative models.
Biases in Vision and Language Recent studies have examined a wide range of ethical considerations related to vision and language models (Burns et al., 2018; Wang et al., 2022b). Large language models are always trained with a large amount of text. Although the high amount of data can improve the performance of the model in language understanding, generation, etc, there is very likely some biases in the data, which will cause the language model to be biased (Zhao et al., 2017). To measure these biases, there are a variety of systematic works measuring stereotypical biases (Bolukbasi et al., 2016). Sentence Encoder Association Test (SEAT) (May et al., 2019) is an extension of the World Embedding Association Test (WEAT)
(Caliskan et al., 2017) to sentence-level representations. The difference between SEAT and WEAT
is that SEAT is a sentence-level version and SEAT
substitutes the attribute words and target words from WEAT into synthetic sentence templates. Another useful measurement is StereoSet (Nadeem et al., 2020), which is a crowdsourced dataset for measuring four types of stereotypical bias in language models. In addition, crowdsourced Stereotype Pairs (CrowS-Pairs) (Nangia et al., 2020) is a crowdsourced dataset that consists of pairs of minimally distant sentences which means that sentences are only different in limited tokens. Meade et al.
(2021); Bansal (2022) propose to measure biases in language models by counting how frequently the model prefers the stereotypical sentence in each pair over the anti-stereotypical sentence.
In addition to the language models, many prior works have quantified the biases in various computer vision tasks and illustrated the pre-trained computer vision models contain various biases on different axes (Buolamwini and Gebru, 2018; Wilson et al., 2019; Kim et al., 2021; Wang et al.,
2022a; Zhu et al., 2022). It has been demonstrated that such pre-trained models may bring the complex human biases into downstream applications, such as image search systems (Wang et al., 2021) and satellite segmentation (Zhang and Chunara, 2022). In particular, Steed and Caliskan (2021) show that self-supervised image encoders, such as iGPT (Chen et al., 2020a) and SimCLR (Chen et al., 2020b), may perpetuate stereotypes among intersectional demographic groups. Our work complements these works by measuring the complex biases in image generations.
## 3 Approach
In this work, we adapt the Implicit Association Test (IAT) in social psychology to the task of textto-image generation. We will first introduce the long history of association tests. But existing bias tests are primarily focusing on word embeddings.
Therefore, we present the Text-to-Image Association Test (T2IAT), which quantifies the human biases in images generated by text-to-image generation models.
## 3.1 Implicit Association Test
In social psychology, the Implicit Association Test
(IAT) introduced by Greenwald et al. (1998) is an assessment of implicit attitudes and stereotypes where the test subjects are held unconsciously, such as associations between concepts (*e.g.* people in light/dark skin color) and evaluations (*e.g.* pleasant/unpleasant) or stereotypes. In general, IAT can be categorized into valence IATs, in which concepts are tested for association with positive or negative valence, and stereotype IATs, in which concepts are tested for association with stereotypical attributes (*e.g.* "male" vs. "female"). During a typical IAT test procedure, the participants will be presented with a series of stimuli (*e.g.*, pictures of black and white faces, words related to gay and straight people) and are asked to categorize them as quickly and accurately as possible using a set of response keys (e.g., "pleasant" or "unpleasant" for valence evaluations, "family" or "career" for stereotypes). The IAT score is interpreted based on the difference in response times for a series of categorization tasks with different stimuli and attributes, and higher scores indicate stronger implicit biases.
For example, the Gender-Career IAT indicates that people are more likely to associate women with family and men with careers.
IAT was adapted to the field of natural language processing by measuring the associations between different words or concepts for language models
(Caliskan et al., 2017). Specifically, a systematic method, Word Embedding Association Test
(WEAT), is proposed to measure a wide range of human-like biases by comparing the cosine similarity of word embeddings between verbal stimuli and attributes. More recently, WEAT was extended to compare the similarity between embedding vectors for text prompts instead of words (May et al., 2019; Bommasani et al., 2020; Guo and Caliskan, 2021).
## 3.2 Text-To-Image Association Test
We borrow the terminology of association test from Caliskan et al. (2017) to describe our proposed bias test procedure. Consider two sets of *target* concepts X and Y like science and art, and two sets of *attribute* concepts A and B like men and women. The null hypothesis is that, regardless of the attributes, there is no difference in the association between the sets of images generated with the target concepts. In the context of Gender-Science bias test, the null hypothesis is saying that no matter whether the text prompts describe science or arts, the generative models should output images that are equally associated with women and men. We note that in such a gender stereotype setting, a naïve way to measure association is to count the numbers of men and women who appeared in the generated images.
This simplified measure reduces the fairness criteria to ensure that the image generation should contain the equal size of pictures depicting women and men, which has been adopted in many prior works (Tan et al., 2020; Bansal et al., 2022).
To validate the significance of the null hypothesis, we design a standard statistical hypothesis test procedure, as shown in Figure 1. The key challenge is how to measure the association for one target concept X with the attributes A and B, respectively. Our strategy is first to compose neutral text prompts about X that do not mention either A
or B. The idea is that the images generated with these neutral prompts should not be affected by the attributes but will be skewed towards them due to the possible implicit stereotyping biases in the generative model. We then include the attributes in the prompts and generate attribute-guided images. The distance between the neutral and attribute-guided images can be used to measure the association between the concepts and the attributes.
More specifically, we construct text prompts that are based on the target concepts, with or without the attributes. Let X and Y denote the neutral prompts related to the target concepts X and Y,
respectively. Similarly, we use XA to represent the set of text prompts that are created by editing X
with a set of attribute modifiers A corresponding to the attribute A. We feed these text prompts into the text-to-image generative model and use G(·)
to denote the set of generated images with input prompts. For ease of notation, we use lowercase letters to represent the image samples and those accented with right arrows to represent the vector representations of the images. We consider the following test statistics:
- **Differential association** measures the difference of the association between the target concepts with the attributes.
$$\begin{split} S(X,Y,A,B) &= \mathbb{E}\mathop{\rm Acc}(x,X^{A},X^{B})\\ &- \mathbb{E}\mathop{\rm Acc}(y,Y^{A},Y^{B})\\ &y \in \mathcal{G}(Y) \end{split}\tag{1}$$ Here $\mathop{\rm Acc}(x,X^{A},X^{B})$ is the association for one.
sample image with the attributes, i.e.,
$$\begin{array}{c}{{\operatorname{Ass}(x,X^{A},X^{B})=\operatorname*{\mathbb{E}}_{a\in{\mathcal{G}}(X^{A})}\cos({\vec{\mathbf{x}}},{\vec{\mathbf{a}}})}}\\ {{-\operatorname*{\mathbb{E}}_{b\in{\mathcal{G}}(X^{B})}\cos\!\left({\vec{\mathbf{x}}},{\vec{\mathbf{b}}}\right)}}\end{array}\quad(2)$$
In Eq (2), cos(·, ·) is the distance measure between images. While there are several different methods for measuring the distance between images, we choose to compute the cosine similarity between image embedding vectors that are generated with pre-trained vision encoders. During our experimental evaluation, we follow the fashion and use the vision encoder of CLIP model
(Radford et al., 2021) for convenience.
- p**-value** is a measure of the likelihood that a random permutation of the target concepts would produce a greater difference than the sample means. To perform the permutation test, we randomly split the set X ∪ Y into two partitions Xe and Ye of equal size. Note that the prompts in Xe might be related to concept Y and those in Ye might be related to concept X . The p-value of such a permutation test is given by
$$p=\operatorname*{Pr}\Bigl(|S({\tilde{X}},{\tilde{Y}},A,B)|>|S(X,Y,A,B)|\Bigr)\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad(3)$$
The p-value represents the degree to which the differential association is statistically significant. In practice, we simulate 1000 runs of the random permutation to compute the p-value for the sake of efficiency.
- **Effect size** d is a normalized measure of how separated the distributions of the associations between two target concepts are. We adopt the Cohen's d to compute the effect size by
$$d=\frac{\mathbb{E}_{x}[\text{Asc}(x,X^{A},X^{B})]-\mathbb{E}_{y}[\text{Asc}(y,Y^{A},Y^{B})]}{s}\tag{4}$$
s(4)
where s is the pooled standard deviation for the samples of Asc(x, XA, XB) and Asc(y, Y A, Y B). According to Cohen, effect size is classified as small (d = 0.2), medium
(d = 0.5), and large (d ≥ 0.8).
We present the whole bias test procedure in Algorithm 1. The defined bias measures the degree to which the generations of the target concepts exhibit a preference towards one attribute over another. One qualitative example is provided in the first column of Figure 2. Although the prompt of those figures does not specify gender, almost all of the generated images for science and career are depicting boys.
Algorithm 1 Bias test procedure Input: concepts X and Y , attributes A and B.
Output: S(*X, Y, A, B*), p, d.
1: Construct a set of neutral prompts related to the concepts X and Y . Then construct attribute guided prompts for attributes A and B, respectively.
2: For Z ∈ {*X, Y* }, generate the sets of images G(Z), G(Z
A) and G(Z
B) from the text prompts.
3: Compute S(*X, Y, A, B*) using Eq. 1.
4: Run the permutation test to compute the pvalue by Eq. 3.
5: Compute the effect size d by Eq. 4.
## 4 Experimental Setup 4.1 Concepts And Text Prompts
We replicate 8 bias tests for text-to-image generative models, including 6 valence tests: Flowers vs.
Insects, Musical Instruments vs. Weapons, Judaism vs. Christianity, European American vs. African American, light skin vs. dark skin, and straight vs.
gay; and 2 stereotype tests: science vs. arts and career vs. family. Each bias test includes two target concepts and two valence or stereotypical attributes.
Following Greenwald et al. (1998), we adopt the same set of verbal stimuli for each of the concepts and attributes. We present verbal stimuli for the selected concepts in Table 3. For valence tests, the evaluation attributes are pleasant and unpleasant.
For stereotype tests, the stereotyping attributes are male and female.
We systematically compose a set of representative text prompts with the collection of verbal stimuli for each pair of compared target concepts and attributes. The constructed text prompts will be fed into the diffusion model to generate images.
We will show the specific text prompts for each bias test in Section 5.
## 4.2 Generative Models
For our initial evaluation, we use the Stable Diffusion model stable-diffusion-2-1 (Rombach et al., 2022). We adopt the standard parameters as provided in the Huggingface's API to generate 10 images of size 512 × 512 for each text prompt, yielding hundreds of images for each concept. Through practical testing, we determined that this number of generations produces accurate estimates of the evaluated metrics with a high level of confidence. The number of denoising steps is set to 50 and the guidance scale is set to 7.5. The model uses OpenCLIP-ViT/H (Radford et al., 2021) to encode text descriptions.
## 5 Analytical Results 5.1 Valence Tests
Flowers and Insects We begin by exploring the non-offensive stereotypes about flowers and insects, as these do not involve any demographic groups.
The original IAT finding found that most people take less responding time to associate flowers with words that have pleasant meanings and insects with words that have unpleasant meanings (Greenwald et al., 1998). To replicate this test, we use the same set of verbal stimuli for flowers and insects categories that were used in the IAT test, as described in Table 3. We construct the text prompt
"a photo of {flower/insect}" to generate images without any valence interventions. In parallel, we append the words expressing pleasant or unpleasant attitudes after the constructed prompt to generate the images with positive or negative valence. Examples of generated images can be seen in Figure 2. We report the evaluated differential association S(*X, Y, A, B*), p-value, and effect size d in Table 1. To estimate the p-value, we perform the permutation test for 1,000 runs and find Table 1: Evaluated association scores, p**-values, and effect size for 8 bias tests.** The larger absolute values of association score and effect size indicate a large bias. Smaller p-value indicates the test result is more significant.
![5_image_0.png](5_image_0.png)
![5_image_1.png](5_image_1.png)
out that there is no other permutation of images that can yield a higher association score, indicating that the p-value is less than 1e−3. We note that an effect size of 0.8 generally indicates a strong association between concepts, and the effect size
Concept X Concept Y Attribute A Attribute B Association Score p-value effect size d
Flower Insect Pleasant Unpleasant 0.033 < 1e−3 1.492
Musical Instrument Weapon Pleasant Unpleasant 0.015 0.118 0.528 European American African American Pleasant Unpleasant 0.011 0.270 0.323
Light skin Dark skin Pleasant Unpleasant -0.025 0.019 -1.237
Straight Gay Pleasant Unpleasant 0.033 0.003 1.113 Judaism Christianity Pleasant Unpleasant -0.003 0.442 -0.099 Science Arts Male Female 0.019 0.200 0.193
Careers Family Male Female 0.026 < 1e−3 0.639
of 1.492 found in this test suggests that flowers are significantly more strongly associated with a positive valence, while insects are more strongly associated with a negative valence. Our observation demonstrates that human-like biases are universal in image generation models even when the concepts used are not associated with any social concerns.
Musical Instruments and Weapons To further understand the presence of implicit biases associated with text-prompt-generated images between non-offensive stereotypes, we perform the test on another set of non-offensive stereotypes of musical instruments and weapons by using the verbal stimuli for the original IAT test. Similar to our test on flowers and insects, we first generated images only on the object itself, with the text prompt "a picture of {musical instrument/weapon}", then we modified the text prompts to include pleasant and unpleasant attitudes, and, finally, generated images with positive or negative valence.
We report the evaluated differential association S(*X, Y, A, B*), p-value, and effect size d in Table 1. The differential association score of 0.015 indicates that there is little difference in the association between our target concepts of musical instruments and weapons and the attributes of pleasant and unpleasant. We retrieved an effect size of 0.528, which implies that musical instruments have a much stronger association with a positive valence, and instead, weapons show a stronger association with a negative valence.
Judaism and Christianity We also perform the valence test on the concepts concerning religion, particularly Judaism and Christianity. Consistent with the tests on the previously mentioned concepts, we have two sets of text prompts constructed with the verbal stimuli that are used in the IAT test for Judaism and Christianity and for Pleasant and Unpleasant. The first set comes without valence intervention, only using the provided verbal stimuli for Judaism and Christianity. The second set of text prompts incorporates terms linked to pleasant and unpleasant attitudes. We derived images based on the different sets of prompts constructed.
The valence test for this set of concepts yields a very small effect size, −0.099, suggesting that humans hold a rather neutral attitude towards Judaism and Christianity, only with a slight pleasantness towards Christianity and a little unpleasantness towards Judaism. The differential association score of −0.003 demonstrates a tiny difference in the association between the two religions of Judaism and Christianity and the two social attitudes of pleasantness and unpleasantness. Our finding overturns the religion stereotype previously documented in IAT tests.
European American and African American In this valence test, we seek to explore the implicit racial stereotypes, besides non-harmful stereotypes, of European Americans and African Americans.
From the original IAT paper, two sets of common European American and African American names are provided, and the result from our test shows that it is much easier to associate European American names with words that suggest a pleasant attitude and African American names with words that imply an unpleasant attitude. In our test, we continue to use the verbal stimuli for European American and African American names retrieved from (Tzioumis, 2018) to construct our text prompts. For the text-prompt-generated images that are not valence-related, we use the text prompt "a portrait of {European American name/African American name}". Meanwhile, we create valence-related text prompt by including terms that embody pleasant and unpleasant attitudes. We recognize that there is an inconspicuous association between European American and pleasant terms and that between African American and unpleasant terms from the value of effect size of 0.323. The differential score of 0.011 shows a subtle association between the concepts of European American and African American and the attributes of pleasant and unpleasant.
Light Skin and Dark Skin This valence test reveals the hostile biases towards humans with light skin and dark skin in the same racial group. We use the verbal stimuli collected by Project Implicit, a project initiated by Nosek et al. (2007), that aims to educate people on biases. Following the pattern of our purposed test, we create a set of text prompts without valence for both light skin and dark skin and another set of text prompts that consider the valence attributes of pleasant and unpleasant. We calculate the differential association S(*X, Y, A, B*),
p-value, and effect size d of the images generated based on the text prompts we constructed. We obtain a considerably large effect size of −1.237, indicating that light skin is much more closely associated with an unpleasant attribute, and dark skin, on the other hand, has a strong association with a pleasant attribute. In addition, we have a moderate p-value, 0.019, which way exceeds the statistically significant value of 0.05.
Straight and Gay We examine the implicit bias towards sexuality in this valence test that targets the concepts of straight and gay. Text prompts that do not contain the factor of valence are created, along with those composed with pleasant and unpleasant attitudes using the method as other valence tests. By running through text-to-image generative models, corresponding images are produced. We receive the effect size of 1.113, which is much bigger than the defined large effect size value of 0.8.
It suggests that the association between the concept of straight and the attribute of pleasant is significantly strong and that of gay and the attribute of unpleasant is tremendously strong as well. We also note that the p-value is 0.003, which is lower than 0.005.
The valence tests show that not only non-harmful human biases, but also hostile stereotypical biases such as inter-racial, intra-racial, and sexual biases exist in the text-to-image generative models.
## 5.2 Stereotype Tests
We conduct two gender-related stereotypical tests:
gender-science and gender-career tests.
Science and Art We use the text prompt "a person studying {science/art}" for image generations. To generate images associated with male and female attributes, we modify the "person" with gender-specific words, such as "woman", "girl",
"man", "boy", *etc.* The evaluated effect size of 0.193 is small, and demonstrates that the distribution of the association scores does not differ too much. In addition, the p-value of 0.200 is relatively large. This bias test demonstrates that the
![7_image_1.png](7_image_1.png)
![7_image_0.png](7_image_0.png)
evaluated generative model does not contain bias towards science and art as is documented in human biases.
Career and Family The original IAT test has found that females are more associated with family and males with career (Nosek et al., 2002). To replicate this test with image generations, we use the template of text prompts "a person focusing on
{career / family}" to generate images. We find that the effect size of 0.639 is relatively large and the p-value is less than < 1e−3, indicating career is significantly more strongly associated with male than female.
## 5.3 Gender Stereotype In Occupations
Prior work has demonstrated that text prompts pertaining to occupations may lead the model to reconstruct social disparities regarding gender and racial groups, even though they make no mention of such demographic attributes (Bianchi et al., 2022). We are also interested in how the generated images are skewed towards women and men, assessed by their association scores with gender.
We collect the list of common occupation titles from the U.S. Bureau of Labor Statistics1. For each occupation title, we construct the gender-neutral text prompt "A photo of a {occupation}", and gender-specific versions by amending gendered descriptions. For each occupation, we use Stable Diffusion to generate 100 gender-neutral images, 100 masculine images, and 100 feminine images, respectively. We use Eq. (2) to calculate the association score between occupation and gender attributes.
We plot the distribution of association scores, and the quartiles, for eight different occupations in Figure 3. The figure shows that the 0.75 quantiles of association scores for computer programmers and pharmacists are higher than the others by a large margin, indicating that these occupations are more strongly associated with men. Conversely, the mean association scores for elementary school teachers, librarians, announcers, and chemists are negative, indicating that these occupations are more strongly associated with women. The association score for chef and police is neutral, suggesting that there is insufficient evidence to establish a stereotype.
## 5.4 Stereotype Amplification
Do images generated by the diffusion model amplify the implicit stereotypes in the textual representations used to guide image generation? Specif1https://www.bls.gov/oes/current/oes_
stru.htm
| Concept | Attribute | Score |
|--------------------|-------------------------|---------|
| Flowers | Pleasant vs. Unpleasant | 1.00 |
| Insects | Pleasant vs. Unpleasant | 0.15 |
| Musical Instrument | Pleasant vs. Unpleasant | 0.90 |
| Weapon | Pleasant vs. Unpleasant | 0.05 |
| Science | Male vs. Female | 0.75 |
| Arts | Male vs. Female | 0.30 |
| Careers | Male vs. Female | 0.75 |
| Family | Male vs. Female | 0.40 |
ically, we examine occupational images and calculate the association scores between the text prompts by substituting the text embeddings of CLIP into Eq. (2) and Eq. (1). We then compare these associations for text prompts to the associations for the generated images to investigate whether the biases are amplified.
Figure 4 demonstrates the stereotype amplification between text prompts and generated images.
For each occupation, we use an arrow to represent the change of associations on the axis of gender.
We observe that the associations are amplified on a large scale for most occupations. In particular, the textual association between a computer programmer and gender is only −0.0039 but enlarged to 0.0186 for images. Similar amplifications are observed for elementary school teachers, librarians, and chemists. For the occupation of chef, the association of text prompts is skewed towards females, while the association of images is skewed towards males.
## 5.5 Comparison To Human Evaluation
We recruit university students to evaluate the generated images and compare how the perceptions of human differ with the machine-evaluated association scores. Specifically, for each set of concepts, we ask three student participants to view 20 images generated with neutral prompts and choose which valence or stereotypical attribute is more closely associated. We report the fraction of images that are chosen as being more closely associated with pleasant or male attributes. As shown in Table 2, the human's preference of association aligns with the strength of our association scores. For flowers vs. insects and musical instruments vs. weapon, humans mostly prefer to associate flowers and musical instruments with pleasant while insects and weapons with unpleasant. For science vs. arts and career vs. family, we find that the significance of the bias is reduced. The Kendall's τ coefficient between the machine-evaluated and human-rated scores is 0.55, indicating that the association scores can properly represent human's perceptions.
## 6 Discussion
Our bias test was applied to testing biases of images generated by the state-of-the-art text-to-image generative model, associated with valence and gender attributes of variation of concepts such as careers, religions, skin tone, etc. In the example of the valence test for images generated for Straight & Gay concepts, we observed a significant bias of pleasant attitudes towards people with straight sexual orientation and unpleasantness towards people with gay sexual orientation; the findings successfully mirrored the acknowledged human biases. Similar to the Stable Diffusion example we selected in our work, the proposed bias test can be applied to other generative models with the experiment in resemblance to quantify existing implicit biases.
The proposed Text-to-Image Association Test is a principal approach for measuring the complex implicit biases in image generations. The primary result illustrates the valence and stereotypical biases across various dimensions, ranging from morally neutral to demographically sensitive, in a state-ofthe-art generative model at different scales. The presented research adds to the growing literature on AI ethics by highlighting the complex biases present in AI-generated images and serves as a caution for practitioners to be aware of these biases.
## 7 Limitations
Our work has some limitations. Although we use the same verbal stimuli in the previous IAT tests for creating text prompts, it is very likely that some stimuli that can represent the concepts are underrepresented. The approach we adopted for comparing the images' distance might be biased as well.
The current bias test procedure applies the visual encoder of OpenAI's CLIP model to measure the distance between images. However, it is unclear whether the image encoder may inject additional biases into the latent visual representations.
## Ethics Statement
The scope of this work is to provide a principal procedure for measuring the implicit valence and stereotypical biases in image generations. The experiments conducted involve generating images that pertain to demographic groups, and all images were generated in compliance with the terms of service and guidelines provided by the stable diffusion's license. The AI-generated images are used solely for research purposes and no identities are explicitly attributed to individuals depicted in the images. People's names are used to generate images. We justify that these are common American names publicly accessible, and do not contain any information that can uniquely identify an individual.
## Acknowledgement
We thank the anonymous reviewers for their constructive comments. This work is primarily supported by X. Wang's startup fund. J. Wang and Y.
Liu are also partially supported by the National Science Foundation (NSF) under grants IIS-2143895 and IIS-2040800.
## References
Hritik Bansal, Da Yin, Masoud Monajatipoor, and KaiWei Chang. 2022. How well can text-to-image generative models understand ethical natural language interventions? *ArXiv*, abs/2210.15230.
Rajas Bansal. 2022. A survey on bias and fairness in natural language processing. arXiv preprint arXiv:2204.09591.
Federico Bianchi, Pratyusha Kalluri, Esin Durmus, Faisal Ladhak, Myra Cheng, Debora Nozza, Tatsunori Hashimoto, Dan Jurafsky, James Zou, and Aylin Caliskan. 2022. Easily accessible text-toimage generation amplifies demographic stereotypes at large scale. *arXiv preprint arXiv:2211.03759*.
Abeba Birhane, Vinay Uday Prabhu, and Emmanuel Kahembwe. 2021. Multimodal datasets: misogyny, pornography, and malignant stereotypes. *CoRR*,
abs/2110.01963.
Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to computer programmer as woman is to homemaker?
debiasing word embeddings. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS'16, page 4356–4364, Red Hook, NY, USA. Curran Associates Inc.
Rishi Bommasani, Kelly Davis, and Claire Cardie. 2020.
Interpreting Pretrained Contextualized Representations via Reductions to Static Embeddings. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 4758–
4781, Online. Association for Computational Linguistics.
Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. In *Conference on fairness, accountability and transparency*, pages 77–91.
PMLR.
Kaylee Burns, Lisa Anne Hendricks, Trevor Darrell, and Anna Rohrbach. 2018. Women also snowboard: Overcoming bias in captioning models. In *ECCV*.
Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan.
2017. Semantics derived automatically from language corpora contain human-like biases. *Science*,
356(6334):183–186.
Huiwen Chang, Han Zhang, Jarred Barber, AJ Maschinot, José Lezama, Lu Jiang, Ming Yang, Kevin P. Murphy, William T. Freeman, Michael Rubinstein, Yuanzhen Li, and Dilip Krishnan. 2023. Muse: Text-to-image generation via masked generative transformers. *ArXiv*,
abs/2301.00704.
Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever. 2020a.
Generative pretraining from pixels. In International conference on machine learning, pages 1691–1703.
PMLR.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020b. A simple framework for contrastive learning of visual representations. In *International conference on machine learning*, pages 1597–1607. PMLR.
Jaemin Cho, Abhay Zala, and Mohit Bansal. 2022.
Dall-eval: Probing the reasoning skills and social biases of text-to-image generative transformers. *CoRR*,
abs/2202.04053.
Jinhao Duan, Fei Kong, Shiqi Wang, Xiaoshuang Shi, and Kaidi Xu. 2023. Are diffusion models vulnerable to membership inference attacks?
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2020. Generative adversarial networks. *Communications of the ACM*,
63(11):139–144.
Anthony G Greenwald, Debbie E. McGhee, and Jordan L. K. Schwartz. 1998. Measuring individual differences in implicit cognition: the implicit association test. *Journal of personality and social psychology*,
74 6:1464–80.
Wei Guo and Aylin Caliskan. 2021. Detecting emergent intersectional biases: Contextualized word embeddings contain a distribution of human-like biases. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, AIES '21, page 122–133, New York, NY, USA. Association for Computing Machinery.
Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840– 6851.
Hailong Hu and Jun Pang. 2023. Membership inference of diffusion models.
Eugenia Kim, De'Aira Bryant, Deepak Srikanth, and Ayanna Howard. 2021. Age bias in emotion detection: An analysis of facial emotion recognition performance on young, middle-aged, and older adults.
In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pages 638–644.
Chandler May, Alex Wang, Shikha Bordia, Samuel R.
Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long and Short Papers), pages 622–628, Minneapolis, Minnesota. Association for Computational Linguistics.
Nicholas Meade, Elinor Poole-Dayan, and Siva Reddy.
2021. An empirical survey of the effectiveness of debiasing techniques for pre-trained language models.
arXiv preprint arXiv:2110.08527.
Moin Nadeem, Anna Bethke, and Siva Reddy. 2020.
Stereoset: Measuring stereotypical bias in pretrained language models. *arXiv preprint arXiv:2004.09456*.
Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R Bowman. 2020. Crows-pairs: A challenge dataset for measuring social biases in masked language models. *arXiv preprint arXiv:2010.00133*.
Alexander Quinn Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob Mcgrew, Ilya Sutskever, and Mark Chen. 2022. GLIDE:
Towards photorealistic image generation and editing with text-guided diffusion models. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of *Proceedings of Machine Learning* Research, pages 16784–16804. PMLR.
Brian A. Nosek, Mahzarin R. Banaji, and Anthony G
Greenwald. 2002. Harvesting implicit group attitudes and beliefs from a demonstration web site. *Group* Dynamics: Theory, Research, and Practice, 6:101–
115.
Brian A. Nosek, Frederick L. Smyth, Jeffrey Jay Hansen, Thierry Devos, Nicole M. Lindner, Kate A Ranganath, Colin Tucker Smith, Kristina R. Olson, and Dolly Chugh. 2007. Pervasiveness and correlates of implicit attitudes and stereotypes. European Review of Social Psychology, 18:36 - 88.
Brian A. Nosek, Frederick L. Smyth, Natarajan Sriram, Nicole M. Lindner, Thierry Devos, Alfonso Ayala, Yoav Bar-Anan, Robin Bergh, Huajian Cai, Karen Gonsalkorale, Selin Kesebir, Norbert Maliszewski, Félix Neto, Eero Olli, Jaihyun Park, Konrad Schnabel, Kimihiro Shiomura, Bogdan Tudor Tulbure, Reinout W. Wiers, Mónika Somogyi, Nazar Akrami, Bo Ekehammar, Michelangelo Vianello, Mahzarin R.
Banaji, and Anthony G Greenwald. 2009. National differences in gender–science stereotypes predict national sex differences in science and math achievement. *Proceedings of the National Academy of Sciences*, 106:10593 - 10597.
OpenAI. 2022. https://openai.com/blog/dall-e-2extending-creativity/.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 8748–8763. PMLR.
Nina Raemont. 2022. Adobe stock to allow ai-generated images on its service.
Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical textconditional image generation with clip latents.
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. Highresolution image synthesis with latent diffusion models. In *Proceedings of the IEEE/CVF Conference* on Computer Vision and Pattern Recognition, pages 10684–10695.
Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Raphael Gontijo-Lopes, Burcu Karagol Ayan, Tim Salimans, Jonathan Ho, David J. Fleet, and Mohammad Norouzi. 2022. Photorealistic text-to-image diffusion models with deep language understanding. In *Advances in Neural Information Processing Systems*.
Patrick Schramowski, Manuel Brack, Björn Deiseroth, and Kristian Kersting. 2022. Safe latent diffusion:
Mitigating inappropriate degeneration in diffusion models.
Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, Patrick Schramowski, Srivatsa Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk, and Jenia Jitsev. 2022. Laion5b: An open large-scale dataset for training next generation image-text models. *ArXiv*, abs/2210.08402.
Zeyang Sha, Zheng Li, Ning Yu, and Yang Zhang. 2023.
De-fake: Detection and attribution of fake images generated by text-to-image generation models.
Shreya Shankar, Yoni Halpern, Eric Breck, James Atwood, Jimbo Wilson, and D. Sculley. 2017. No classification without representation: Assessing geodiversity issues in open data sets for the developing world.
arXiv: Machine Learning.
Ryan Steed and Aylin Caliskan. 2021. Image representations learned with unsupervised pre-training contain human-like biases. In *Proceedings of the 2021 ACM*
conference on fairness, accountability, and transparency, pages 701–713.
Lukas Struppek, Dominik Hintersdorf, Felix Friedrich, Manuel Brack, Patrick Schramowski, and Kristian Kersting. 2023. Exploiting cultural biases via homoglyphs in text-to-image synthesis.
Shuhan Tan, Yujun Shen, and Bolei Zhou. 2020. Improving the fairness of deep generative models without retraining. *arXiv preprint arXiv:2012.04842*.
Konstantinos Tzioumis. 2018. Demographic aspects of first names. *Scientific Data*, 5.
Aaron Van Den Oord, Oriol Vinyals, et al. 2017. Neural discrete representation learning. Advances in neural information processing systems, 30.
Angelina Wang, Alexander Liu, Ryan Zhang, Anat Kleiman, Leslie Kim, Dora Zhao, Iroha Shirai, Arvind Narayanan, and Olga Russakovsky. 2022a.
REVISE: A tool for measuring and mitigating bias in visual datasets. *International Journal of Computer* Vision (IJCV).
Angelina Wang, Arvind Narayanan, and Olga Russakovsky. 2020. Revise: A tool for measuring and mitigating bias in visual datasets. *International Journal of Computer Vision*, 130:1790 - 1810.
Jialu Wang, Yang Liu, and Xin Wang. 2021. Are genderneutral queries really gender-neutral? mitigating gender bias in image search. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1995–2008, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Jialu Wang, Yang Liu, and Xin Wang. 2022b. Assessing multilingual fairness in pre-trained multimodal representations. In *Findings of the Association for* Computational Linguistics: ACL 2022, pages 2681–
2695, Dublin, Ireland. Association for Computational Linguistics.
Benjamin Wilson, Judy Hoffman, and Jamie Morgenstern. 2019. Predictive inequity in object detection.
arXiv preprint arXiv:1902.11097.
Robert Wolfe, Mahzarin R. Banaji, and Aylin Caliskan.
2022. Evidence for hypodescent in visual semantic ai.
In *2022 ACM Conference on Fairness, Accountability, and Transparency*, FAccT '22, page 1293–1304, New York, NY, USA. Association for Computing Machinery.
Yixin Wu, Ning Yu, Zheng Li, Michael Backes, and Yang Zhang. 2022. Membership inference attacks against text-to-image generation models.
Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, Ben Hutchinson, Wei Han, Zarana Parekh, Xin Li, Han Zhang, Jason Baldridge, and Yonghui Wu.
2022. Scaling autoregressive models for content-rich text-to-image generation. *Transactions on Machine* Learning Research. Featured Certification.
Miao Zhang and Rumi Chunara. 2022. Fair contrastive pre-training for geographic images.
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2979–2989, Copenhagen, Denmark. Association for Computational Linguistics.
Zhaowei Zhu, Tianyi Luo, and Yang Liu. 2022. The rich get richer: Disparate impact of semi-supervised learning. In International Conference on Learning Representations.
## A Additional Experiment Details
We show the detailed verbal stimuli for all the 8 bias tests in Table 3.
| Concept | Verbal Stimuli |
|------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Flowers | aster, clover, hyacinth, marigold, poppy, azalea, crocus, iris, orchid, rose, bluebell, daffodil, lilac, pansy, tulip, buttercup, daisy, lily, peony, violet, carnation, gladiola, magnolia, petunia, zinnia. |
| Insects | ant, caterpillar, flea, locust, spider, bedbug, centipede, fly, maggot, tarantula, bee, cockroach, gnat, mosquito, termite, beetle, cricket, hornet, moth, wasp, blackfly, dragonfly, horsefly, roach, weevil. |
| Musical Instruments | bagpipe, cello, guitar, lute, trombone, banjo, clarinet, harmonica, mandolin, trumpet, bassoon, drum, harp, oboe, tuba, bell, fiddle, harpsichord, piano, viola, bongo, flute, horn, saxophone, violin. |
| Weapon | arrow, club, gun, missile, spear, axe, dagger, harpoon, pistol, sword, blade, dynamite, hatchet, rifle, tank, bomb, firearm, knife, shotgun, teargas, cannon, grenade, mace, slingshot, whip. |
| European American | Adam, Chip, Harry, Josh, Roger, Alan, Frank, Ian, Justin, Ryan, Andrew, Fred, Jack, Matthew, Stephen, Brad, Greg, Jed, Paul, Todd, Brandon, Hank, Jonathan, Peter, Wilbur, Amanda, Courtney, Heather, Melanie, Sara, Amber, Crystal, Katie, Meredith, Shannon, Betsy, Donna, Kristin, Nancy, Stephanie, Bobbie-Sue, Ellen, Lauren, Peggy, Sue-Ellen, Colleen, Emily, Megan, Rachel, Wendy (deleted names in italics). |
| African American | Alonzo, Jamel, Lerone, Percell, Theo, Alphonse, Jerome, Leroy, Rasaan, Torrance, Darnell, Lamar, Lionel, Rashaun, Tvree, Deion, Lamont, Malik, Terrence, Tyrone, Everol, Lavon, Marcellus, Terryl, Wardell, Aiesha, Lashelle, Nichelle, Shereen, Temeka, Ebony, Latisha, Shaniqua, Tameisha, Teretha, Jasmine, Latonya, Shanise, Tanisha, Tia, Lakisha, Latoya, Sharise, Tashika, Yolanda, Lashandra, Malika, Shavonn, Tawanda, Yvette (deleted names in italics). |
| light skin | light-skinned person, light-skinned girl, light-skinned woman, lightskinned women, light-skinned boy, light-skinned man, light-skinned men, light-skinned family, light-skinned community. |
| Dark skin | dark-skinned person, dark-skinned girl, dark-skinned woman, darkskinned women, dark-skinned boy, dark-skinned man, dark-skinned men, dark-skinned family, dark-skinned community. |
| Straight | straight person, straight girl, straight woman, straight women, straight boy, straight man, straight men, straight family, straight community. |
| Gay | gay person, gay girl, gay woman, gay women, gay boy, gay man, gay men, gay family, gay community. |
| Judaism | synagogue, torah, jew, judaism. |
| Christianity | church, bible, christian, christianity. |
| Career | executive, management, professional, corporation, salary, office, business, career. |
| Family | home, parents, children, family, cousins, marriage, wedding, relatives. |
| Science | science, technology, astronomy, math, chemistry, physics, biology, geology, engineering. |
| Arts | poetry, art, history, humanities, English, philosophy, music, literature. |
| Pleasant | caress, freedom, health, love, peace, cheer, friend, heaven, loyal, pleasure, diamond, gentle, honest, lucky, rainbow, diploma, gift, honor, miracle, sunrise, family, happy, laughter, paradise, vacation. |
| Unpleasant | abuse, crash, filth, murder, sickness, accident, death, grief, poison, stink, assault, disaster, hatred, pollute, tragedy, bomb, divorce, jail, poverty, ugly, cancer, evil, kill, rotten, vomit. |
| Male | male, man, boy, brother, son. |
| Female | female, woman, girl, sister, daughter. |
| Table 3: Verbal stimuli for each of the concepts and attributes. | |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 7
✓ A2. Did you discuss any potential risks of your work?
Ethics Statement
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1: Introduction
✓ A4. Have you used AI writing assistants when working on this paper?
We use ChatGPT to rephrase and polish the Introduction section. However, we also manually edit the generated response to make sure the meaning of the content did not change.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Left Blank.
✓ B1. Did you cite the creators of artifacts you used?
Section 1
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Ethics Statement
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
We justified this in the ethics statement.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We collected common American names to generate images. However, these names are very common and publicly available. They cannot be used to identify any individual people.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
See Appendix A.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
See Section 4. We generate 10 images for each text prompts.
## C ✗ **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? Not applicable. Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Not applicable. Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Not applicable. Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Appendix B.
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Appendix B.
✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
We recruit student from the university and credit them with $100 gift cards.
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Appendix B.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
ben-abacha-etal-2023-investigation | An Investigation of Evaluation Methods in Automatic Medical Note Generation | https://aclanthology.org/2023.findings-acl.161 | Recent studies on automatic note generation have shown that doctors can save significant amounts of time when using automatic clinical note generation (Knoll et al., 2022). Summarization models have been used for this task to generate clinical notes as summaries of doctor-patient conversations (Krishna et al., 2021; Cai et al., 2022). However, assessing which model would best serve clinicians in their daily practice is still a challenging task due to the large set of possible correct summaries, and the potential limitations of automatic evaluation metrics. In this paper we study evaluation methods and metrics for the automatic generation of clinical notes from medical conversation. In particular, we propose new task-specific metrics and we compare them to SOTA evaluation metrics in text summarization and generation, including: (i) knowledge-graph embedding-based metrics, (ii) customized model-based metrics with domain-specific weights, (iii) domain-adapted/fine-tuned metrics, and (iv) ensemble metrics. To study the correlation between the automatic metrics and manual judgments, we evaluate automatic notes/summaries by comparing the system and reference facts and computing the factual correctness, and the hallucination and omission rates for critical medical facts. This study relied on seven datasets manually annotated by domain experts. Our experiments show that automatic evaluation metrics can have substantially different behaviors on different types of clinical notes datasets. However, the results highlight one stable subset of metrics as the most correlated with human judgments with a relevant aggregation of different evaluation criteria. | # An Investigation Of Evaluation Metrics For Automated Medical Note Generation
Asma Ben Abacha Microsoft Health AI
[email protected] George Michalopoulos Microsoft Health AI
[email protected]
## Abstract
Recent studies on automatic note generation have shown that doctors can save significant amounts of time when using automatic clinical note generation (Knoll et al., 2022). Summarization models have been used for this task to generate clinical notes as summaries of doctorpatient conversations (Krishna et al., 2021; Cai et al., 2022). However, assessing which model would best serve clinicians in their daily practice is still a challenging task due to the large set of possible correct summaries, and the potential limitations of automatic evaluation metrics. In this paper, we study evaluation methods and metrics for the automatic generation of clinical notes from medical conversations. In particular, we propose new task-specific metrics and we compare them to SOTA evaluation metrics in text summarization and generation, including: (i) knowledge-graph embeddingbased metrics, (ii) customized model-based metrics, (iii) domain-adapted/fine-tuned metrics, and (iv) ensemble metrics. To study the correlation between the automatic metrics and manual judgments, we evaluate automatic notes/summaries by comparing the system and reference facts and computing the factual correctness, and the hallucination and omission rates for critical medical facts. This study relied on seven datasets manually annotated by domain experts. Our experiments show that automatic evaluation metrics can have substantially different behaviors on different types of clinical notes datasets. However, the results highlight one stable subset of metrics as the most correlated with human judgments with a relevant aggregation of different evaluation criteria.
## 1 Introduction
In recent years, the volume of data created in healthcare has grown considerably as a result of record keeping policies (Kudyba, 2010). The documentation requirements for electronic health records significantly contribute to physician burnout and Wen-wai Yim Microsoft Health AI
[email protected]
## Thomas Lin
Microsoft Health AI
[email protected] work-life imbalance (Arndt et al., 2017). Automatic generation of clinical notes can help healthcare providers by significantly reducing the time they spend on documentation, and allowing them to spend more time with patients (Payne et al., 2018). It can also improve the clinical notes' accuracy by reducing errors and inconsistencies in documentation, leading to patient records with higher quality.
A reliable evaluation methodology is necessary to build and improve clinical note generation systems, but faces the two traditional limitations of evaluating Natural Language Generation (NLG)
systems. On one hand, human-expert evaluation, considered to be the most reliable way to evaluate NLG systems, can be both time-consuming and expensive. On the other hand, evaluating the performance of natural language generation (NLG)
systems automatically can be challenging due to the complexity of human language.
Several metrics have been proposed to evaluate the performance of NLG systems, including lexical N-gram based metrics and embedding-based metrics that measure the similarity between a system's generated text and one or more reference texts using pre-trained language models.
While several research efforts studied and compared automatic evaluation metrics on many opendomain and domain-specific datasets such as the CNN/DailyMail and TAC datasets (Lin, 2004a; Owczarzak et al., 2012; Peyrard, 2019; Fabbri et al.,
2021a; Deutsch et al., 2022), very few research works addressed the adequacy of evaluation metrics to the task of clinical note generation, where e.g., omitting critical medical facts in the generated text is a more significant failure point. To the best of our knowledge, only one research paper addressed this task based on one synthetic dataset of 57 mock consultation transcripts and summary notes (Moramarco et al., 2022).
In this paper, we study evaluation methods and metrics for the automatic generation of clinical 2575 notes from medical conversations, including their correlations with human assessments of factual omissions and hallucinations. We also propose new task-specific metrics and we compare them to SOTA evaluation metrics across several clinical text summarization datasets.
Our contributions are as follows:
- We study the relevance and impact of a widerange of existing automatic evaluation metrics in clinical note generation.
- We propose and study four types of evaluation metrics for the task of automatic note generation: knowledge-graph embedding-based metrics, customized model-based metrics, domain-adapted/fine-tuned metrics, and ensemble metrics1.
- We compare these metrics with SOTA metrics by performing a wide evaluation with 21 metrics according to different criteria such as factual correctness, hallucination, and omission rates.
- To perform a fact-based evaluation of the generated notes, we annotate seven datasets of automatically generated clinical notes using key phrase- and fact-based annotation guidelines that we use to compute reference manual scores for the correlation study2.
## 2 Related Work
Different evaluation metrics are commonly used to evaluate text summarization and generation including ROUGE-N (Lin, 2004b), BERTScore (Zhang* et al., 2020), MoverScore (Zhao et al., 2019),
BARTScore (Yuan et al., 2021), and BLEURT (Sellam et al., 2020). Other metrics have been also proposed for evaluating factual consistency and faithfulness (Durmus et al., 2020; Maynez et al., 2020; Wang et al., 2020; Pagnoni et al., 2021; Zhang et al.,
2022).
To study their effectiveness, several efforts focused on comparing automatic metrics such as ROUGE and BLEU based on their correlation with human judgments (Graham, 2015), and showed that automatic evaluation of generated summaries 1We publish the source code and fine-tuned checkpoint at: https://github.com/abachaa/
EvaluationMetrics-ACL23 2We also release the manual annotations: https://
github.com/abachaa/EvaluationMetrics-ACL23 still has several limitations and biases (Hardy et al.,
2019; Fabbri et al., 2021b). Furthermore, in (Bhandari et al., 2020), the authors showcase that the effectiveness of an evaluation metric depends on the task (e.g. summarization) and on the application scenario (e.g. system-level/ summary level).
Despite observations of frequent disagreements in manual evaluation campaigns (Howcroft et al.,
2020), expert-based evaluation remains an effective method to assess the performance of automatic metrics, especially in specialized domains. However, it relies on the availability of domain experts to rate the summaries and relevant datasets. Recently,
(Moramarco et al., 2022) studied the task of medical note generation on a small set of 57 transcriptnote pairs, manually annotated by clinicians. Their experiments showed that character-based Levenshtein distance, BERTScore, and METEOR performed best for evaluating automatic note generation in that dataset.
## 3 Evaluation Methodology
To assess the relevance and suitability of automatic evaluation metrics for the task of clinical note generation, we create expert-based annotations for critical aspects such as factual consistency, hallucinations, and omissions. We then assess each metric in light of its correlation with manual scores generated from the expert annotations.
## 3.1 Fact-Based Annotation
We define a fact as information that cannot be written in more than one sentence (e.g., *"Family history is significant for coronary artery disease.*").
Medical facts include problems, allergies, medical history, treatments, medications, tests, laboratory/radiology results, and diagnoses. We also include the patient age, gender, and race, and expand the critical facts to the patient and his family.
Annotators extracted individual facts from both the reference and system summaries in the form of subject-predicate-object expressions and following the above fact definition.
Comparing between a reference and hypothesis summary, and referencing the source text/conversation if required, the annotators were additionally tasked to identify overlapping and nonoverlapping facts to one of several categories which were later automatically counted. These included:
- Critical Omissions: the number of medical facts that were omitted,
| Dataset | #Summary | #Words | #Words | Annotations |
|-------------------|------------|------------|----------|---------------|
| Pairs | /Summary | /Reference | | |
| MTS-DIALOG | 400 | 15 | 36 | Facts |
| MEDIQA-RRS | 182 | 18 | 28 | Facts |
| CONSULT-FACTS | 54 | 203 | 214 | Facts |
| CONSULTHPI | 3,397 | 333 | 336 | Key Phrases |
| CONSULTASSESSMENT | 3,141 | 149 | 177 | Key Phrases |
| CONSULTEXAM | 2,144 | 163 | 137 | Key Phrases |
| CONSULTRESULTS | 540 | 38 | 15 | Key Phrases |
- Hallucinations: the number of hallucinated facts. Hallucinations are factual errors that do not exist in the source text and cannot be supported by the source facts (e.g., added dates, names, or treatments).
- Correct Facts: the number of correct facts according to the input conversation and the reference summary, and
- Incorrect Facts: the number of incorrect facts outside of hallucinations. Incorrect facts include values and attributes that are incorrectly copied from the source (e.g., date with a wrong year, wrong age, or dose).
Three trained annotators with medical background participated in the annotation process. Interannotator agreement for these computations are shown in Table 7 and Table 8 in Appendix A.
## 3.2 Key Phrase-Based Annotation
The key phrase- and fact-based annotations use different ways of representing information in clinical notes. While the fact-based annotation compares semantic triples (e.g., "Back pain stopped 8 days ago" vs. "Low back pain started 8 years ago"), the key-phrase annotations involved labeling incorrect words and phrases; for instance: "back pain" (instead of "lower back pain"), "stopped" (instead of
"started"), or "8 days" (instead of "8 years"). This method is more conducive in a production environment where errors can be attributable to specific parts of the report; the same labeling method is often also used for feedback to the author of the note in our different human quality review settings.
In our annotation setups, the key phrase-based annotation operated on text span highlights, while the fact-based annotation required more steps as the annotators were required to write the system and reference facts based on the system and reference summaries before comparing their counts.
Using highlights, critical hallucinations and incorrect information can be identified; meanwhile omissions were marked by identifying a required insertion of information in a corresponding location of the note. However, unlike the previous annotation, repeats of the same incorrect facts may be counted more than once if they appear multiple times. The labels produced here were from CONSULT-FULL dataset (cf. Section 3.4), with a reported average agreement of critical hallucinations, omissions, and inaccuracies was at 0.80 F1 score, relaxed overlap between 12 annotator pairs.
## 3.3 Reference Scores
From the fact-based annotations, we compute the following reference scores:
$$\begin{array}{l}Factual Precision=\frac{\#Correct\,Facts}{\#System\,Facts}\end{array}$$ $$\begin{array}{l}Factual\ Recall=\frac{\#Correct\,Facts}{\#Reference\,Facts}\end{array}$$ $$\begin{array}{l}Hallucination\ Rate=\frac{\#Hallucinated\ Facts}{\#System\,Facts}\end{array}$$ $$\begin{array}{l}Omission\ Rate=\frac{\#Omitted\ Facts}{\#Reference\,Facts}\end{array}$$
- *System Facts = Correct + Incorrect + Hallucinated*
From the key phrase-based annotations, we compute the normalized hallucination and omission counts:
$\equiv\;\frac{\hbar}{\not{T}}$
HallucinationCount =
\#*Hallucinated key phrases*
\#*System Summary W ords*
$Omission Count=\frac{\#Omitted\ key\ phrases}{\#Reference\ Summary\ Words}$
## 3.4 Datasets
Publicly available datasets on medical note generation and clinical text summarization are rare compared to open-domain data. For this study, we use three main collections:
- The MTS-DIALOG collection of 1.7k pairs of doctor-patient dialogues and associated clinical notes (Ben Abacha et al., 2023). System summaries are generated using the BART
model (Lewis et al., 2020).
- The MEDIQA-RRS dataset includes 182 pairs of clinical notes and system summaries randomly selected from the MEDIQA-RRS
collection (Ben Abacha et al., 2021).
- An in-house collection of medical notes
(called CONSULT-FULL) from multiple specialties with system summaries generated using a pointer-generator transformer model from doctor-patient conversations (Enarvi et al., 2020).
We followed the fact-based annotation guidelines to annotate the MTS-DIALOG and MEDIQA-RRS datasets, and a random subset from the CONSULT-FULL collection, called CONSULT-FACTS.
To study the relevance of the automatic metrics to the individual sections of clinical notes, we also split the CONSULT-FULL collection into four subsets: CONSULTHPI, CONSULTASSESSMENT,
CONSULTEXAM, and CONSULTRESULTS, which include summaries associated with the HPI, Assessment, Exam, and Results sections, and we annotated them manually at a phrase level.
Table 1 provides statistics about the datasets.
## 4 Task-Specific Evaluation Metrics
We study four different types of evaluation metrics for the task of automatic clinical note generation, that take into account the specificities of the medical domain by: (i) using embeddings built from medical Knowledge graphs (e.g, UMLS), (ii)
adapting model-based metrics (e.g., BERTScore) by increasing the weights of medical terms, (iii)
fine-tuning a model-based metric on a large collection of clinical notes, and (iv) building linear ensembles based on normalization and averaging of different metrics.
## 4.1 Knowledge-Graph Embedding-Based Metrics
Our first approach, called MIST, relies on knowledge embeddings generated by a KnowledgeGraph Embedding (KGE)-based model. Knowledge graphs provide additional semantic information that can support language understanding, especially in the medical domain where both terminologies and facts might not be common enough to be captured by contextual embeddings.
To build medical KGE, we use a generative adversarial networks model (Cai and Wang, 2018)
trained on concepts and relations from the Unified Medical Language System (UMLS) (Lindberg et al., 1993; Bodenreider, 2004).
The MIST metric relies on the embeddings of the medical concepts recognized in the texts to compute the similarity between the reference clinical notes and the automatically generated summaries.
To link the clinical notes to the UMLS concepts, we extract medical concepts by combining the scispaCy (Neumann et al., 2019) and MedCAT (Kraljevic et al., 2021) entity linking models.
We compute the final recall-oriented MIST value using the graph-based embeddings (Gc) of each concept c recognized in the reference and system summaries and the cosine similarity, as follows, for a set of reference concepts R and a set of system concepts S:
MIST(S, R) = 1 |R| X c∈S maxr∈R cos(Gc, Gr) (1)
## 4.2 Finetuning-Based Metric
Our second approach relies on fine-tuning modelbased metrics on relevant large medical collections of family medicine and orthopaedic notes. In particular, we started with the BLEURT-512 model
(Sellam et al., 2020) and fine-tuned it using a quality score, derived from an assigned *error score*3 from an internal quality review grading. The derived *quality score* was calculated by the following equation:
quality = 1 −error_score
$$\frac{error\_score}{max\_sentlen(summary,reference)}$$
$$q u a l i t y=1-{\frac{1}{m a x_{-}s}}$$
(2)
A total of 6,367 family medicine and orthopaedic
encounters were used for fine-tuning. To maximize
3This error score is calculated by a weighted sum of critical
and non-critical errors, as well as spelling/grammar/style errors annotated by domain expert labelers. The weight scheme
is given in Appendix B, Table 9.
diverse pairings as well as to satisfy BLEURT's maximum sequence length constraint, we finetuned at the level of each note's HPI, EXAM, RESULTS, and ASSESSMENT sections (with empty sections removed), resulting in 17,852 pairs. We fine-tuned over one epoch at default parameters.
We call the resulting metric based on this model:
ClinicalBLEURT.
## 4.3 Customized Model-Based Metrics 4.3.1 Medical Weighted Evaluation Metrics
Our third approach relies on designing new customized model-based metrics that assign a higher weight to the term with a medical meaning. These medical weighted metrics will allow us to examine whether words with a medical meaning can be more indicative for sentence similarity than common words for the task of automated medical note generation. Specifically, we update the scoring policy of two popular evaluation metrics, by providing a higher weight to the words in the summaries that have a medical meaning:
(i) BARTScore (Yuan et al., 2021) which uses a seq-seq model to calculate the log probability of one text y given another text x, and
(ii) BERTScore (Zhang* et al., 2020) which computes a similarity score for each token in the candidate summary with each token in the reference.
For both metrics, firstly we identify all the words, in the candidate and in the reference summary, which have a clinical meaning defined in UMLS using the MedCAT toolkit (Kraljevic et al., 2021). We then modify the scoring policy of both evaluation metrics to a weighted scoring policy where the weight for all the medical words is higher to provide a stronger incentive to the evaluation model to take in consideration these words during the evaluation of a candidate summary. Specifically, the BARTScore metric is updated to:
$MedBARTScore=\sum_{i=1}^{m}w_{t}\log p(y_{t}|y<t,x)$ where $x$ is the source sequence and $y$ (3) ($y_{1},...,y_{m}$) are the tokens of the target sequence of
length m.
We also update the BERTScore for a pair of a reference summary x and candidate summary xˆ to:
$$M e d B E R T S c o r e P={\frac{1}{|{\hat{x}}|}}\sum_{{\hat{x}}_{i}\in{\hat{x}}}w_{x}\operatorname*{max}_{x_{j}}x_{i}^{\top}{\hat{x}}_{j}$$
where, for both metrics, w = 1 for all the nonmedical words and wt = 1 + α for all the words with a medical meaning, where α is an additional weight value for these words. After experimenting with different values in the [0.1, 1.5] range, we found that the best α value was 1.0 for the weight policy.
4.3.2 Sliding Window Policy The main disadvantage of the previously mentioned model-based metrics over the traditional evaluation metrics (e.g., ROUGE) is that they can only encode texts that have length less than the encode-limit of the pre-trained models that are based on. For example, the encode-limit for a BERT-based metric is 512 tokens. However, real-world summaries and clinical notes may contain more than 512 tokens. For instance, our analysis in the CONSULTFULL dataset shows that 31% of the summaries have more than 512 tokens. We, therefore, create a variation of the BERTScore metric where we use a sliding window approach with the offset size of 100 tokens to encode overlength summaries.
Our sliding window policy is to first split the initial sentence into segments of at most 512 tokens with an overlap size of 100 tokens. Afterward, we calculate the embeddings of these segments independently and concatenate the results to get the original document representation.
This metric will be referred to as MedBERTScore-SP in the Results section.
## 4.4 Ensemble Metrics
To take advantage of the different perspectives brought by knowledge graph-based metrics, contextual embedding-based metrics, and lexical metrics, we tested different ensembles of normalized metric values. We selected the top-2 performing ensemble metrics for further experiments; MIST*Comb*1 and
(4) $$\begin{array}{l}\small\mathbf{(5)}\end{array}$$ = $$\begin{array}{l}\small\mathbf{(6)}\end{array}$$ .
$MIST_{Comb2}$: $$Z_{m}(x)=\frac{x-\mu_{m}}{\sigma_{m}}\tag{4}$$ $$MIST_{Comb1}(x)=\frac{1}{3}\sum_{m\in C_{1}}Z_{m}(m(x))\tag{5}$$ $$MIST_{Comb2}(x)=\frac{1}{3}\sum_{m\in C_{2}}Z_{m}(m(x))\tag{6}$$ with $Z_{m}(x)$ the normalized $Z_{score}$ of a metric $m$
µm the mean value of m over the summaries set, σm the standard deviation of m, C1 = {MIST,
ROUGE-1-R, BERTScore} and C2 = {MIST,
ROUGE-1-R, BLEURT}
2579
## 5 Evaluation Setup
We used the deberta-xlarge-mnli model (He et al.,
2021) as the base model for BERTScore and the BLEURT-20 checkpoint for the BLEURT metric, that correlate better with human judgment than the default variants based on recent experiments. For BARTScore metric, we used the BART model that was trained on the ParaBank2 dataset (Hu et al.,
2019) which was provided by the authors.
From the designed and tested 50+ metrics and variants (e.g. our new metrics and variants, opendomain metrics, ensemble metrics), we selected the top 21 metrics to study and analyze their performance on the different datasets. The selection was based on the performance of these metrics and their Pearson correlation scores with human judgments on the MTS-DIALOG and the CONSULT-FULL datasets. Our first tests also included open-domain fact-based metrics such as FactCC (Krysci ´ nski et al. ´ , 2019) (trained on the CNN/DailyMail dataset) and QA metrics such as QUALS (Nan et al., 2021) (developed using XSUM and CNN/DailyMail) but they did not perform well due to the differences between open-domain and clinical questions/answers.
The experiments were performed on one 80GB
NVidia A100 GPU.
## 6 Performance Of Evaluation Metrics
We compute the Pearson correlation scores between the automatic metrics and the reference scores. When both manual factual scores (F), hallucination (H), and omission rates (O) are available, we compute an aggregate score:
$$A g g r e g a t e S c o r e=\frac{1}{4}(2F-H-O)\ \ \ \ (7)$$
The intuition behind this score is that both omissions (O) and hallucinations (H) are critical criteria but they need to be taken into account in the context of factual correctness (F).
The results on the MTS-DIALOG dataset are presented in Table 2, where the ensemble metric MIST-Comb1 achieved the best correlation with manual scores on Factual F1, Factual Recall, and Omission Rate, with respective correlation values of 0.61, 0.64, and -0.71. The new MedBARTScore metric achieved the best correlation with human assessment for both Factual Precision and Hallucination Rate with 0.46 and -0.46 correlation values.
Table 3 presents the Pearson correlations between the automatic metrics and reference scores on the CONSULT-FACTS dataset. Compared with the results on the MTS dataset, ROUGE-N variants achieved high correlation scores in all categories.
In particular, ROUGE-1-R and ROUGE-L-R have the best scores for Factual F1 and Factual Recall.
ROUGE-1-F and ROUGE-L-F have the best scores for Factual Precision. ROUGE-1-P and ROUGEL-P have the best correlations with the Hallucination Rate. BERTScore-R and the ensemble metric MIST-Comb2 achieved the highest correlations with manual scores for the Omission Rate.
On the larger CONSULT-FULL dataset, ROUGE-N results followed a similar pattern on the CONSULTHPI, CONSULTASSESSMENT,
CONSULTEXAM, and CONSULTRESULTS subsets, as presented in Table 5, with ROUGE-1-P,
ROUGE-2-P, and ROUGE-L-P having the highest correlations with the Omission Rate in the CONSULTASSESSMENT dataset, and the Hallucination Rate in the CONSULTRESULTS dataset. This could be explained in part by the fact that the reference notes in the CONSULT-FULL dataset have been created from initial drafts produced by summarization models which increases the likelihood of word overlap.
The fine-tuned ClinicalBLEURT metric achieves the highest correlation scores for the Hallucination Rate in the CONSULTHPI, CONSULTEXAM,
and CONSULTRESULTS datasets. The new medical metrics MedBERTScore-P and MedBERTScorePS have the highest correlations for Hallucination and Omission Rates on the CONSULTASSESSMENT
and CONSULTEXAM datasets, respectively.
Table 4 presents the Pearson correlations between the automatic metrics and reference scores on the MEDIQA-RRS dataset, where ROUGE-1-
P has the highest correlation with Factual Precision and Hallucination Rate with 0.40 and -0.39. The new MIST metric has the highest correlation scores with Factual Recall and Factual F1 with 0.73 and 0.66, respectively.
Table 6 presents the average scores of the 21 metrics across all datasets. On specific evaluation criteria, the new MedBARTScore metric performed the best on average on correlating with low Hallucinate Rate, with a correlation score of
-0.38, and Factual Precision with an average correlation score of 0.45. Both MIST-Comb2 and BERTScore-R have the highest Aggregate Score
| Reference | ↑ Factual P | ↑ Factual R | ↑ Factual F1 | ↓ Hallucination | ↓ Omission | ↑ Aggregate Score |
|----------------------------------|---------------|---------------|----------------|-------------------|--------------|---------------------|
| Automatic SOTA Metrics ROUGE-1-P | 0.14 | -0.09 | -0.04 | -0.16 | 0.06 | 0.00 |
| ROUGE-1-R | 0.10 | 0.57 | 0.53 | 0.02 | -0.60 | 0.41 |
| ROUGE-1-F | 0.13 | 0.39 | 0.40 | -0.08 | -0.44 | 0.33 |
| ROUGE-2-P | 0.12 | 0.05 | 0.07 | -0.12 | -0.12 | 0.10 |
| ROUGE-2-R | 0.12 | 0.34 | 0.34 | -0.09 | -0.39 | 0.29 |
| ROUGE-2-F | 0.12 | 0.28 | 0.29 | -0.10 | -0.33 | 0.25 |
| ROUGE-L-P | 0.13 | -0.08 | -0.05 | -0.15 | 0.07 | 0.00 |
| ROUGE-L-R | 0.10 | 0.56 | 0.51 | 0.02 | -0.58 | 0.40 |
| ROUGE-L-F | 0.13 | 0.38 | 0.38 | -0.08 | -0.41 | 0.31 |
| BERTScore-P | 0.10 | 0.11 | 0.15 | -0.18 | -0.23 | 0.18 |
| BERTScore-R | 0.07 | 0.62 | 0.59 | 0.02 | -0.71 | 0.47 |
| BERTScore-F | 0.09 | 0.44 | 0.45 | -0.08 | -0.56 | 0.38 |
| BLEURT | 0.11 | 0.48 | 0.47 | -0.08 | -0.59 | 0.40 |
| BARTScore | 0.37 | 0.09 | 0.19 | -0.34 | -0.26 | 0.25 |
| New Metrics MedBERTScore-P | 0.28 | -0.16 | -0.02 | -0.27 | -0.32 | 0.14 |
| MedBERTScore-SP | 0.28 | -0.16 | -0.02 | -0.27 | -0.32 | 0.14 |
| MedBARTScore | 0.46 | 0.13 | 0.24 | -0.46 | -0.27 | 0.30 |
| ClinicalBLEURT | 0.19 | 0.22 | 0.19 | -0.06 | -0.20 | 0.16 |
| MIST | 0.02 | 0.46 | 0.45 | 0.08 | -0.51 | 0.33 |
| MIST-Comb1 | 0.08 | 0.64 | 0.61 | 0.05 | -0.71 | 0.47 |
| MIST-Comb2 | 0.09 | 0.60 | 0.58 | 0.01 | -0.68 | 0.46 |
| Reference | ↑ Factual P | ↑ Factual R | ↑ Factual F1 | ↓ Hallucination | ↓ Omission | ↑ Aggregate Score |
|----------------------------------|---------------|---------------|----------------|-------------------|--------------|---------------------|
| Automatic SOTA Metrics ROUGE-1-P | 0.63 | 0.32 | 0.50 | -0.73 | -0.46 | 0.55 |
| ROUGE-1-R | 0.59 | 0.80 | 0.79 | -0.39 | -0.84 | 0.70 |
| ROUGE-1-F | 0.70 | 0.70 | 0.78 | -0.55 | -0.79 | 0.73 |
| ROUGE-2-P | 0.56 | 0.33 | 0.45 | -0.60 | -0.43 | 0.48 |
| ROUGE-2-R | 0.55 | 0.73 | 0.71 | -0.39 | -0.78 | 0.65 |
| ROUGE-2-F | 0.62 | 0.62 | 0.68 | -0.49 | -0.70 | 0.64 |
| ROUGE-L-P | 0.63 | 0.33 | 0.51 | -0.73 | -0.47 | 0.56 |
| ROUGE-L-R | 0.60 | 0.80 | 0.79 | -0.40 | -0.84 | 0.71 |
| ROUGE-L-F | 0.70 | 0.70 | 0.78 | -0.56 | -0.79 | 0.73 |
| BERTScore-P | 0.62 | 0.47 | 0.58 | -0.56 | -0.60 | 0.58 |
| BERTScore-R | 0.60 | 0.80 | 0.78 | -0.37 | -0.85 | 0.70 |
| BERTScore-F | 0.66 | 0.69 | 0.74 | -0.49 | -0.79 | 0.69 |
| BLEURT | 0.61 | 0.67 | 0.71 | -0.49 | -0.76 | 0.67 |
| BARTScore | 0.61 | 0.34 | 0.51 | -0.66 | -0.41 | 0.52 |
| New Metrics MedBERTScore-P | 0.63 | 0.47 | 0.59 | -0.57 | -0.60 | 0.59 |
| MedBERTScore-SP | 0.63 | 0.47 | 0.59 | -0.57 | -0.61 | 0.59 |
| MedBARTScore | 0.61 | 0.35 | 0.51 | -0.67 | -0.42 | 0.53 |
| ClinicalBLEURT | 0.04 | 0.15 | 0.08 | 0.09 | -0.15 | 0.05 |
| MIST | 0.08 | 0.44 | 0.31 | 0.08 | -0.44 | 0.25 |
| MIST-Comb1 | 0.48 | 0.78 | 0.72 | -0.26 | -0.81 | 0.63 |
| MIST-Comb2 | 0.53 | 0.80 | 0.75 | -0.33 | -0.85 | 0.67 |
Table 3: CONSULT-F**ACTS**: Pearson's correlation coefficients between the automatic and manual scores.
| Reference | ↑ Factual P | ↑ Factual R | ↑ Factual F1 | ↓ Hallucination | ↓ Omission | ↑ Aggregate Score |
|----------------------------------|---------------|---------------|----------------|-------------------|--------------|---------------------|
| Automatic SOTA Metrics ROUGE-1-P | 0.40 | -0.10 | 0.00 | -0.39 | -0.30 | 0.17 |
| ROUGE-1-R | 0.22 | 0.55 | 0.57 | -0.22 | -0.74 | 0.53 |
| ROUGE-1-F | 0.31 | 0.39 | 0.47 | -0.31 | -0.69 | 0.49 |
| ROUGE-2-P | 0.34 | 0.04 | 0.10 | -0.32 | -0.36 | 0.22 |
| ROUGE-2-R | 0.20 | 0.46 | 0.47 | -0.18 | -0.66 | 0.45 |
| ROUGE-2-F | 0.25 | 0.37 | 0.41 | -0.23 | -0.63 | 0.42 |
| ROUGE-L-P | 0.37 | -0.11 | -0.02 | -0.36 | -0.29 | 0.15 |
| ROUGE-L-R | 0.20 | 0.54 | 0.55 | -0.21 | -0.73 | 0.51 |
| ROUGE-L-F | 0.29 | 0.38 | 0.44 | -0.29 | -0.69 | 0.47 |
| BERTScore-P | 0.31 | -0.07 | 0.03 | -0.30 | -0.33 | 0.17 |
| BERTScore-R | 0.17 | 0.56 | 0.58 | -0.21 | -0.73 | 0.53 |
| BERTScore-F | 0.29 | 0.32 | 0.38 | -0.30 | -0.64 | 0.43 |
| BLEURT | 0.33 | 0.46 | 0.51 | -0.29 | -0.69 | 0.50 |
| BARTScore | 0.38 | 0.15 | 0.23 | -0.37 | -0.39 | 0.31 |
| New Metrics MedBERTScore-P | 0.32 | -0.04 | 0.05 | -0.31 | -0.35 | 0.19 |
| MedBERTScore-SP | 0.32 | -0.04 | 0.05 | -0.31 | -0.35 | 0.19 |
| MedBARTScore | 0.29 | 0.03 | 0.13 | -0.28 | -0.30 | 0.21 |
| ClinicalBLEURT | 0.27 | 0.11 | 0.10 | -0.26 | -0.09 | 0.14 |
| MIST | 0.11 | 0.73 | 0.66 | -0.10 | -0.52 | 0.49 |
| MIST-Comb1 | 0.18 | 0.67 | 0.66 | -0.19 | -0.72 | 0.56 |
| MIST-Comb2 | 0.24 | 0.64 | 0.65 | -0.23 | -0.72 | 0.56 |
Table 4: **MEDIQA-RRS**: Pearson's correlation coefficients between the automatic and manual scores. Best results are highlighted in bold and second best are underlined.
| HPI Section | Assessment Section | Exam Section | Results Section | | | | | |
|------------------------|----------------------|----------------|-------------------|---------------|----------|---------------|----------|-------|
| Hallucination | Omission | Hallucination | Omission | Hallucination | Omission | Hallucination | Omission | |
| SOTA Metrics ROUGE-1-P | -0.23 | -0.21 | -0.45 | -0.30 | -0.19 | -0.17 | -0.18 | -0.23 |
| ROUGE-1-R | -0.20 | -0.18 | -0.33 | -0.21 | -0.19 | -0.15 | -0.09 | -0.19 |
| ROUGE-1-F | -0.24 | -0.22 | -0.37 | -0.25 | -0.21 | -0.18 | -0.11 | -0.20 |
| ROUGE-2-P | -0.25 | -0.21 | -0.46 | -0.30 | -0.24 | -0.17 | -0.18 | -0.24 |
| ROUGE-2-R | -0.22 | -0.19 | -0.37 | -0.24 | -0.21 | -0.18 | -0.12 | -0.23 |
| ROUGE-2-F | -0.25 | -0.21 | -0.41 | -0.27 | -0.23 | -0.18 | -0.13 | -0.22 |
| ROUGE-L-P | -0.23 | -0.21 | -0.45 | -0.30 | -0.20 | -0.17 | -0.18 | -0.23 |
| ROUGE-L-R | -0.20 | -0.18 | -0.33 | -0.21 | -0.19 | -0.15 | -0.09 | -0.20 |
| ROUGE-L-F | -0.24 | -0.22 | -0.38 | -0.25 | -0.21 | -0.18 | -0.11 | -0.20 |
| BERTScore-P | -0.23 | -0.21 | -0.46 | -0.27 | -0.22 | -0.20 | -0.12 | -0.23 |
| BERTScore-R | -0.22 | -0.18 | -0.31 | -0.19 | -0.22 | -0.16 | -0.05 | -0.16 |
| BERTScore-F | -0.24 | -0.20 | -0.39 | -0.23 | -0.22 | -0.19 | -0.08 | -0.20 |
| BLEURT | -0.20 | -0.20 | -0.37 | -0.23 | -0.18 | -0.13 | -0.10 | -0.21 |
| BARTScore | -0.26 | -0.21 | -0.42 | -0.29 | -0.27 | -0.19 | -0.16 | -0.21 |
| New Metrics MedBERT-P | -0.23 | -0.21 | -0.47 | -0.27 | -0.21 | -0.20 | -0.10 | -0.23 |
| MedBERT-SP | -0.23 | -0.22 | -0.47 | -0.28 | -0.22 | -0.20 | -0.10 | -0.23 |
| MedBART | -0.26 | -0.23 | -0.46 | -0.29 | -0.25 | -0.19 | -0.16 | -0.23 |
| ClinicalBLEURT | -0.30 | -0.19 | -0.29 | -0.25 | -0.31 | -0.18 | -0.25 | -0.19 |
| MIST | -0.07 | -0.05 | -0.12 | -0.16 | -0.09 | -0.09 | 0.02 | -0.02 |
| MIST-Comb1 | -0.18 | -0.15 | -0.27 | -0.20 | -0.19 | -0.15 | -0.04 | -0.13 |
| MIST-Comb2 | -0.18 | -0.17 | -0.30 | -0.22 | -0.18 | -0.15 | -0.06 | -0.15 |
Table 5: CONSULT-FULL: Pearson's correlation coefficients between the automatic and manual scores on the CONSULTHPI, CONSULTASSESSMENT, CONSULTEXAM, and CONSULTRESULTS datasets. Unlike Tables 2-4 which present the fact-based results, here, Hallucination and Omission are measured at the key-phrase level.
SOTA Metrics ↑ Factual P ↑ Factual R ↑ Factual F1 ↓ Hallucination ↓ Omission ↑ **Aggregate Score**
ROUGE-1-P 0.39 0.04 0.15 -0.35 -0.23 0.22 ROUGE-1-R 0.30 0.64 0.63 -0.20 -0.46 0.48
ROUGE-1-F 0.38 0.49 0.55 -0.27 -0.43 0.45
ROUGE-2-P 0.34 0.14 0.21 -0.31 -0.27 0.25 ROUGE-2-R 0.29 0.51 0.51 -0.23 -0.41 0.41
ROUGE-2-F 0.33 0.42 0.46 -0.26 -0.39 0.39
ROUGE-L-P 0.38 0.04 0.15 -0.34 -0.23 0.22 ROUGE-L-R 0.30 0.63 0.62 -0.20 -0.45 0.47
ROUGE-L-F 0.37 0.49 0.53 -0.27 -0.42 0.44
BERTScore-P 0.34 0.17 0.25 -0.30 -0.31 0.28
BERTScore-R 0.28 0.66 0.65 -0.19 -0.47 **0.49**
BERTScore-F 0.35 0.48 0.52 -0.26 -0.44 0.44
BLEURT 0.35 0.54 0.56 -0.25 -0.44 0.45
BARTScore **0.45** 0.19 0.31 -0.37 -0.29 0.32
New Metrics MedBERTScore-P 0.41 0.09 0.20 -0.32 -0.33 0.26
MedBERTScore-SP 0.41 0.09 0.20 -0.32 -0.33 0.27
MedBARTScore **0.45** 0.17 0.29 **-0.38** -0.28 0.31
ClinicalBLEURT 0.17 0.16 0.13 -0.08 -0.15 0.12
MIST 0.07 0.55 0.47 -0.02 -0.28 0.31
MIST-Comb1 0.25 **0.70 0.66** -0.15 -0.45 0.48
MIST-Comb2 0.29 0.68 **0.66** -0.18 -0.46 **0.49**
of 0.49 followed by MIST-Comb1 and ROUGE1-R. The same set of metrics has similar positive results on the MTS-DIALOG, MEDIQARRS, and CONSULT-FACTS datasets. Using the dataset-specific Aggregate Score, we observe that MIST-Comb1, MIST-Comb2, BERTScore-R, and ROUGE-1-R perform well on Factual correctness while maintaining a stable/good performance on being indicative of lower hallucination and omission rates. These datasets are substantially different from each other: long clinical notes for CONSULTFACTS (with 214 words/note), concise impression sections from radiology reports for MEDIQARRS(with 18 words/summary), and different types of sections from different specialities for MTSDIALOG (15 words/summary), which suggests that this set of metrics can be relied upon for the evaluation of clinical note generation.
## 7 Conclusion
While finding a relevant and generic evaluation metric for NLG systems remains a challenging task, our study shows that the solution to the problem is likely to be domain- and task-specific. In particular, metrics that did well on capturing factual accuracy did not necessarily capture critical aspects in clinical note generation such as hallucinations and key medical fact omissions. Our experiments also show that language-model based metrics and metric ensembles can outperform SOTA N-gram based measures such as ROUGE when reference summaries are not biased. The extensive measurements and new metrics evaluated in this paper are valuable for guiding decisions on which metrics will be most effective for researchers to use going forward in their Automated Medical Note Generation scenarios.
## Limitations
While our research and empirical results support specific evaluation metrics for the task of clinical note generation according to a given evaluation criteria, more results, including testing on additional datasets are needed to further validate these findings. Our manual annotations followed clear and structured guidelines, but could still contain some level of annotator bias and have an average Pearson inter-annotator-agreement of 0.67 (Tables 7 and 8).
## Ethics Statement
No protected health information will be released with the created annotations. Annotators were paid a fair hourly wage consistent with the practice of the state of hire.
## Acknowledgements
We thank the anonymous reviewers and area chair for their valuable feedback. We also thank our annotators for their help with the manual evaluation.
## References
Brian G. Arndt, John W. Beasley, Michelle D. Watkinson, Jonathan L. Temte, Wen-Jan Tuan, Christine A.
Sinsky, and Valerie J. Gilchrist. 2017. Tethered to the ehr: Primary care physician workload assessment using ehr event log data and time-motion observations.
The Annals of Family Medicine, 15(5):419–426.
Asma Ben Abacha, Yassine Mrabet, Yuhao Zhang, Chaitanya Shivade, Curtis P. Langlotz, and Dina DemnerFushman. 2021. Overview of the MEDIQA 2021 shared task on summarization in the medical domain. In Proceedings of the 20th Workshop on Biomedical Language Processing, BioNLP@NAACL-HLT 2021, Online, June 11, 2021, pages 74–85. Association for Computational Linguistics.
Asma Ben Abacha, Wen-wai Yim, Yadan Fan, and Thomas Lin. 2023. An empirical study of clinical note generation from doctor-patient encounters. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 2291–2302, Dubrovnik, Croatia. Association for Computational Linguistics.
Manik Bhandari, Pranav Narayan Gour, Atabak Ashfaq, Pengfei Liu, and Graham Neubig. 2020. Reevaluating evaluation in text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 9347–9359, Online. Association for Computational Linguistics.
Olivier Bodenreider. 2004. The unified medical language system (umls): integrating biomedical terminology. *Nucleic Acids Res.*, 32(Database-Issue):267–
270.
Liwei Cai and William Yang Wang. 2018. KBGAN: adversarial learning for knowledge graph embeddings.
In *Proceedings of the 2018 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA,
June 1-6, 2018, Volume 1 (Long Papers), pages 1470–
1480. Association for Computational Linguistics.
Pengshan Cai, Fei Liu, Adarsha Bajracharya, Joe Sills, Alok Kapoor, Weisong Liu, Dan Berlowitz, David Levy, Richeek Pradhan, and Hong Yu. 2022. Generation of patient after-visit summaries to support physicians. In *Proceedings of the 29th International Conference on Computational Linguistics*, pages 6234–
6247, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Daniel Deutsch, Rotem Dror, and Dan Roth. 2022. Reexamining system-level correlations of automatic summarization evaluation metrics. In *Proceedings of* the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 6038–
6052. Association for Computational Linguistics.
Esin Durmus, He He, and Mona T. Diab. 2020. FEQA:
A question answering evaluation framework for faithfulness assessment in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 5055–5070. Association for Computational Linguistics.
Seppo Enarvi, Marilisa Amoia, Miguel Del-Agua Teba, Brian Delaney, Frank Diehl, Stefan Hahn, Kristina Harris, Liam McGrath, Yue Pan, Joel Pinto, Luca Rubini, Miguel Ruiz, Gagandeep Singh, Fabian Stemmer, Weiyi Sun, Paul Vozila, Thomas Lin, and Ranjani Ramamurthy. 2020. Generating medical reports from patient-doctor conversations using sequence-tosequence models. In Proceedings of the First Workshop on Natural Language Processing for Medical Conversations, pages 22–30, Online. Association for Computational Linguistics.
Alexander R. Fabbri, Wojciech Kryscinski, Bryan McCann, Caiming Xiong, Richard Socher, and Dragomir R. Radev. 2021a. Summeval: Reevaluating summarization evaluation. *Trans. Assoc.*
Comput. Linguistics, 9:391–409.
Alexander R. Fabbri, Wojciech Kryscinski, Bryan McCann, Caiming Xiong, Richard Socher, and Dragomir R. Radev. 2021b. Summeval: Reevaluating summarization evaluation. *Trans. Assoc.*
Comput. Linguistics, 9:391–409.
Yvette Graham. 2015. Re-evaluating automatic summarization with BLEU and 192 shades of ROUGE.
In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 128–
137, Lisbon, Portugal. Association for Computational Linguistics.
Hardy, Shashi Narayan, and Andreas Vlachos. 2019.
Highres: Highlight-based reference-less evaluation of summarization. In *Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August* 2, 2019, Volume 1: Long Papers, pages 3381–3392.
Association for Computational Linguistics.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. Deberta: Decoding-enhanced bert with disentangled attention. In *International* Conference on Learning Representations.
David M. Howcroft, Anya Belz, Miruna-Adriana Clinciu, Dimitra Gkatzia, Sadid A Hasan, Saad Mahamood, Simon Mille, Emiel van Miltenburg, Sashank Santhanam, and Verena Rieser. 2020.
Twenty years of confusion in human evaluation: NLG
needs evaluation sheets and standardised definitions.
In Proceedings of the 13th International Conference on Natural Language Generation, INLG 2020, Dublin, Ireland, December 15-18, 2020, pages 169–
182. Association for Computational Linguistics.
J. Edward Hu, Abhinav Singh, Nils Holzenberger, Matt Post, and Benjamin Van Durme. 2019. Large-scale, diverse, paraphrastic bitexts via sampling and clustering. In *Proceedings of the 23rd Conference on* Computational Natural Language Learning (CoNLL),
pages 44–54, Hong Kong, China. Association for Computational Linguistics.
Tom Knoll, Francesco Moramarco, Alex Papadopoulos Korfiatis, Rachel Young, Claudia Ruffini, Mark Perera, Christian Perstl, Ehud Reiter, Anya Belz, and Aleksandar Savkov. 2022. User-driven research of medical note generation software. In *Proceedings of* the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 385–394, Seattle, United States. Association for Computational Linguistics.
Zeljko Kraljevic, Thomas Searle, Anthony Shek, Lukasz Roguski, Kawsar Noor, Daniel Bean, Aurelie Mascio, Leilei Zhu, Amos A Folarin, Angus Roberts, Rebecca Bendayan, Mark P Richardson, Robert Stewart, Anoop D Shah, Wai Keong Wong, Zina Ibrahim, James T Teo, and Richard J B Dobson. 2021. Multidomain clinical natural language processing with MedCAT: The medical concept annotation toolkit.
Artif. Intell. Med., 117:102083.
Kundan Krishna, Sopan Khosla, Jeffrey Bigham, and Zachary C. Lipton. 2021. Generating SOAP notes from doctor-patient conversations using modular summarization techniques. In *Proceedings of the* 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4958–4972, Online. Association for Computational Linguistics.
Wojciech Krysci ´ nski, Bryan McCann, Caiming Xiong, ´
and Richard Socher. 2019. Evaluating the factual consistency of abstractive text summarization. arXiv preprint arXiv:1910.12840.
Stephan Kudyba. 2010. *Healthcare Informatics: Improving Efficiency and Productivity*.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,*
ACL 2020, Online, July 5-10, 2020, pages 7871–7880.
Association for Computational Linguistics.
Chin-Yew Lin. 2004a. Looking for a few good metrics:
Automatic summarization evaluation - how many samples are enough? In *Proceedings of the Fourth* NTCIR Workshop on Research in Information Access Technologies Information Retrieval, Question Answering and Summarization, NTCIR-4, National Center of Sciences, Tokyo, Japan, June 2-4, 2004.
National Institute of Informatics (NII).
Chin-Yew Lin. 2004b. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain.
Association for Computational Linguistics.
Donald A. Lindberg, Betsy L. Humphreys, and Alexa T.
McCray. 1993. The unified medical language system.
Methods of Information in Medicine, 32:281–291.
Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan T. McDonald. 2020. On faithfulness and factuality in abstractive summarization. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 1906–1919. Association for Computational Linguistics.
Francesco Moramarco, Alex Papadopoulos-Korfiatis, Mark Perera, Damir Juric, Jack Flann, Ehud Reiter, Anya Belz, and Aleksandar Savkov. 2022. Human evaluation and correlation with automatic metrics in consultation note generation. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL
2022, Dublin, Ireland, May 22-27, 2022, pages 5739–
5754. Association for Computational Linguistics.
Feng Nan, Cícero Nogueira dos Santos, Henghui Zhu, Patrick Ng, Kathleen R. McKeown, Ramesh Nallapati, Dejiao Zhang, Zhiguo Wang, Andrew O. Arnold, and Bing Xiang. 2021. Improving factual consistency of abstractive summarization via question answering.
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1:
Long Papers), Virtual Event, August 1-6, 2021, pages 6881–6894. Association for Computational Linguistics.
Mark Neumann, Daniel King, Iz Beltagy, and Waleed Ammar. 2019. ScispaCy: Fast and robust models for biomedical natural language processing. In *Proceedings of the 18th BioNLP Workshop and Shared* Task, pages 319–327, Florence, Italy. Association for Computational Linguistics.
Karolina Owczarzak, John M. Conroy, Hoa Trang Dang, and Ani Nenkova. 2012. An assessment of the accuracy of automatic evaluation in summarization. In Proceedings of Workshop on Evaluation Metrics and System Comparison for Automatic Summarization, pages 1–9, Montréal, Canada. Association for Computational Linguistics.
Artidoro Pagnoni, Vidhisha Balachandran, and Yulia Tsvetkov. 2021. Understanding factuality in abstractive summarization with FRANK: A benchmark for factuality metrics. *CoRR*, abs/2104.13346.
Thomas H. Payne, W. David Alonso, J. Andrew Markiel, Kevin Lybarger, and Andrew A. White. 2018. Using voice to create hospital progress notes: Description of a mobile application and supporting system integrated with a commercial electronic health record.
Journal of Biomedical Informatics, 77:91–96.
Maxime Peyrard. 2019. Studying summarization evaluation metrics in the appropriate scoring range. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5093–
5100, Florence, Italy. Association for Computational Linguistics.
Thibault Sellam, Dipanjan Das, and Ankur P. Parikh.
2020. BLEURT: learning robust metrics for text generation. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,*
ACL 2020, Online, July 5-10, 2020, pages 7881–7892.
Association for Computational Linguistics.
Alex Wang, Kyunghyun Cho, and Mike Lewis. 2020.
Asking and answering questions to evaluate the factual consistency of summaries. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 5008–5020. Association for Computational Linguistics.
Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021.
Bartscore: Evaluating generated text as text generation. In *Advances in Neural Information Processing* Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 27263–27277.
Shiyue Zhang, David Wan, and Mohit Bansal. 2022.
Extractive is not faithful: An investigation of broad unfaithfulness problems in extractive summarization.
CoRR, abs/2209.03549.
Tianyi Zhang*, Varsha Kishore*, Felix Wu*, Kilian Q.
Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In *International* Conference on Learning Representations.
Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Christian M. Meyer, and Steffen Eger. 2019. MoverScore:
Text generation evaluating with contextualized embeddings and earth mover distance. In Proceedings of the 2019 Conference on Empirical Methods in
## A Inter-Annotator Agreements (Iaa) B Finetuning-Based Metric: Weight Scheme
annotations kappa f1 f1(tol=1) f1(tol=2) pearson crit-ommissions 0.29 0.48 0.65 0.75 0.75 hallucinations 0.46 0.73 0.87 0.92 0.97
correct-facts 0.12 0.13 0.30 0.40 0.79
incorrect-facts 0.58 0.73 0.90 1.00 0.89 annotations kappa f1 f1(tol=1) f1(tol=2) pearson crit-ommissions 0.26 0.34 0.66 0.85 0.81 hallucinations 0.36 0.76 0.96 0.98 0.34
correct-facts 0.07 0.16 0.60 0.82 0.76
incorrect-facts 0.06 0.64 0.79 0.90 0.07
Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 563–578, Hong Kong, China. Association for Computational Linguistics.
Table 8: Averaged pairwise IAA for the annotation of 34 summary-note pairs from the MEDIQA-RRS dataset.
error_type original weight normalized weight critical 3 1 non-critical 1 13 spelling/grammar 141 12 Table 9: Error score weights used in production for evaluating produced notes during a QA review. The normalized versions of the weights are used in our calculations so that the number of errors will not exceed over 1 per sentence unless there is more than 1 critical error.
Table 7: Averaged pairwise IAA for the annotation of 20 transcript-section pairs from the CONSULT-FACTS
dataset.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations
✓ A2. Did you discuss any potential risks of your work?
Limitations
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Not applicable. Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
3.4
## C ✓ **Did You Run Computational Experiments?** 6
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? No response.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
No response.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
No response.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
No response.
D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
3
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
3
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
3 D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
hao-etal-2023-rethinking | Rethinking Translation Memory Augmented Neural Machine Translation | https://aclanthology.org/2023.findings-acl.162 | This paper rethinks translation memory augmented neural machine translation (TM-augmented NMT) from two perspectives, i.e., a probabilistic view of retrieval and the variance-bias decomposition principle. The finding demonstrates that TM-augmented NMT is good at the ability of fitting data (i.e., lower bias) but is more sensitive to the fluctuations in the training data (i.e., higher variance), which provides an explanation to a recently reported contradictory phenomenon on the same translation task: TM-augmented NMT substantially advances NMT without TM under the high resource scenario whereas it fails under the low resource scenario. Then this paper proposes a simple yet effective TM-augmented NMT model to promote the variance and address the contradictory phenomenon. Extensive experiments show that the proposed TM-augmented NMT achieves consistent gains over both conventional NMT and existing TM-augmented NMT under two variance-preferable (low resource and plug-and-play) scenarios as well as the high resource scenario. | # Rethinking Translation Memory Augmented Neural Machine Translation
Hongkun Hao1∗ Guoping Huang2 **Lemao Liu**2†
Zhirui Zhang2 Shuming Shi2 **Rui Wang**1†
1Shanghai Jiao Tong University {haohongkun, wangrui12}@sjtu.edu.cn 2Tencent AI Lab {donkeyhuang, redmondliu, shumingshi}@tencent.com [email protected]
## Abstract
This paper rethinks translation memory augmented neural machine translation (TMaugmented NMT) from two perspectives, i.e., a probabilistic view of retrieval and the variancebias decomposition principle. The finding demonstrates that TM-augmented NMT is good at the ability of fitting data (i.e., lower bias) but is more sensitive to the fluctuations in the training data (i.e., higher variance),
which provides an explanation to a recently reported contradictory phenomenon on the same translation task: TM-augmented NMT substantially advances vanilla NMT under the high-resource scenario whereas it fails under the low-resource scenario. Then we propose a simple yet effective TM-augmented NMT
model to promote the variance and address the contradictory phenomenon. Extensive experiments show that the proposed TMaugmented NMT achieves consistent gains over both conventional NMT and existing TM-augmented NMT under two variancepreferable (low-resource and plug-and-play)
scenarios as well as the high-resource scenario.
## 1 Introduction
The effectiveness of Translation Memory (TM)
in Machine Translation has long been recognized (Garcia, 2009; Koehn and Senellart, 2010; Utiyama et al., 2011; Wang et al., 2013; Liu et al.,
2019), because a TM retrieved from a bilingual dataset (i.e., training data or an external dataset)
may provide valuable knowledge for the source sentence to be translated. Many notable approaches recently have been proposed to enhance neural machine translation (NMT) by using a TM (Feng et al., 2017; Gu et al., 2018; Cao et al., 2020; Hoang et al., 2022b; Cai et al., 2021; Huang et al., 2021).
For example, on the standard JRC-Acquis task, TM-augmented NMT achieves substantial
∗Partial work was done when Hongkun Hao was interning at Tencent AI Lab.
†Lemao Liu and Rui Wang are corresponding authors.
| Model | High-Resource | Low-Resource |
|---------|-----------------|----------------|
| w/o TM | 60.83 | 54.54 |
| w/ TM | 63.76 ↑ | 53.92 ↓ |
Table 1: Testing BLEU comparison on JRC-Acquis German⇒English task. w/o TM and w/ TM denote the vanilla Transformer and TM-augmented Transformer, respectively; High-Resource and Low-Resource denote full and quarter train data are used for NMT training and TM retrieval.
gains over the vanilla NMT (without TM) under the conventional high-resource training scenario.
Unfortunately, Cai et al. (2021) surprisingly find that TM-augmented NMT fails to advance the vanilla NMT model on the same task under a lowresource scenario, if a quarter of the full data is used for training and TM retrieval, as reproduced in Table 1. Due to the lack of theoretical insights, it is unclear the reason why such a contradictory phenomenon happens. This motivates us to rethink the working mechanism of TM-augmented NMT
as well as its statistical principle.
In this paper, we first cast TM-augmented NMT
as an approximation of a latent variable model where the retrieved TM is the latent variable through a probabilistic view of retrieval. From this statistical viewpoint, we identify that the success of such an approximation depends on the variance of TM-augmented NMT with respect to the latent variable. Then, we empirically estimate the variance of TM-augmented NMT from the principle of variance-bias decomposition in the learning theory. Our findings demonstrate that TMaugmented NMT is worse than the vanilla NMT
in terms of variance which indicates the sensitivity to fluctuations in the training set, although TMaugmented NMT is better in terms of the bias which indicates the ability of fitting data. The finding about the variance takes the responsibility of the contradictory phenomenon in Table 1, because limited training data may amplify its negative effect on variance (Vapnik, 1999; Niyogi and Girosi, 1996; Bishop and Nasrabadi, 2006).
To better trade off the variance and the bias, we further propose a simple yet effective method for TM-augmented NMT. The proposed method is general to be applied on top of any TM-augmented NMT models. To validate the effectiveness of the proposed approach, we conduct extensive experiments on several translation tasks under different scenarios including lowresource scenario, plug-and-play scenario, and high-resource scenario.
Contributions of this paper are three-fold:
- It rethinks and analyzes the variance of TMaugmented NMT models from the probabilistic view of retrieval and the bias-variance decomposition perspective.
- It proposes a simple yet effective lightweight network to ensemble TM-augmented NMT models, which better trades off variance and bias.
- Its experiments show the effectiveness of the aforementioned approach, which outperforms both vanilla Transformer and baseline TM-augmented NMT models under the low-resource scenario, plug-and-play scenario, and conventional highresource scenario.
## 2 Preliminary 2.1 Nmt
Suppose x = {x1, · · · , xn} is a source sentence and y = {y1, · · · , ym} is the corresponding target sentence. NMT builds a probabilistic model with neural networks parameterized by θ, which is used to translate x in the source language to y in the target language. Formally, NMT aims to generate output y given x according to the conditional probability defined by Eq. (1):
$$\begin{split}P(\mathbf{y}|\mathbf{x};\boldsymbol{\theta})&=\prod_{t=1}^{m}P(y_{t}|\mathbf{x},\mathbf{y}_{<t};\boldsymbol{\theta})\\ &=\prod_{t=1}^{m}\operatorname{Softmax}\bigl{(}f(H_{t})\bigr{)}[y_{t}],\end{split}\tag{1}$$
where Ht denotes the NMT decoding state.
## 2.2 Tm-Augmented Nmt
In general, TM-augmented NMT works in the following two-step paradigm. It first retrieves top-K TM bilingual sentences Z = {zk}
K
k=1, where zk = (x tm k
, y tm k
) is the k-th TM; Then it generates the translation y by using the information from the source sentence x and its retrieved TMs Z.
Retrieval Model Following previous works (Gu et al., 2018; Zhang et al., 2018; Xia et al., 2019; He et al., 2021), for x we employ Apache Lucene
(Białecki et al., 2012) to retrieve top-100 similar bilingual sentences from datastore. Then we adopt the similarity function in Eq. (2) to re-rank the retrieved bilingual sentences and maintain top-K
(e.g. K = 5) bilingual sentences as the TMs for x:
$$\mathrm{sim}({\bf x},{\bf z}_{k})=1-\frac{\mathrm{dist}({\bf x},{\bf x}_{k}^{\mathrm{tm}})}{\mathrm{max}(|{\bf x}|,|{\bf x}_{k}^{\mathrm{tm}}|)},\qquad(2)$$
where dist denotes the edit-distance.
Generation Model Given a source sentence x and a small set of relevant TMs Z = {zk}
K
k=1, the generation model defines the conditional probability P(y|x, Z; θ):
$$P(\mathbf{y}|\mathbf{x},\mathbf{Z};\boldsymbol{\theta})=\prod_{t=1}^{m}P(y_{t}|\mathbf{x},\mathbf{y}_{<t},\mathbf{Z};\boldsymbol{\theta})\tag{3}$$ $$=\prod_{t=1}^{m}\text{Softmax}\big{(}f(H_{t,\mathbf{Z}})\big{)}[y_{t}],$$ where $H_{t,\mathbf{Z}}$ denotes the decoding state of
TM-augmented NMT. There are different TMaugmented NMT models, and accordingly there are different instantiations of Ht,Z. We refer readers to Gu et al. (2018); Bulte and Tezcan (2019); Cai et al.
(2021); He et al. (2021) for its detailed definitions.
## 3 Rethinking Tm-Augmented Nmt 3.1 Probabilistic View Of Retrieval
Given the source sentence x, the top-K retrieval aforementioned actually can be considered as a probabilistic retrieval model (i.e., P(Z|x)), from which a translation memory Z = {zk}
K
k=1 is sampled. Mathematically, such a retrieval model P(Z|x) is defined as follows:
$$P(\mathbf{Z}|\mathbf{x})=\prod_{k}P(\mathbf{z}_{k}|\mathbf{x})\propto\exp(\sin(\mathbf{x},\mathbf{z}_{k})/T),\tag{4}$$
where sim is defined as in Eq. (2), and T > 0 is a temperature. Note that if T is a sufficiently small number, sampling zk from the above probabilistic retrieval model is similar to the deterministic arg max retrieval widely used in prior studies (Gu et al., 2018; Zhang et al., 2018; Xia et al., 2019).
By using the probabilistic retrieval model P(Z|x), the translation model P(y|x) is related to the variable Z theoretically through the following latent variable model:
$$P(\mathbf{y}|\mathbf{x})=\sum_{\mathbf{Z}}P(\mathbf{Z}|\mathbf{x})P(\mathbf{y}|\mathbf{x},\mathbf{Z})=\mathbb{E}_{\mathbf{Z}}P(\mathbf{y}|\mathbf{x},\mathbf{Z}).\tag{5}$$
In practice, it is impossible to perform the summation over all possible Z. Instead it can be estimated by the Monte Carlo sampling:
$$P(\mathbf{y}|\mathbf{x})\approx P(\mathbf{y}|\mathbf{x},\mathbf{Z}),{\mathrm{~with~}}\mathbf{Z}\sim P(\mathbf{Z}|\mathbf{x}).\quad(6)$$
As a result, according to Eq. (6), we can see the following statement: the TM-augmented NMT model P(y|x, Z) can be considered as an approximation of P(y|x) via Monte Carlo sampling over a latent variable model in Eq. (5). In particular, whether TM-augmented NMT
P(y|x, Z) is a good estimator depends on the **expected approximate error** defined by EZ
P(y|x, Z) − P(y|x)
2. In other words, P(y|x, Z) is a good estimator of P(y|x) if the expected approximate error is small; otherwise P(y|x, Z) is not a good estimator (Voinov and Nikulin, 2012).
Because of the Equation (5), the expected estimation error is actually derived by the variance of P(y|x, Z) with respect to Z as follows:
$$\begin{array}{l}{{\mathbb{E}_{\mathbf{Z}}\big(P(\mathbf{y}|\mathbf{x},\mathbf{Z})-P(\mathbf{y}|\mathbf{x})\big)^{2}=}}\\ {{\mathbb{E}_{\mathbf{Z}}\big(P(\mathbf{y}|\mathbf{x},\mathbf{Z})-\mathbb{E}_{\mathbf{Z}}P(\mathbf{y}|\mathbf{x},\mathbf{Z})\big)^{2}:=\mathbb{V}_{\mathbf{Z}}P(\mathbf{y}|\mathbf{x},\mathbf{Z}).}}\end{array}$$
From the above equation, it is easy to observe that VZP(y|x, Z) actually controls the approximate effect of P(y|x, Z). Therefore, VZP(y|x, Z)
would have a negative effect on TM-augmented NMT due to the fluctuations with respect to the variable Z if VZP(y|x, Z) is relatively large.
It is worth noting that the above analysis on the variance is only related to the variable Z, but has nothing to do with the variables x and y and it is agnostic to neural network architecture. Moreover, it is intractable to estimate the approximate error, i.e., the variance VZP(y|x, Z) with respect to Z because P(y|x) requires the summation over all possible Z. In the next subsection, we will employ the variance-bias decomposition principle to quantify the variance with respect to all variables given specific models.
## 3.2 Variance-Bias Decomposition
Estimation of Bias and Variance Bias-variance trade-off is a fundamental principle for understanding the generalization of predictive learning models and larger variance may induce the lower generalization ability of models (Geman et al.,
1992; Hastie et al., 2009; Yang et al., 2020).
The bias-variance decomposition is typically defined in terms of the Mean Squared Error at the example level for classification tasks and Yang et al. (2020) reorganize its definition in terms of the Cross-Entropy loss. In the machine translation task the optimization loss is the Cross-Entropy loss at the token level, we hence simply extend the variance and bias decomposition in terms of cross-entropy (Yang et al., 2020) to the token level.1 Specifically, assume P0(y|x, y<t) is the empirical distribution (i.e., P0(y|x, y<t) is one if y = yt is the ground-truth word yt and 0 otherwise), and P(y|x, y<t) is the model output distribution. Then the expected cross entropy with respect to a random variable τ = {⟨x, y⟩} (i.e., the training data) can be defined:
$$\mathbb{E}_{\tau}\big(H(P_{0},P)\big)=$$ $$-\mathbb{E}_{\tau}\big(\sum_{t}\sum_{y}P_{0}(y|\mathbf{x},\mathbf{y}_{<t})\log P(y|\mathbf{x},\mathbf{y}_{<t})\big).$$
Further, it can be decomposed as:
$$\mathbb{E}_{\tau}\big(H(P_{0},P)\big)=\underbrace{D_{\mathbf{KL}}\big(P_{0}||{\overline{{P}}}\big)}_{\mathbf{Bias}^{2}}\\ +\underbrace{\mathbb{E}_{\tau}\big(D_{\mathbf{KL}}\big(P||{\overline{{P}}}\big)\big)}_{\mathbf{Variance}},\tag{9}$$
where P is the expected probability after normalization:
$$\overline{P}(y|{\bf x},{\bf y}_{<t})\propto\exp\Big{(}\mathbb{E}_{\tau}\big{(}\log P(y|{\bf x},{\bf y}_{<t})\big{)}\Big{)}.\tag{10}$$ Generally speaking, the bias indicates the ability
of the model P to fit the data whereas the variance measures the sensitivity of the model P to fluctuations in the training data.
We follow classic methods (Hastie et al., 2009; Yang et al., 2020) to estimate the variance in Eq. (9),
1Since it is not clear how to extend the variance-bias decomposition on top of the BLEU score used as an evaluation metric, we instead consider it on top of the token-level crossentropy loss.
| Model | Single Encoder | Dual Encoder | | |
|---------|------------------|----------------|--------|--------|
| Var | Bias2 | Var | Bias2 | |
| w/o TM | 0.2088 | 1.9519 | 0.1573 | 1.9992 |
| w/ TM | 0.2263 | 1.7500 | 0.2168 | 1.8460 |
which is shown in Algorithm 1 of Appendix B. The key idea is to estimate the expectation over the random variable τ and it can be achieved by randomly splitting the given training dataset into several parts and training several models on each part for average. The above bias and variance estimation method is defined for P(y|x, y<t) but it is similar to estimate the bias and variance for the TM-augmented NMT P(y|x, y<t, Z) by retrieving a TM Z for each x via top-K retrieval as default rather than sampling as in §3.1.
Experiments We conduct experiments on JRCAcquis German⇒English task to estimate variance and bias of vanilla Transformer and TM-augmented NMT models. Similar to the preliminary experiment in §1, we retrieve top-5 TMs for TMaugmented NMT. In order to eliminate the effect of model architecture, we use both TM-augmented NMT backbones, which respectively involve a single encoder (Bulte and Tezcan, 2019) and dual encoder (Cai et al., 2021), see details in Appendix C. Following Yang et al. (2020), Bias2is estimated by subtracting the variance from the loss.
Table 2 shows the variance-bias decomposition results of different models. Firstly, within each backbone, we can find that the variance of TMaugmented NMT model is larger than that of vanilla Transformer, which verifies our hypothesis that variance of TM-augmented NMT model is worse than that of vanilla Transformer, which results in poor performance under the low-resource scenario indicated in §1. Secondly, the bias of TM-augmented NMT model within each backbone is smaller than that of vanilla Transformer, which explains the better performance under the highresource scenario indicated in §1. In addition, since the variance is highly dependent on the scale of training data and the limited training data may even amplify its negative effect on variance (Niyogi and Girosi, 1996; Vapnik, 1999; Bishop and Nasrabadi, 2006), the higher variance of TM-augmented NMT
in Table 2 gives an explanation for the contradictory phenomenon in Table 1.
In summary, although the TM-augmented NMT
model has lower bias, resulting in fitting on the training data better especially when the training data size is large, it has non-negligible higher variance, which leads to its poor performance with less training data. Therefore, the inherent flaw drives us to find ways to reduce the variance of TM-augmented NMT models, as shown in the next section. Note that our proposed methods shown below can all theoretically reduce the variance with respect to the variable Z (i.e. TM) in §3.1, and our experiments below empirically show that reducing the variance with respect to Z can actually reduce the model variance introduced here.
## 4 **Proposed Approach For Lower Variance**
In this section, we first propose two techniques to reduce the variance of the TM-augmented NMT
model. Then we propose a new TM-augmented NMT on the basis of the two techniques to address the contradictory phenomenon as presented in §1. Finally we empirically quantify the variance and bias for the proposed TM-augmented NMT. Note that the proposed TM-augmented NMT is general enough to be applied on top of any specific TMaugmented NMT models.
## 4.1 Two Techniques To Reduce Variance
Technique 1: Conditioning on One TM Sentence In conventional TM-augmented NMT, the translation model P(y|x, Z) is conditioned on Z consisting of five2retrieved TMs {z1, z2, z3, z4, z5} with zi = (x tm i
, y tm i
). Similar to the analysis in Eq.(5-6),
we can obtain the following equations:
$$\begin{array}{c}{{P({\bf y}|{\bf x},{\bf z}_{1})}}\\ {{=\sum_{{\bf z}_{2},\cdots,{\bf z}_{5}}P({\bf y}|{\bf x},{\bf z}_{1},{\bf Z}_{>1})P({\bf Z}_{>1}|{\bf x},{\bf z}_{1})}}\\ {{=\sum_{{\bf z}_{2},\cdots,{\bf z}_{5}}P({\bf y}|{\bf x},{\bf z}_{1},{\bf Z}_{>1})P({\bf Z}_{>1}|{\bf x})}}\\ {{\approx}}P({\bf y}|{\bf x},{\bf Z}),\ \mathrm{with}\ {\bf Z}_{>1}\sim P({\bf Z}_{>1}|{\bf x}),}\end{array}$$
where Z>1 = {z2, z3, z4, z5}, and the second equation holds due to the conditional independence assumption in Eq. (6). In this sense, we can see that P(y|x, Z) conditioned on five retrieved 2Since many studies (Gu et al., 2018; Xia et al., 2019; Cai et al., 2021) find that using five TMs is suitable.
TMs is actually an approximation of P(y|x, z1)
conditioned on a single retrieved TM z1. As a result, as analysed in §3.1, whether P(y|x, Z) is a good estimator depends on the variance with respect to Z>1, i.e., VZ>1P(y|x, z1, Z>1).
Based on the above analyses, to alleviate the variance, we directly estimate P(y|x) by using a single sampled sentence pair. Formally, suppose z = ⟨x tm, y tm⟩ is a **single** bilingual sentence sampled from P(z|x), we adopt the following TMaugmented NMT conditioned on a single retrieved sentence z:
**Lemma 1**: $$P(y_{t}|\mathbf{x},\mathbf{y}_{<t})=\mathbb{E}_{z}P(y_{t}|\mathbf{x},\mathbf{y}_{<t},\mathbf{z})$$ $$\approx P(y_{t}|\mathbf{x},\mathbf{y}_{<t},\mathbf{z}),$$ $$P(\mathbf{y}|\mathbf{x})=\prod_{t}P(y_{t}|\mathbf{x},\mathbf{y}_{<t})\tag{12}$$ $$\approx\prod_{t}P(y_{t}|\mathbf{x},\mathbf{y}_{<t},\mathbf{z}),$$
where P(yt|x, y<t, z) can be any model architecture of TM-augmented NMT models P(yt|x, y<t, Z) by replacing top-K TMs Z
with a single top-1 TM z. In addition, the training of the above model P(yt|x, y<t, z) is the same as the training of the conventional model P(yt|x, y<t, Z).
Technique 2: Enlarging the Sample Size In Eq. (12), EzP(yt|x, y<t, z) is approximated by one sample z, which still induces some potential estimation errors due to the variance VzP(yt|x, y<t, z). In fact, the estimation errors can be further reduced by the estimation using multiple samples as follows.
Proposition 1. If z1, · · · , zK *are independent and* identically distributed random variables sampled from the P(z|x), then the following inequality holds (The proof is presented in Appendix *A.):*
$$\begin{array}{c}{{\mathbb{V}_{\mathbf{z}}P(y_{t}|\mathbf{x},\mathbf{y}_{<t},\mathbf{z})\geq}}\\ {{\mathbb{V}_{\mathbf{z}_{1},\cdots,\mathbf{z}_{K}}\left({\frac{1}{K}}\sum_{k}P(y_{t}|\mathbf{x},\mathbf{y}_{<t},\mathbf{z}_{k})\right).}}\end{array}$$
$$(13)$$
According to the above Proposition, we propose the method to approximate P(yt|x, y<t) through the average ensemble of all P(yt|x, y<t, zk):
$$P(y_{t}|\mathbf{x},\mathbf{y}_{<t})\approx\frac{1}{K}\sum_{k=1}^{K}P(y_{t}|\mathbf{x},\mathbf{y}_{<t},\mathbf{z}_{k}),\tag{14}$$ $$P(\mathbf{y}|\mathbf{x})\approx\prod_{t=1}^{m}\sum_{k=1}^{K}P(y_{t}|\mathbf{x},\mathbf{y}_{<t},\mathbf{z}_{k}).$$
Note that we do not particularly retrain the averaged ensemble model by directly taking the parameters from the model P(yt|x, y<t, z) trained in Technique 1 above in this paper.
## 4.2 Tm-Augmented Nmt Via Weighted Ensemble
In our experiments as shown in §4.3 later, we found that the both techniques deliver better variance but sacrifice its ability of fitting data (i.e., bias) compared to the standard TM-augmented NMT
conditioning on all retrieved Z = {zk}
K
k=1. In order to further improve the bias for a better ability to fit data, we propose the weighted ensemble to establish a stronger relationship between the source sentence and each TM zk by endowing a more powerful representation ability via the weighting coefficient w(x, y<t, zk) in Eq. (15). Formally, the weighted ensemble model is defined as:
$$P(\mathbf{y}|\mathbf{x})\approx\prod_{t=1}^{m}\sum_{k=1}^{K}w(\mathbf{x},\mathbf{y}_{<t},\mathbf{z}_{k})P(y_{t}|\mathbf{x},\mathbf{y}_{<t},\mathbf{z}_{k}),\tag{15}$$
where P(yt|x, y<t, zk) can be any TM-augmented model architecture as claimed in §4.1, and w(x, y<t, zk) is a weighting network via the following equation:
$$w({\bf x},{\bf y}_{<t},{\bf z}_{k})={\rm Softmax}\big{(}f(H_{t},H_{t,k})\big{)}[k],\tag{16}$$
where f consists of two linear layers with residual connection and layer normalization (Vaswani et al., 2017), Htis the decoding state of a vanilla translation model P(yt|x, y<t), Ht,k is the decoding state for any TM-augmented model P(yt|x, y<t, zk) similar to Ht,Z in Eq. (3). For example, if we implement the weighted ensemble model on top of the model architecture of Cai et al.
(2021), then the source sentence and the k-th TM
are encoded by two separate encoders respectively, resulting in a hidden state Ht for current time step and a contextualized TM representation Ht,k for the k-th TM in current time step.
Similar to the average ensemble method, we do not train the whole network from scratch. Instead, we start from the trained model parameters of P(yt|x, y<t, zk) in §4.1 and then we just fine-tune the whole parameters including those from both P(yt|x, y<t, zk) and w(x, y<t, zk) for only about 2,000 updates on 90% part of valid data while the other 10% part of valid data are used to select the checkpoint for testing.
| Model | w/o TM | TMbase | TMsingle | TMaverage | TMweight |
|---------|----------|--------|--------|--------|--------|
| Var | 0.1573 | 0.2168 | 0.1944 | 0.1918 | 0.1814 |
| Bias2 | 1.9992 | 1.8460 | 1.9369 | 1.9395 | 1.9137 |
Remark The above average ensemble model involves K retrieved sentence pairs similar to the standard translation memory augmented models (Gu et al., 2018; Zhang et al., 2018; Cai et al., 2021), but the notable difference is that our model P(yt|x, y<t, zk) conditions on a single zk whereas those standard TM-augmented NMT
models (see Eq. (3)) directly condition on K
retrieved sentence pairs Z = {zk}
K
k=1.
## 4.3 Empirical Analysis On Bias And Variance
To verify the effectiveness of the proposed methods empirically, we conduct the variancebias decomposition experiments similar to §3.2.
Specifically, we estimate the variance and bias of both three proposed models and two baselines
(please refer to §5.1 for detailed settings).
Table 3 shows the overall results of estimated variance and bias. By comparing results of different models, we can get the following two observations: Firstly, through the results of variance, we can find that the three proposed methods all achieve lower variance compared with the default TM-augmented NMT model. It is notable that the weighted ensemble method achieves the best variance within all the TM-augmented NMT models. Secondly, through the results of bias, we can find that all TM-augmented NMT models achieve lower bias compared to vanilla Transformer (Vaswani et al., 2017) without TM. Although our methods have a slightly higher bias, the proposed weighted ensemble method can achieve comparable bias with regard to the default TM-augmented NMT model.
## 5 Experiments
In this section, we validate the effectiveness of the proposed methods in three scenarios: (1) the lowresource scenario where training pairs are scarce,
(2) the plug-and-play scenario where additional bilingual pairs are added as the data store and the model is not re-trained any more as the data store is enlarged, and (3) the conventional high-resource scenario where the entire training data are used for training and retrieval. We use BLEU score
(Papineni et al., 2002) as the automatic metric for the translation quality evaluation.
## 5.1 Settings
Data We use the JRC-Acquis corpus (Steinberger et al., 2006) and select four translation directions including Spanish⇒English (Es⇒En), En⇒Es, German⇒English (De⇒En), and En⇒De, for evaluation. Besides, we also use the re-split version of the Multi-Domain data set in Aharoni and Goldberg (2020) originally collected by Koehn and Knowles (2017) for our experiments, which includes five domains: Medical, Law, IT,
Koran and Subtitle. Detailed data statistics and descriptions are shown in Appendix E.
Models To study the effect of the proposed methods in §4, we implement a series of model variants by using the fairseq toolkit (Ott et al.,
2019). \#1 vanilla NMT without TM (Vaswani et al.,
2017). We remove the model components related to TM, and only employ the encoder-decoder architecture for NMT. \#2 Default TM-augmented NMT with top-5 TMs. We use top-5 TMs to train and test the model. Note that this is also a baseline model in Cai et al. (2021). \#3 TM-augmented NMT
with single TM. To study the effect of technique 1 in §4.1 which conditions only one TM, we use top-1 TM to test the model. In order to avoid overfitting, during each epoch we use training pairs with top-5 TMs and empty TM to train the model six times, which is similar to He et al. (2021). \#4 TM-augmented NMT with the average ensemble.
To study the effect of technique 2 in §4.1 which enlarges the sample size, we use the trained model in \#3 directly and average ensemble top-5 TMs. \#5 TM-augmented NMT with the weighted ensemble.
We fine-tune the trained model in \#3 with weighted ensemble top-5 TMs. Detailed model descriptions and experimental settings are in Appendix C and D respectively.
## 5.2 Low-Resource Scenario
One of the major advantages of our proposed weighted ensemble is that it has a lower variance, which means that it is less sensitive to fluctuations in the training data. This motivates us to conduct experiments in low-resource scenarios, where we use only a part of the training data to train models.
Specifically, we create low-resource scenario by
Model Medical Law IT Koran Subtitle Average
#1 w/o TM (Vaswani et al., 2017) 47.62 50.85 34.40 14.45 20.22 33.51 #2 TM-base (Cai et al., 2021) 43.53 49.36 32.76 14.43 20.03 32.02 #3 TM-single (ours §4.1 Technique 1) 47.04 50.84 35.33 14.80 21.22 33.85
#4 TM-average (ours §4.1 Technique 2) 47.13 50.82 35.50 14.87 21.30 33.92
#5 TM-weight (ours §4.2) **47.97 52.28 35.84 16.59 22.58 35.05**
Table 4: Experimental results (test set BLEU scores) on Multi-Domain dataset under low-resource scenario.
randomly partitioning each training set in JRCAcquis corpus and Multi-Domain dataset into four subsets of equal size. Then we only use the training pairs in the first subset to train each model.
| Model | Es⇒En | En⇒Es | De⇒En | En⇒De |
|------------|---------|---------|---------|---------|
| w/o TM | 58.44 | 56.11 | 54.54 | 49.97 |
| TM-base | 57.31 | 55.06 | 53.92 | 48.67 |
| TM-single | 57.56 | 55.24 | 54.03 | 48.82 |
| TM-average | 57.08 | 54.91 | 53.77 | 48.41 |
| TM-weight | 59.14 | 56.53 | 55.36 | 50.51 |
Results The test results of the above models on Multi-Domain dataset and JRC-Acquis corpus are presented in Table 4 and Table 5 respectively.
We can get the following observations: (1) Our proposed weighted ensemble method delivers the best performance on test sets across all translation tasks, outperforming the vanilla Transformer by a large margin. (2) The performance of default TM-augmented NMT with top-5 TMs is degraded compared with vanilla Transformer, while single TM method and average ensemble method make up for the degradation issue to some extent.
| Model | Es⇒En | En⇒Es | De⇒En | En⇒De |
|------------|---------|---------|---------|---------|
| w/o TM | 63.26 | 61.63 | 60.83 | 54.95 |
| TM-base | 66.42 | 62.81 | 63.76 | 57.79 |
| TM-single | 65.69 | 62.59 | 63.34 | 57.40 |
| TM-average | 65.29 | 62.72 | 63.01 | 57.42 |
| TM-weight | 66.89 | 63.61 | 64.29 | 58.67 |
## 5.3 Plug-And-Play Scenario
"Plug-and-play" is one of the most remarkable properties of TM (Cai et al., 2021), that is to say, the corpus used for TM retrieval is different during training and testing. This is useful especially in online products because we can adapt a trained model to a new corpus without retraining by adding or using a new TM. Specifically, we directly use models trained in §5.2, and add the second, the third, and the last subset to the TM data store gradually. At each datastore size, we retrieve for the test set again and test the performance of models with the newly retrieved TMs.
Results Figure 1 and Figure 2 show the main results on the test sets of JRC-Acquis corpus and Table 6: Test results on four translation tasks of JRCAcquis corpus under high-resource scenario.
Multi-Domain dataset respectively. The general patterns are consistent across all experiments: The larger the TM becomes, the better translation performance the model achieves. When using all training data (4/4), the translation quality is boosted significantly. At the same time, our proposed weighted ensemble method achieves the best performance all the time.
## 5.4 High-Resource Scenario
Following prior works (He et al., 2021; Cai et al.,
2021) in TM-augmented NMT, we also conduct experiments in the high-resource scenario where all the training pairs of JRC-Acquis corpus are used to train models in order to prove the practicability and universality of our proposed methods.
Results The results are presented in Table 6.
We can see that our proposed weighted ensemble method still has the best performance even under the high-resource scenario, where default TMaugmented NMT using top-5 TMs is also relatively good because in this condition the bias plays an important role in fitting the whole training data to achieve good performance.
## 6 Related Work
This work mainly contributes to the research line of Translation Memory (TM) augmented Neural Machine Translation (NMT). Early works mainly concentrated on model architecture designing
(Feng et al., 2017; Gu et al., 2018; Cao and Xiong, 2018; Cao et al., 2020; Zhang et al., 2018; Xia et al., 2019; Bulte and Tezcan, 2019; Xu
![7_image_0.png](7_image_0.png)
![7_image_1.png](7_image_1.png)
et al., 2020; He et al., 2021; Wang et al., 2022d; Hoang et al., 2022a; Cai et al., 2022). Recently, Cai et al. (2021) used a dense retrieval method with a dual encoder to compute the similarity between the source sentence and TMs, which can use monolingual translation memory instead of bilingual ones. Hoang et al. (2022b) proposed to shuffle the retrieved suggestions to improve the robustness of TM-augmented NMT models. Cheng et al. (2022) proposed to contrastively retrieve translation memories that are holistically similar to the source sentence while individually contrastive to each other providing maximal information gains.
The distinctions between our work and prior works are obvious: (1) We rethink TM-augmented NMT from a probabilistic view of retrieval and the variance-bias decomposition principle; (2)
We consider the performance of TM-augmented NMT under low-resource scenario, plug-and-play scenario, and conventional high-resource scenario, instead of only high-resource scenario. (3) Our methods are agnostic to specific model and retrieval method and thus can be applied on top of any advanced architecture.
Another research line highly related to our work is kNN-MT (Khandelwal et al., 2021; Zheng et al.,
2021a,b; Meng et al., 2022; Wang et al., 2021, 2022a; Zhu et al., 2022; Wang et al., 2022c; Du et al., 2022, 2023; Wang et al., 2022b). However, kNN-MT retrieves similar key-value pairs on the token level, resulting in slow retrieval speed (Meng et al., 2022; Dai et al., 2023) and large storage space
(Wang et al., 2022a; Zhu et al., 2022), whereas TM-augmented NMT retrieves similar pairs on the sentence level, which has more efficient retrieval speed and takes up less storage space.
## 7 Conclusion
Existing work surprisingly finds that TMaugmented NMT fails to advance the NMT model under low-resource scenario, but it is unclear the reason why such a contradictory phenomenon happens. This paper rethinks TM-augmented NMT from latent variable probabilistic model view and variance-bias decomposition view respectively, giving the reason for the failure under low-resource scenario. Estimation for variance and bias indicates that TM-augmented NMT is better at fitting data
(bias) yet worse at sensitivity to fluctuations in the training data (variance). To better trade off the bias and variance, this paper proposes a simple yet effective weighted ensemble method for TM-augmented NMT. Experiments under three scenarios demonstrate that the proposed method outperforms both vanilla Transformer and baseline TM-augmented models consistently. Future work could aim to find factors that influence the bias of TM-augmented NMT models, such as the quality of retrieved TMs.
## Limitations
Compared with the standard NMT, TM-augmented NMT models incur extra retrieval time during both training and inference. Besides, since both TMs and the source sentence need to be encoded, the speed of TM-augmented NMT is slower than that of standard NMT. These characteristics indeed slightly induce some overhead. Specifically, experiments show that the latency time cost for all TM-augmented NMT models is higher than for vanilla Transformers. And the decoding time of our proposed weighted ensemble method is twice as much as that of the standard NMT. This issue is one limitation of TM-augmented NMT, which can be further studied and addressed.
## Ethics Statement
This paper will not pose any ethical problems. First, machine translation is a standard task in natural language processing. Second, the datasets used in this paper have been already used in previous papers.
## Acknowledgements
Hongkun and Rui are with MT-Lab, Department of Computer Science and Engineering, School of Electronic Information and Electrical Engineering, and also with the MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai 200204, China.
Rui is supported by the General Program of National Natural Science Foundation of China (6217020129), Shanghai Pujiang Program
(21PJ1406800), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102),
Beijing Academy of Artificial Intelligence (BAAI)
(No. 4), CCF-Baidu Open Fund (F2022018), and the Alibaba-AIR Program (22088682).
## References
Roee Aharoni and Yoav Goldberg. 2020. Unsupervised domain clusters in pretrained language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7747–7763, Online. Association for Computational Linguistics.
Andrzej Białecki, Robert Muir, Grant Ingersoll, and Lucid Imagination. 2012. Apache lucene 4. In *SIGIR*
2012 workshop on open source information retrieval, page 17.
Christopher M Bishop and Nasser M Nasrabadi. 2006.
Pattern recognition and machine learning, volume 4.
Springer.
Bram Bulte and Arda Tezcan. 2019. Neural fuzzy repair: Integrating fuzzy matches into neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1800–1809, Florence, Italy.
Association for Computational Linguistics.
Deng Cai, Yan Wang, Huayang Li, Wai Lam, and Lemao Liu. 2021. Neural machine translation with monolingual translation memory. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 7307–7318, Online.
Association for Computational Linguistics.
Deng Cai, Yan Wang, Lemao Liu, and Shuming Shi.
2022. Recent advances in retrieval-augmented text generation. In *Proceedings of the 45th* International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR
'22, page 3417–3419, New York, NY, USA.
Association for Computing Machinery.
Qian Cao, Shaohui Kuang, and Deyi Xiong. 2020.
Learning to reuse translations: Guiding neural machine translation with examples. In *ECAI*
2020 - 24th European Conference on Artificial Intelligence, 29 August-8 September 2020, Santiago de Compostela, Spain, August 29 - September 8, 2020 - Including 10th Conference on Prestigious Applications of Artificial Intelligence (PAIS 2020),
volume 325 of Frontiers in Artificial Intelligence and Applications, pages 1982–1989. IOS Press.
Qian Cao and Deyi Xiong. 2018. Encoding gated translation memory into neural machine translation.
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3042–3047, Brussels, Belgium. Association for Computational Linguistics.
Xin Cheng, Shen Gao, Lemao Liu, Dongyan Zhao, and Rui Yan. 2022. Neural machine translation with contrastive translation memories. In *Proceedings* of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 3591–3601, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Yuhan Dai, Zhirui Zhang, Qiuzhi Liu, Qu Cui, Weihua Li, Yichao Du, and Tong Xu. 2023. Simple and scalable nearest neighbor machine translation. arXiv preprint arXiv:2302.12188.
Yichao Du, Weizhi Wang, Zhang Zhirui, Boxing Chen, Tong Xu, Jun Xie, and Enhong Chen.
2022. Non-parametric domain adaptation for end-to-end speech translation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 306–320, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Yichao Du, Zhirui Zhang, Bingzhe Wu, Lemao Liu, Tong Xu, and Enhong Chen. 2023. Federated nearest neighbor machine translation. arXiv preprint arXiv:2302.12211.
Yang Feng, Shiyue Zhang, Andi Zhang, Dong Wang, and Andrew Abel. 2017. Memory-augmented neural machine translation. In *Proceedings* of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1390–
1399, Copenhagen, Denmark. Association for Computational Linguistics.
Ignacio Garcia. 2009. Beyond translation memory:
Computers and the professional translator. The Journal of Specialised Translation, 12(12):199–214.
Stuart Geman, Elie Bienenstock, and René Doursat.
1992. Neural networks and the bias/variance dilemma. *Neural computation*, 4(1):1–58.
Jiatao Gu, Yong Wang, Kyunghyun Cho, and Victor O. K. Li. 2018. Search engine guided neural machine translation. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence
(EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5133–5140. AAAI Press.
Trevor Hastie, Robert Tibshirani, Jerome H Friedman, and Jerome H Friedman. 2009. *The elements of* statistical learning: data mining, inference, and prediction, volume 2. Springer.
Qiuxiang He, Guoping Huang, Qu Cui, Li Li, and Lemao Liu. 2021. Fast and accurate neural machine translation with translation memory. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 3170–3180, Online.
Association for Computational Linguistics.
Cuong Hoang, Devendra Sachan, Prashant Mathur, Brian Thompson, and Marcello Federico. 2022a.
Improving retrieval augmented neural machine translation by controlling source and fuzzy-match interactions. *arXiv preprint arXiv:2210.05047*.
Cuong Hoang, Devendra Sachan, Prashant Mathur, Brian Thompson, and Marcello Federico. 2022b.
Improving robustness of retrieval augmented translation via shuffling of suggestions. arXiv preprint arXiv:2210.05059.
Guoping Huang, Lemao Liu, Xing Wang, Longyue Wang, Huayang Li, Zhaopeng Tu, Chengyan Huang, and Shuming Shi. 2021. Transmart: A practical interactive machine translation system. arXiv preprint arXiv:2105.13072.
Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2021. Nearest neighbor machine translation. In 9th International Conference on Learning Representations, ICLR
2021, Virtual Event, Austria, May 3-7, 2021.
OpenReview.net.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177–180, Prague, Czech Republic. Association for Computational Linguistics.
Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, pages 28–39, Vancouver.
Association for Computational Linguistics.
Philipp Koehn and Jean Senellart. 2010. Convergence of translation memory and statistical machine translation. In *Proceedings of the Second Joint* EM+/CNGL Workshop: Bringing MT to the User:
Research on Integrating MT in the Translation Industry, pages 21–32, Denver, Colorado, USA.
Association for Machine Translation in the Americas.
Yang Liu, Kun Wang, Chengqing Zong, and Keh-Yih Su. 2019. A unified framework and models for integrating translation memory into phrase-based statistical machine translation. *Computer Speech*
& Language, 54:176–206.
Yuxian Meng, Xiaoya Li, Xiayu Zheng, Fei Wu, Xiaofei Sun, Tianwei Zhang, and Jiwei Li. 2022. Fast nearest neighbor machine translation. In *Findings of the* Association for Computational Linguistics: ACL
2022, pages 555–565, Dublin, Ireland. Association for Computational Linguistics.
Partha Niyogi and Federico Girosi. 1996. On the relationship between generalization error, hypothesis complexity, and sample complexity for radial basis functions. *Neural Computation*, 8(4):819–842.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48–53, Minneapolis, Minnesota. Association for Computational Linguistics.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings* of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany. Association for Computational Linguistics.
Ralf Steinberger, Bruno Pouliquen, Anna Widiger, Camelia Ignat, Tomaž Erjavec, Dan Tufi¸s, and Dániel Varga. 2006. The JRC-Acquis: A
multilingual aligned parallel corpus with 20+
languages. In Proceedings of the Fifth International Conference on Language Resources and Evaluation
(LREC'06), Genoa, Italy. European Language Resources Association (ELRA).
Masao Utiyama, Graham Neubig, Takashi Onishi, and Eiichiro Sumita. 2011. Searching translation memories for paraphrases. In *Proceedings of* Machine Translation Summit XIII: Papers, Xiamen, China.
Vladimir N Vapnik. 1999. An overview of statistical learning theory. *IEEE transactions on neural* networks, 10(5):988–999.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information* Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008.
Vasili˘ı Grigorevich Voinov and Mikhail Stepanovich Nikulin. 2012. *Unbiased estimators and their* applications: volume 1: univariate case, volume 263. Springer Science & Business Media.
Dexin Wang, Kai Fan, Boxing Chen, and Deyi Xiong.
2022a. Efficient cluster-based k-nearest-neighbor machine translation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2175–
2187, Dublin, Ireland. Association for Computational Linguistics.
Dongqi Wang, Haoran Wei, Zhirui Zhang, Shujian Huang, Jun Xie, and Jiajun Chen. 2022b. Nonparametric online learning from human feedback for neural machine translation. Proceedings of the AAAI
Conference on Artificial Intelligence, 36(10):11431–
11439.
Kun Wang, Chengqing Zong, and Keh-Yih Su. 2013.
Integrating translation memory into phrase-based machine translation during decoding. In *Proceedings* of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11–21, Sofia, Bulgaria. Association for Computational Linguistics.
Qiang Wang, Rongxiang Weng, and Ming Chen.
2022c. Learning decoupled retrieval representation for nearest neighbour neural machine translation. In Proceedings of the 29th International Conference on Computational Linguistics, pages 5142–5147, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Shuhe Wang, Jiwei Li, Yuxian Meng, Rongbin Ouyang, Guoyin Wang, Xiaoya Li, Tianwei Zhang, and Shi Zong. 2021. Faster nearest neighbor machine translation. *arXiv preprint arXiv:2112.08152*.
Shuohang Wang, Yichong Xu, Yuwei Fang, Yang Liu, Siqi Sun, Ruochen Xu, Chenguang Zhu, and Michael Zeng. 2022d. Training data is more valuable than you think: A simple and effective method by retrieving from training data. In *Proceedings of the 60th* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3170–
3179, Dublin, Ireland. Association for Computational Linguistics.
Mengzhou Xia, Guoping Huang, Lemao Liu, and Shuming Shi. 2019. Graph based translation memory for neural machine translation. *Proceedings* of the AAAI Conference on Artificial Intelligence, 33(01):7297–7304.
Jitao Xu, Josep Crego, and Jean Senellart. 2020.
Boosting neural machine translation with similar translations. In *Proceedings of the 58th Annual* Meeting of the Association for Computational Linguistics, pages 1580–1590, Online. Association for Computational Linguistics.
Zitong Yang, Yaodong Yu, Chong You, Jacob Steinhardt, and Yi Ma. 2020. Rethinking bias-variance trade-off for generalization of neural networks. In *Proceedings* of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 10767–10777. PMLR.
Jingyi Zhang, Masao Utiyama, Eiichro Sumita, Graham Neubig, and Satoshi Nakamura. 2018. Guiding neural machine translation with retrieved translation pieces. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1325–
1335, New Orleans, Louisiana. Association for Computational Linguistics.
Xin Zheng, Zhirui Zhang, Junliang Guo, Shujian Huang, Boxing Chen, Weihua Luo, and Jiajun Chen. 2021a.
Adaptive nearest neighbor machine translation.
In *Proceedings of the 59th Annual Meeting of* the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 368–374, Online. Association for Computational Linguistics.
Xin Zheng, Zhirui Zhang, Shujian Huang, Boxing Chen, Jun Xie, Weihua Luo, and Jiajun Chen. 2021b.
Non-parametric unsupervised domain adaptation for neural machine translation. In *Findings of the* Association for Computational Linguistics: EMNLP
2021, pages 4234–4241, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Wenhao Zhu, Shujian Huang, Yunzhe Lv, Xin Zheng, and Jiajun Chen. 2022. What knowledge is needed?
towards explainable memory for knn-mt domain adaptation. *arXiv preprint arXiv:2211.04052*.
## A Proof Of Proposition 1 **In §4**
Throughout this section, we prove the Proposition 1 in §4. For simplicity, we use f(z) and f(zk)
to denote P(yt|x, y<t, z) and P(yt|x, y<t, zk) in Proposition 1, respectively. Note that each zk are i.i.d sampled from P(z|x).
Proof. Firstly, for the relationship between Ezf(z)
and Ez1,··· ,zK
1 K
Pk f(zk)
we can get the following equations:
$$\mathbb{E}_{\mathbf{z}_1,\cdots,\mathbf{z}_K}\left(\frac{1}{K}\sum_k f(\mathbf{z}_k)\right)$$ $$=\frac{1}{K}\mathbb{E}_{\mathbf{z}_1,\cdots,\mathbf{z}_K}\Big(\sum_k f(\mathbf{z}_k)\Big)$$ $$=\frac{1}{K}\sum_k\Big(\mathbb{E}_{\mathbf{z}_k}f(\mathbf{z}_k)\Big)$$ $$=\frac{1}{K}\sum_k\Big(\mathbb{E}_{\mathbf{z}}f(\mathbf{z})\Big)$$ $$=\frac{1}{K}*K*\mathbb{E}_{\mathbf{z}}f(\mathbf{z})$$ $$=\mathbb{E}_{\mathbf{z}}f(\mathbf{z}).$$
Vzf(z) and Vz1,z2 f(z1) + f(z2)
:
Vz1,z2 f(z1) + f(z2) = Ez1,z2 f(z1) + f(z2) − Ez1,z2 f(z1) + f(z2) 2 = Ez1,z2 f(z1) + f(z2) 2 − Ez1,z2 f(z1) + f(z2) 2 = Ez1 f(z1) 2 + Ez2 f(z2) 2 + 2Ez1,z2 f(z1)f(z2) − Ez1 f(z1) 2 − Ez2 f(z2) 2 − 2 Ez1 f(z1) Ez2 f(z2) = Ez1 f(z1) 2 + Ez2 f(z2) 2 + 2Ez1 f(z1) Ez2 f(z2) − Ez1 f(z1) 2 − Ez2 f(z2) 2 − 2 Ez1 f(z1) Ez2 f(z2) = Ez1 f(z1) 2 − Ez1 f(z1) 2 + Ez2 f(z2) 2 − Ez2 f(z2) 2 = Vz1 f(z1) + Vz2 f(z2) = 2Vzf(z).
Then, we can get the relationship between Vzf(z) and Vz1,z2 1 2 f(z1)+f(z2)
as follows:
$$\begin{array}{r l}{\mathbb{V}_{\mathbf{z}_{1},\mathbf{z}_{2}}\left({\frac{1}{2}}{\big(}f(\mathbf{z}_{1})+f(\mathbf{z}_{2}){\big)}\right)}\\ {={\frac{1}{2^{2}}}\mathbb{V}_{\mathbf{z}_{1},\mathbf{z}_{2}}\left(f(\mathbf{z}_{1})+f(\mathbf{z}_{2})\right)}\\ {={\frac{1}{4}}*2*\mathbb{V}_{\mathbf{z}}f(\mathbf{z})}\\ {={\frac{1}{2}}\mathbb{V}_{\mathbf{z}}f(\mathbf{z})}\\ {<\mathbb{V}_{\mathbf{z}}f(\mathbf{z}).}\end{array}$$
Similarly, for the relationship between Vzf(z)
and Vz1,··· ,zK
$$_{K}\left({\frac{1}{K}}\sum_{k}f(\mathbf{z}_{k})\right)^{-1}\mathbf{w}$$
we can get the Secondly, we start from the relationship between following equations:
Vz1,··· ,zK 1 K X k f(zk) =1 K2 Vz1,··· ,zK X k f(zk) =1 K2 X Ezk f(zk) 2 − Ezk f(zk) 2 k =1 K2 X k Vzk f(zk) =1 K2 X k Vzf(z) =1 K2∗ K ∗ Vzf(z) =1K Vzf(z) ≤ Vzf(z),
where the equality holds if and only if K = 1.
## B Variance-Bias Estimation Method
In §3.2, we mention that we follow classic methods
(Hastie et al., 2009; Yang et al., 2020) to estimate the variance and bias in Eq. (9). Here we detail the method to estimate the variance, which is shown in Algorithm 1. Then Bias2is estimated by subtracting the variance from the loss (Yang et al., 2020). Specifically, in this paper we set N
to be 1 and k to be 4 in Algorithm 1. Besides, we compute the variance for each test point (x, y<t) in test set, and average them to get the final variance for each model.
## C Details About Architecture
In §3.2, we use single encoder backbone (Bulte and Tezcan, 2019) and dual encoder backbone
(Cai et al., 2021) respectively. Here we provide a detailed description of these two architectures.
Single Encoder Architecture For this architecture, we follow Bulte and Tezcan (2019).
Specifically, the model architecture is the same as vanilla Transformer, which consists of an encoder and a decoder. The encoder encodes the concatenation of the source sentence and TM,
relying on the encoder's self-attention to compare the source sentence to the TM and determine which TM phrases are relevant for the translation(Hoang et al., 2022a). Therefore, the only difference is that we need to change the input format to the concatenation of the source sentence and TM.
Algorithm 1 Estimating Variance of NMT Models
4: $\color{blue}{\text{5:}}$ 6: .
Input: Test point (x, y<t), Training data τ .
1: for i = 1 to k do 2: Split the τ into τ
(i)
1, *· · ·* , τ
(i)
N
.
3: for j = 1 to N do 4: Train the model using τ
(i)
j; 5: Evaluate the model at (x, y<t); call the result P
(i)
j(y|x, y<t);
6: Normalize the top 100 probability in P
(i)
j(y|x, y<t) and setting others to be 0
(to reduce computation complexity);
7: **end for** 8: **end for**
9: Compute Pˆ(y|x, y<t) =
exp 1 N·k Pi,j log P
(i)
j(y|x, y<t)
(Pˆ(y|x, y<t) estimates P(y|x, y<t)).
10: Normalize Pˆ(y|x, y<t) to get a probability distribution.
11: Compute the variance var =
1 N·k Pij DKLP
(i)
j(y|x, y<t)||Pˆ(y|x, y<t)
Output: var
Dual Encoder Architecture This architecture is similar to that used by Cai et al. (2021), which achieves relatively good performance and thus can serve as a strong baseline (Cai et al., 2021).
On the encoder side, the source sentence and TM are encoded by two separate encoders respectively, resulting in a set of contextualized token representations {zkj}
Lk j=1, where Lk is the length of the k-th TM.
On the decoder side, after using y<t and x to get the hidden state Ht at current time step t, it can produce the original NMT probability Pnmt(yt|x, y<t) and the TM probability Ptm(yt|x, y<t, Z) concurrently, and finally get the probability P(yt|x, y<t, Z).
Specifically, the original NMT probability Pnmt(yt|x, y<t) is defined by as follows:
$$P_{n m t}(y_{t}|{\bf x},{\bf y}_{<t})=\mathrm{Softmax}(f(H_{t}))[y_{t}],\tag{17}$$
where f is a linear layer that maps hidden state Htto a probability distribution. And for the TM
probability Ptm(yt|x, y<t, Z), the decoder firstly computes a cross attention score α = {αkj} over all tokens of the k-th TM sentence where αkj is the attention score of Htto the j-th token in the k-th
| Model | # Param. | GPU Hours | Hyperparam. | |
|--------------------------|------------|-------------|---------------|-----|
| learning rate | dropout | | | |
| Transformer | 83M | 16h | 7e-4 | 0.1 |
| TM-augmented Transformer | 98M | 19h | 7e-4 | 0.1 |
Table 7: The number of parameters, training budget (in GPU hours), and hyperparameters of each model.
| Usage | Package | License |
|---------------------------------------------|-------------------------------------------------------------------------|-----------|
| Preprocessing | mosesdecoder (Koehn et al., 2007) subword-nmt (Sennrich et al., 2016) 3 | MIT |
| Model training | fairseq (Ott et al., 2019) | |
| Evaluation | BLEU (Papineni et al., 2002) | |
| 1 https://github.com/moses-smt/mosesdecoder | | |
1 https://github.com/moses-smt/mosesdecoder 2 https://github.com/rsennrich/subword-nmt 3 https://github.com/facebookresearch/fairseq Table 8: Packages we used for preprocessing, model training and evaluation.
TM. The computation is shown as follows:
$$\alpha_{k j}=\frac{\exp(H_{t}^{T}W_{t m}z_{k j})}{\sum_{k=1}^{K}\sum_{j=1}^{L_{k}}\exp(H_{t}^{T}W_{t m}z_{k j})},$$
$$(18)$$
where Wtm are parameters. Then we can get the contextualized TM representation Ht,Z for the TM
Z in current time step, as shown in Eq. (19):
$$H_{t,\mathbf{Z}}=W_{h}\sum_{k=1}^{L}\sum_{j=1}^{L_{k}}\alpha_{k j}z_{k j},$$
$$(19)$$
αkjzkj , (19)
where Wh are parameters. Then, the TM
probability Ptm(yt|x, y<t, Z) can be computed as follows:
$$P_{tm}(y_{t}|\mathbf{x},\mathbf{y}_{<t},\mathbf{Z})=\sum_{k=1}^{K}\sum_{j=1}^{L_{k}}\alpha_{kj}\mathbb{I}_{z_{kj}=y_{t}},\tag{20}$$
where I is the indicator function.
Finally, the decoder computes the next-token probability as follows:
$$P(y_{t}|\mathbf{x},\mathbf{y}_{<t},\mathbf{Z})=(1-\lambda_{t})P_{n m t}(y_{t}|\mathbf{x},\mathbf{y}_{<t})+$$ $$\lambda_{t}P_{t m}(y_{t}|\mathbf{x},\mathbf{y}_{<t},\mathbf{Z})\tag{21}$$
where λtis a gating variable computed by another linear layer whose input is Ht,Z.
## D Detailed Settings For All Experiments
Here we provide a detailed description of our configuration settings in the paper for the preliminary experiment (§1), the variance-bias decomposition experiment (§3.2 and §4) and the main experiment (§5) respectively.
## Settings For Preliminary Experiment In §1 In
this part, we implement vanilla Transformer base model (Vaswani et al., 2017) for the standard NMT.
The learning rate schedule, dropout, and label smoothing are the same as Vaswani et al. (2017).
For the TM-augmented NMT, we follow Cai et al.
(2021) and implement the dual encoder architecture aforementioned. We conduct experiments on the JRC-Acquis German⇒English task. For highresource scenario we use the full training data and train models with up to 100k steps. For lowresource scenario we randomly select a quarter of training data and train models with up to 30k steps.
Settings for Variance-Bias Decomposition Experiment in §3.2 **and §4** For the experiment in §3.2, we use both single encoder and dual encoder architecture introduced above in order to eliminate the effect of model architecture. For dual encoder, since the amount of parameters in vanilla Transformer is originally smaller than that of TM-augmented NMT model, which makes the two models non-comparable (Yang et al., 2020), we use empty TMs to simulate vanilla Transformer and make the two models comparable. We set N = 1 and k = 4 in Algorithm 1 to estimate the variance, so we train models with up to 30k steps. The other settings are the same as the preliminary experiment in §1. For single encoder, the configuration is the same as vanilla Transformer base model (Vaswani et al., 2017). And we also train models with up to 30k steps. For the experiment in §4, we use dual encoder architecture as in Table 3.
Settings for Main Experiment in §5 We build our model using Transformer blocks with the same configuration as Transformer Base (Vaswani et al., 2017) (8 attention heads, 512 dimensional hidden state, and 2048 dimensional feed-forward state). The number of Transformer blocks is 4 for the memory encoder, 6 for the source sentence encoder, and 6 for the decoder side. We retrieve the top 5 TM sentences. The FAISS index code is "IVF1024_HNSW32,SQ8" and the search depth is 64.
We follow the learning rate schedule, dropout, and label smoothing settings described in Vaswani et al. (2017). We use Adam optimizer (Kingma and Ba, 2015) and train models with up to 100K
steps throughout all experiments. When fine-tuning models with the proposed weighted ensemble method, we only need to use 90% valid pairs to finetune with up to 2K steps and use the remaining 10%
valid pairs to choose the checkpoint for testing.
Table 7 provides the number of parameters, training budget, and hyperparameters of each model. All experiments were performed on 8 V100 GPUs. We report the result of a single run for each experiment.
Packages Table 8 shows the packages we used for preprocessing, model training and evaluation.
## E Details About Data Statistics
In this paper, we use the JRC-Acquis corpus
(Steinberger et al., 2006) and the re-split version of the Multi-Domain data set in Aharoni and Goldberg
(2020) for our experiments.
JRC-Acquis For the JRC-Acquis corpus, it contains the total body of European Union (EU)
law applicable to the EU member states. This corpus was also used by Gu et al. (2018);
Zhang et al. (2018); Xia et al. (2019); Cai et al. (2021) and we managed to get the datasets originally preprocessed by Gu et al. (2018). Specifically, we select four translation directions, namely, Spanish⇒English (Es⇒En), En⇒Es, German⇒English (De⇒En), and En⇒De, for evaluation. Table 9 shows the detailed number of train/dev/test pairs for each language pair.
![14_image_0.png](14_image_0.png)
Table 9: Data statistics for the JRC-Acquis corpus.
Multi-Domain Data Set For the Multi-Domain data set, it includes German-English parallel data in five domains: Medical, Law, IT, Koran, and Subtitle. Table 10 shows the detailed number of train/dev/test pairs for each domain.
| Dataset | #Train Pairs | #Dev Pairs | #Test Pairs |
|-----------|----------------|--------------|---------------|
| Medical | 245,553 | 2,000 | 2,000 |
| Law | 459,721 | 2,000 | 2,000 |
| IT | 220,241 | 2,000 | 2,000 |
| Koran | 17,833 | 2,000 | 2,000 |
| Subtitle | 499,969 | 2,000 | 2,000 |
Table 10: Data statistics for the multi-domain dataset.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section Limitations.
✓ A2. Did you discuss any potential risks of your work?
Section Limitations and Section Ethics Statement.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 1, 3, 4, 5.
✓ B1. Did you cite the creators of artifacts you used?
Section 1, 3, 4, 5 and Appendix B, C, D, E.
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section 1, 3, 4, 5 and Appendix B, C, D, E.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 1, 3, 4, 5 and Appendix B, C, D, E.
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Section Ethics Statement.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 1, 3, 4, 5 and Appendix B, C, D, E.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 5 and Appendix E.
## C ✓ **Did You Run Computational Experiments?** Section 1, 3, 4, 5.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix D.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 1, 3, 4, 5 and Appendix D.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 1, 3, 4, 5.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 1, 3, 4, 5 and Appendix D.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
wang-etal-2023-controlling | Controlling Styles in Neural Machine Translation with Activation Prompt | https://aclanthology.org/2023.findings-acl.163 | Controlling styles in neural machine translation (NMT) has attracted wide attention, as it is crucial for enhancing user experience. Earlier studies on this topic typically concentrate on regulating the level of formality and achieve some progress in this area. However, they still encounter two major challenges. The first is the difficulty in style evaluation. The style comprises various aspects such as lexis, syntax, and others that provide abundant information. Nevertheless, only formality has been thoroughly investigated. The second challenge involves excessive dependence on incremental adjustments, particularly when new styles are necessary. To address both challenges, this paper presents a new benchmark and approach. A multiway stylized machine translation (MSMT) benchmark is introduced, incorporating diverse categories of styles across four linguistic domains. Then, we propose a method named style activation prompt (StyleAP) by retrieving prompts from stylized monolingual corpus, which does not require extra fine-tuning. Experiments show that StyleAP could effectively control the style of translation and achieve remarkable performance. | # Controlling Styles In Neural Machine Translation With Activation Prompt
Yifan Wang1,2∗
, Zewei Sun2, Shanbo Cheng2, Weiguo Zheng1**, Mingxuan Wang**2 1 Fudan University, 2 ByteDance [email protected], [email protected]
{sunzewei.v,chengshanbo,wangmingxuan.89}@bytedance.com
## Abstract
Controlling styles in neural machine translation (NMT) has attracted wide attention, as it is crucial for enhancing user experience.
Earlier studies on this topic typically concentrate on regulating the level of formality and achieve some progress in this area. However, they still encounter two major challenges.
The first is the difficulty in style evaluation.
The style comprises various aspects such as lexis, syntax, and others that provide abundant information. Nevertheless, only formality has been thoroughly investigated. The second challenge involves excessive dependence on incremental adjustments, particularly when new styles are necessary. To address both challenges, this paper presents a new benchmark and approach. A multiway stylized machine translation (**MSMT**) benchmark is introduced, incorporating diverse categories of styles across four linguistic domains. Then, we propose a method named **style a**ctivation prompt (**StyleAP**) by retrieving prompts from stylized monolingual corpus, which does not require extra fine-tuning. Experiments show that StyleAP could effectively control the style of translation and achieve remarkable performance.
## 1 Introduction
Natural language texts can be written in various styles while preserving the content, such as politeness, formal, classical, and many others (Hovy, 1987; Jing et al., 2020; Fu et al., 2018). Styles are crucial for communication since every sentence should fit a specific scenario and the appropriate style makes it more user-centric. A speaker needs to switch the styles of words to adapt to different conditions. Using the inappropriate style can be impolite or ridiculous in some societies and result in serious cultural conflicts (Nida and Taber, 2021).
![0_image_0.png](0_image_0.png)
As a cross-lingual generation problem, machine translation performance heavily relies on the appropriate style. Therefore, many commercial translation systems provide multiple style choices, such as Portuguese (European vs. Brazilian) and English (American vs. British) in DeepL1, Korean
(Honorific vs. Non-honorific) in Papago2, Chinese (Modern vs. Classical) in Volctrans3.
Recently, controlling style in machine translation has also drawn much attention in the academic community (Yamagishi et al., 2016; Michel and Neubig, 2018; Feely et al., 2019). Formally, stylized machine translation refers to translating the source sentence into different styles with certain attributes while the translation quality remains satisfactory, as the cases showed in Figure 1. Many previous studies have explored the task and gained promising results (Sennrich et al., 2016; Rabinovich et al.,
2017; Wang et al., 2021). However, challenge still remains in two aspects.
The first challenge is about the benchmark. The style of natural languages consists of many aspects like word preference and grammar structure. However, the well-studied benchmark datasets mainly focus on the formality and politeness of European languages. Due to this limitation, previous work re-1https://www.deepl.com 2https://papago.naver.com 3https://translate.volcengine.com
∗*Work done while Y. Wang was an intern at ByteDance.
stricts styles to a relative narrow scene. In addition, most of the test sets of the previous work have only one reference rather than multiple stylized references, which hinders the automatic evaluation for different styles. As such, a benchmark involving more diverse styles, multiple stylized references and beyond European languages is greatly needed.
The second challenge is about the iterative training framework. Most related work heavily relies on fine-tuning with new stylized data (Sennrich et al., 2016; Wang et al., 2021). Basically, they collect stylized bilingual texts and append tags before the sentence, then conduct fine-tuning to adapt the model to the given style (Sennrich et al., 2016).
However, parallel data in specific styles is pretty sparse and costly to gather. Furthermore, in this way, we have to re-tune the model every time we want to add new styles, which is inconvenient.
Correspondingly, this paper contributes in terms of both benchmark and approach:
For the benchmark, we re-visit this task and push the boundary of styles to a wider range of language phenomena. We propose a dataset **MSMT**, including four directions with diverse language styles.
We collect related public corpus as training sets and provide newly labeled sentences as test sets.
Each source sentence has two references in different styles, which is convenient for automatic evaluation. By broadening the category and providing standard datasets, we hope to effectively push the development of this field.
For the approach, we propose **style a**ctivation prompt (**StyleAP**), a method to avoid re-tuning time after time. The main idea is to extract one sentence of the target style as a prompt to guide the main sentence translation style. The intuition is straightforward. We assume that once the model has been trained on all kinds of data with various styles, it has the potential to generate any style as far as correctly activated. We can activate the ability by language model since it tends to maintain the sequence consistency (Sun et al., 2022b). And the prompt can be easily retrieved in a specific stylized monolingual corpus. In a word, we can obtain a "plug-and-play" model for any new generation style with mere stylized monolingual data instead of iterative fine-tuning. The experiments show that our approach achieves explicit style transformation while well maintaining the text semantics.
## 2 Related Work 2.1 Style Transfer For Machine Translation
Existing studies on style transfer mainly focus upon formality (Feely et al., 2019; Wu et al., 2021). They can be roughly divided into two groups: supervised methods and unsupervised methods. Sennrich et al.
(2016) propose side constraints to control politeness and shows that substantial improvements can be made by limiting translation to the required level of politeness. Niu et al. (2017) propose a FormalitySensitive Machine Translation (FSMT) scenario where lexical formality models are used to control the formality level of NMT product. Since the parallel sentences are of unknown formality level, some work focus on the unsupervised way. Niu and Carpuat (2020) introduce Online Style Inference
(OSI) to generate labels via a pre-trained FSMT
model. Feely et al. (2019) use heuristics to identify honorific verb forms to classify the unlabeled parallel sentences into three groups of different level formality. Wang et al. (2021) propose to use source token, embedding and output bias to control different styles and achieve a remarkable performance.
Wu et al. (2020b) propose a machine translation formality corpus. Diverse translation is also related to this work (Sun et al., 2020; Wu et al., 2020a).
## 2.2 Adaptive Via In-Context Learning
Recent work shows that prompting the large language models (LMs) like GPT-3 (Brown et al., 2020) with a few examples in the context can further leverage the inductive bias of LMs to solve different NLP tasks (Wang et al., 2022). This part of work shows the adaptive ability of LMs learned from analogy. Our work is inspired by it, but we work under the iterative training situation where the supervised data is pretty sparse.
As prompts play a vital role in generic in-context learning, recent work propose different prompting strategies. Ben-David et al. (2021); Sun et al.
(2022a) select the representative keywords of the field for domain adaptation. Zhu et al. (2022)
capture keywords of images as prompts for multimodal translation. Hambardzumyan et al. (2021)
put special tokens into the input and use continuous embeddings as prompts and Li and Liang
(2021) directly optimize prompts in the continuous space. Besides, there is a research direction focusing on retrieval. These methods use two main representations for generating demonstrations. As for the sparse representations, they focus on a rulebased score such as Okapi BM25 (Robertson and Zaragoza, 2009) for retrieval. Wang et al. (2022)
use this method to improve model performance on four NLP tasks. The dense representations are generated by the pre-trained autoencoder model and have higher recall performance on most NLP tasks such as machine translation (Cai et al., 2021). For the sake of accuracy and storage, we use dense representations for retrieval in this paper.
## 3 Task Definition & Msmt: A Multiway Stylized Translation Benchmark
Stylized machine translation refers to the translations with certain language characteristics or styles on the basis of ensuring the quality of translation.
Based upon the definition, we construct a stylized machine translation benchmark including four language directions. In each language direction, we give the illustration of various styles and provide corresponding training and test sets.
Different from traditional stylized machine translation studies, each group of our test sets is consist of one single source and multiple references in parallel. For example, for English-to-Chinese direction, for each English source sentence, we have two parallel Chinese references: classical style and modern style. In this way, we can automatically evaluate the style transformation by measuring the similarity between the stylized hypothesis and stylized references.
All the data has been publicly released and the detailed number is in Table 1. In this section, we will introduce our benchmark construction.
## 3.1 English-To-Chinese Translation
There are two common styles for Chinese: *Classical* and *Modern*. Classical Chinese originated from thousands of years ago and was used in ancient China. Modern Chinese is the normal Chinese that is commonly used currently.
The former is adopted on especially solemn and elegant occasions while the latter is used in daily life. They vary in many aspects like lexis and syntax so can be regarded as two different styles. In this direction, we aim at translating texts from English to Chinese in both styles. Specific data usage is as follows:
- **Basic Parallel Data:** Cleaned WMT2021 corpus plus the back translation of the subset of an open source corpus containing classical Chinese and modern Chinese 4.
- **Stylized Monolingual Data:** The open source corpus containing classical Chinese and modern Chinese and the Chinese part of WMT2021.
- **Development Set:** Newstest2019.
- **Test Set:** English-Classical-Modern triplet parallel data annotated by language experts.
## 3.2 Chinese-To-English Translation
There are two common styles for English: Early Modern and *Modern*. Early Modern English in this paper refers to English used in the Renaissance such as Shakespearean plays. Modern English is the normal English used currently.
The former one is mostly seen in Shakespearean play scripts like Hamlet while the latter one is used in the daily life. They vary in many aspects like grammatical constructions such as two second person forms, thou and you. Therefore, they can be regarded as two styles. In this direction, we aim at translating texts from Chinese to English in both styles. Specific data usage is as follows:
- **Basic Parallel Data:** Cleaned WMT2021 corpus plus the back translation of a crawled corpus: The Complete Works of William Shakespeare 5.
- **Stylized Monolingual Data:** An open source dataset6containing early modern and modern English and the English part of WMT2021.
- **Development Set:** Newstest2019.
- **Test Set:** Chinese-Early-Modern triplet parallel data annotated by language experts.
## 3.3 English-To-Korean Translation
There are seven verb paradigms or levels of verbs in Korean, each with its own unique set of verb endings used to denote the formality of a situation.
We simplify the classification and roughly divide them into two groups: *Honorific* and *Non-honorific* The former is used to indicate the hierarchical relationship with the addressee such as from the young to the old, from the junior to the senior. The latter is used in daily conversations between friends.
They vary in some lexical rules so can be regarded
| en→zh | zh→en | en→ko | en→pt | | | | | |
|-------------|---------|-----------|---------|-------|-----------|----------|----------|-----------|
| Styles | Modern | Classical | Modern | Early | Honorific | Non-hono | European | Brazilian |
| Monolingual | 22M | 967K | 22M | 83.2K | 20.5K | 20.5K | 168K | 234K |
| Parallel | 9.12M | 9.11M | 271K | 412K | | | | |
![3_image_0.png](3_image_0.png)
![3_image_1.png](3_image_1.png)
as two styles. In this direction, we aim at translating texts from English to Korean in both styles.
Specific data usage is as follows:
- **Basic Parallel Data:** IWSLT2017 7 plus the back translation of an open source dataset 8containing honorific and non-honorific.
- **Stylized Monolingual Data:** The open source dataset and the crawled corpus from a public translation tool9.
## - **Development Set:** Iwslt17.
- **Test Set:** English-Honorific-Non-honorific triplet parallel data annotated by language experts language experts.
## 3.4 English-To-Portuguese Translation
There are two common styles for Portuguese: *European* and *Brazilian*. European Portuguese is mostly used in Portugal. Brazilian Portuguese is mostly used in Brazil.
They vary in some detailed aspects like pronunciation, grammar and spelling, so can be regarded as two different styles. In this direction, we aim at translating texts from English to Portuguese in both styles. Specific data usage is as follows:
## - **Basic Parallel Data:** Iwslt2017.
- **Stylized Monolingual Data:** European & Brazilian part of the parallel data.
## - **Development Set:** Iwslt17.
- **Test Set:** English-European-Brazilian triplet parallel data annotated by language experts.
## 3.5 Evaluation
Previous style evaluation relies on human resources, which is costly and slow. Since our test sets are all multiway, we can evaluate our stylized hypothesis with the corresponding reference to take both quality and style into consideration at a small 7https://wit3.fbk.eu/
8https://github.com/ezez-refer/
Korean-Honorific-Translation 9https://papago.naver.com/
cost. Moreover, human evaluation is inevitably subjective while our test sets can guarantee the comparison stability.
## 4 Style Activation Prompt
Prior work usually uses the fixed tag to control the generation with expected attributes (Sennrich et al., 2016). However, tag-based methods rely on a large amount of labeled parallel data, requiring relabeling and retraining of models when new styles need to be generated.
We go back to a standard NMT model. During the generation process, the NMT model tries to maximize the probability of the generation sentence. When predicting the i-th token, the model searches from the vocabulary to maximize:
## P(Yi|X, Yj<I)
where yj<i means the past words, indicating that the previous inference results can affect the subsequent generation. Therefore, our intuition is to control the generation style by taking advantage of stylized language model. We suggest that once the basic model has been trained on the data in various kinds of styles, we can activate the ability with the contextual influence.
Specifically, we retrieve an instance as a prompt from the stylized corpus and use it to instruct the NMT model to generate the sentence with the same attributes. To adapt to the prompt training, we extract every sentence in the basic parallel data and retrieve one similar sentence as the prompt. The whole framework is in Figure 2. We introduce the details of our proposed method as follows.
## 4.1 Prompt Retrieval
The prompt retrieval procedure aims at finding the proper sentence prompt. First, we construct a candidate datastore D that contains many (*r, y*) keyvalue pairs, where r is the representation of y. In this paper, we use a multilingual pre-trained language model XLM-R (Conneau et al., 2019) to obtain the sentence representation. By calculating
![4_image_0.png](4_image_0.png)
the similarity between the query representation h and keys, we can extract the needed sentence y:
$$y=\arg\operatorname*{min}_{r\in{\mathcal{D}}}D i s t a n c e(h,r)$$
r∈D
where the search tool is Faiss(Johnson et al., 2021),
a library for efficient similarity search and clustering of dense vectors.
## 4.2 Training
In the training stage, the goal is to retrieve a similar prompt with the current sentence to make the model adapt to the inference pattern. Specifically, we iterate each target-side sentence in the basic parallel data as a query and retrieve the most similar sentence as its prompt.
After obtaining the prompt, we concatenate the prompt and the query sentence by a special token as:
$$p r o m p t,[s],s r c\to p r o m p t,[s],t r g$$
We train the model with this kind of data and normal data together to learn the prompt-based generation as well as basic translation.
## 4.3 Inference
In the inference stage, we first translate the source sentence roughly. Then the draft hypothesis is used as the query. The candidate datastore is constructed with the monolingual data in the given style. After retrieving the prompt, we append it to the beginning of the source sentence with the special token.
After the second inference, the hypothesis can be obtained by splitting the token.
## 4.4 Advantages
We conclude the advantages of StyleAP as follows:
- StyleAP does not need any architecture modification and is easy to deploy.
- StyleAP does not need to assign various tags to all kinds of styles.
- Extra tuning is no longer needed when it comes to a new style. We only need to retrieve the prompt from the new monolingual stylized corpus and then generate the given style.
## 5 Experiments
In this section, we will introduce the details of our experiments.
## 5.1 Setup
We first compare our method with other baseline models on four tasks. Then, we design a manual evaluation to assess whether our method maintains translation quality and achieves diversity. All experiments are implemented in the following settings.
## 5.1.1 Data & Preprocessing
In the previous section, we introduce our provided stylized NMT benchmark MSMT. Our designed experiments and analysis are based upon this benchmark. The statistics information of this benchmark is shown in the Table 1.
We use SentencePiece(Kudo and Richardson, 2018) to jointly learn an unsupervised tokenizer.
We preprocess the training data and filter the parallel sentences with length greater than 256. We set
| en→ zh | zh→en | en→ko | en→pt | | | | | | |
|------------|---------|-----------|---------|-------|-----------|----------|----------|-----------|---------|
| Styles | Modern | Classical | Modern | Early | Honorific | Non-hono | European | Brazilian | Average |
| Baseline | 25.00 | 13.86 | 26.73 | 14.28 | 20.65 | 17.48 | 31.30 | 32.86 | 22.77 |
| Transfer | 24.87 | 20.88 | 11.05 | 7.46 | <5 | <5 | 32.84 | 32.59 | <20 |
| Tag-tuning | 28.43 | 21.21 | 27.16 | 14.48 | 21.05 | 21.11 | 33.67 | 33.84 | 25.11 |
| StyleAP | 29.73 | 24.98 | 26.76 | 17.72 | 21.65 | 20.67 | 33.82 | 34.27 | 26.20 |
hyper-parameters min frequency 5 and max vocabulary 32k.
## 5.1.2 Implementation Details
Here, we introduce more details of our experiment settings. Our experiment is implemented on the open source Seq2Seq tool Neurst10 (Zhao et al.,
2021). Our seq2seq model uses a transformer-base structure with 6 encoder layers and 6 decoder layers, attention with a layer size of 512, and word representations of size 512. We apply post-layer normalization (Ba et al., 2016), adding dropout to embeddings and attention layers with a dropout rate of 0.1. We tie the source and target embeddings. The main training parameters are as follows.
We use Adam Optimizer with β1 = 0.9 and β2 =
0.98. We use label smoothed cross entropy as a criterion with a label smoothing rate of 0.1. We set batch size per GPU 4096 and batch by tokens. And, we use four A100 GPUs to train our model from scratch. We save checkpoints every 1000 steps and stop training when there is no improvement in the continuous 50 checkpoints.
## 5.1.3 Comparing Systems
We use sacreBLEU (Post, 2018) as our metrics and compare StyleAP with three common systems:
- **Baseline:** Transformer that is trained on the raw parallel data.
- **Transfer:** A two-phases processing: Translate first and conduct style transfer (Syed et al.,
2020). We train the translation model with normal parallel data and train the transfer model with stylized data.
- **Tag-tuning:** A tag-based model which is generally used in other work (Sennrich et al.,
2016). They add a special token as the tag at the start of the source text with the known style. In this way, the model can generate different styles with different tokens. This method needs explicit extra fine-tuning.
10https://github.com/bytedance/neurst
en→zh zh→en en→ko en→pt
Styles M C M E H N E B
Base 3.5 3.6 **3.6 3.7** 1.9 1.9 2.7 2.8
StyleAP **3.6 3.6** 3.5 3.6 2.0 2.0 **2.9 2.9**
Table 3: The quality of all systems, ranging from 0 to 4.
StyleAP maintains a comparable quality even the style is transformed.
## 5.2 Results
As is shown in Table 2, we calculate the BLEU
score on the test set to compare StyleAP with the mentioned baselines. The Transfer methods have many drawbacks. Not only does it need two-phases training, but also yields poor results. The attempt in English-to-Korean direction even fails. Tag-tuning gains some improvements and even achieves the best performance in some directions. But overall, StyleAP obtains the best results and outperform the other methods. At the same time, StyleAP needs no extra tag or extra tuning when it comes to new styles. After acquiring the ability to translate with style prompt, StyleAP can handle various styles.
## 5.3 Human Evaluation
We also design a human evaluation to manually check the style transfer ratio as well as the translation quality preservation during the transfer. The quality score ranges from 0 to 4. The style transfer ratio means the percentage of the hypothesis that meets the required style, ranging from 0 to 100. Refer to the appendix for the specific scoring criteria and rules.
## 5.3.1 Quality Preservation
The quality results are in Table 3. StyleAP achieves a comparable performance with the baseline model, which means little semantic loss within the stylized translation.
## 5.3.2 Style Transfer Ratio
As is shown in Figure 3, StyleAP significantly enhances the transfer ratio. The only unsatisfactory is the Chinese-to-English Translation in the Early
![6_image_0.png](6_image_0.png)
Modern style. The reason is that many sentences in the test set are very short like "What about that?"
vs "What of that?". The styles for this kind of extremely short sentences are not meaningful.
## 5.3.3 Conclusion
The quality scores are comparable with the baseline model, while the transferred ratios are much higher than the baseline model in the four tasks. That indicates that our method can effectively translate the source text into a sentence with specific attributes without quality loss.
## 6 Analysis 6.1 Retrieval Strategy Matters
There are many retrieval methods to select a similar sentence from stylized monolingual sentences.
We conduct a detailed comparison in English-toChinese inference with the following strategies:
- **Source:** Directly use the source text representation generated from the pre-trained multilingual language model.
- **Random:** Randomly choose a prompt from candidates.
- **Fixed:** Use the same prompt for all samples.
The results are shown in Table 4. We can see that the our strategy performs the best and the other retrieval methods face different levels of the BLEU loss. The retrieval strategy plays an important role in the translation.
| Modern | Classical | |
|----------|-------------|-------|
| StyleAP | 29.73 | 24.98 |
| -Source | 25.51 | 18.07 |
| -Random | 24.72 | 15.65 |
| -Fixed | 24.52 | 13.75 |
| Modern | Classical | |
|-------------|-------------|-------|
| StyleAP | 29.73 | 24.98 |
| StyleAP (U) | 28.72 | 23.76 |
| Tag-tuning | 28.43 | 21.21 |
## 6.2 Even Unsupervised Prompts Works
In the training phase, we assume that the retrieval range lies within the specific styles. However, one condition that is more close to the real world is that we need to retrieve prompts from more general data, which may cause the style mismatch between the sentence and the prompt. Therefore, we also conduct a unsupervised prompt retrieval in training for English-to-Chinese direction.
As is shown in Table 5, the unsupervised version of StyleAP slightly drops in terms of BLEU but still outperforms Tag-tuning. It is worth mentioning that we have none of the style label of parallel data in this setting. General parallel data and monolingual stylized data is all you need. This again shows the universality and robustness of StyleAP.
## 6.3 Consistent Performance Across Sizes
We are also interested in the situation of unbalanced data and even few stylized data. We implement a comparative experiment of stylized monolingual data on the en→zh task. We control the amount of labeled data at four levels: 1 million, 100 thousand, 10 thousand, and 1 thousand. For the tag-based method, we train the tag-based model from scratch at each level. For a fair comparison, we use the same stylized labeled data as the tag-based method but only as the target monolingual part for retrieval.
The results are shown in Figure 4 , where the horizontal axis represents the sample size, and the vertical axis represents the BLEU score. For the classical direction, our method performs better in all situations. Even when we only use 1,000 labeled stylized monolingual sentences, there is still an
| Source | 现在, | 亲爱的奶妈, | 哦上帝, | 你 | 为什么 | 看起来 | 这么伤心? |
|------------------------------------|---------------------------------------------------------|----------------|------------|------|----------|----------|--------------|
| (Now) (good sweet Nurse) (Oh Lord) | (you) (why) | (look) | (so sad) | | | | |
| Ref (E) | Now, good sweet Nurse, O Lord, why look'st thou sad? | | | | | | |
| Baseline | Now, good sweet Nurse, Oh Lord, why do you look so sad? | | | | | | |
| Prompt | What say'st thou, my dear nurse? | | | | | | |
| Tagged | Now, dear nurse, O God, why look you so sad? | | | | | | |
| StyleAP | Now, sweet nurse, O God, why dost thou look so sad? | | | | | | |
![7_image_1.png](7_image_1.png)
![7_image_0.png](7_image_0.png)
improvement compared with the baseline model. On the contrary, the tag-based method performs poorly in few data and even has a lower BLEU
score than the baseline model.
In conclusion, our method has an overall better performance than the tag-based method at different levels. Especially for extremely few samples, our method still gains significant improvements.
## 6.4 Attention Score Interprets The Effect
We are also interested in how the retrieved prompt affects the translation style. Here is an example of Chinese-to-English task in Figure 5. The model is translating a Chinese sentence meaning "Yeah, you have given me great comfort." into English and the next token is "thou", which means "you" in early modern English.
We show the average attention scores in the Transformer decoder, left for self-attention and right for cross-attention. For self-attention, except for some adjacent tokens, the model mainly pays attention to the token "thou" in the prompt which corresponds to the generating token. For the cross-attention, the model concentrates on the corresponding Chinese token "Ni" (means "you" in Chinese) and the token "thou" in the prompt again.
This result suggests that our retrieval prompt could affect the generation process through the attention mechanism.
## 6.5 Case Study
Finally, one stylized Chinese-to-English translation example is listed in Table 6 to show the effectiveness of our method more intuitively. As in these examples, the Chinese word "Ni" ("you" in English) is translated into "you" in Baseline and the Tag-tuning method. However, under the guidance of the prompt which uses the early modern English word "thou", StyleAP translates the word into
"thou" correspondingly. Obviously, StyleAP can explicitly affect the translation style with prompts.
## 7 Conclusions
In cross-lingual generation fields, most studies focus on the translation quality but ignore the style issue, which happens to be important in the realworld communication. However, the previous studies face two major challenges including the benchmark as well as the approach. For those purposes, we re-visit this task and propose a standard stylized NMT benchmark MSMT with four well-defined tasks to push the boundary of this field. We also propose a new translation style controlling method with activation prompt. With stylized prompts that are retrieved from the stylized monolingual corpus, we successfully guide the translation generation style without iterative fine-tuning. Through automatic evaluation and human evaluation, our method achieves a remarkable improvement over baselines and other methods. A series of analysis also show the advantages of our method.
## Limitation
One limitation of StyleAP is that one extra inference is needed for retrieval. It is mainly due to the monolingual retrieval accuracy is higher than that of crosslingual retrieval (refer to Section 6.1). In the future, we will try stronger multilingual model to mitigate this effect.
## Acknowledgments
We would like to thank the anonymous reviewers for their constructive comments. Weiguo Zheng is the corresponding author.
## References
Lei Jimmy Ba, Jamie Ryan Kiros, and Geoffrey E.
Hinton. 2016. Layer normalization. *CoRR*,
abs/1607.06450.
Eyal Ben-David, Nadav Oved, and Roi Reichart.
2021. PADA: A prompt-based autoregressive approach for adaptation to unseen domains. *CoRR*,
abs/2102.12206.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.
2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems 33:*
Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Deng Cai, Yan Wang, Huayang Li, Wai Lam, and Lemao Liu. 2021. Neural machine translation with monolingual translation memory. In *Proceedings*
of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 7307–7318. Association for Computational Linguistics.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. *CoRR*, abs/1911.02116.
Weston Feely, Eva Hasler, and Adrià de Gispert.
2019. Controlling japanese honorifics in englishto-japanese neural machine translation. In *Proceedings of the 6th Workshop on Asian Translation,*
WAT@EMNLP-IJCNLP 2019, Hong Kong, China, November 4, 2019, pages 45–53. Association for Computational Linguistics.
Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Exploration and evaluation. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence
(EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 663–670. AAAI Press.
Karen Hambardzumyan, Hrant Khachatrian, and Jonathan May. 2021. WARP: word-level adversarial reprogramming. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021,
(Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 4921–4933. Association for Computational Linguistics.
Eduard Hovy. 1987. Generating natural language under pragmatic constraints. *Journal of Pragmatics*,
11(6):689–719.
Yongcheng Jing, Yezhou Yang, Zunlei Feng, Jingwen Ye, Yizhou Yu, and Mingli Song. 2020. Neural style transfer: A review. *IEEE Trans. Vis. Comput. Graph.*, 26(11):3365–3385.
Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2021.
Billion-scale similarity search with gpus. IEEE
Trans. Big Data, 7(3):535–547.
Taku Kudo and John Richardson. 2018. Sentencepiece:
A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP
2018: System Demonstrations, Brussels, Belgium, October 31 - November 4, 2018, pages 66–71. Association for Computational Linguistics.
Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning:
Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 4582–
4597. Association for Computational Linguistics.
Paul Michel and Graham Neubig. 2018. Extreme adaptation for personalized neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 2:
Short Papers, pages 312–318. Association for Computational Linguistics.
Eugene Nida and Charles Taber. 2021. The theory and practice of translation:(fourth impression). In The Theory and Practice of Translation. Brill.
Xing Niu and Marine Carpuat. 2020. Controlling neural machine translation formality with synthetic supervision. In *The Thirty-Fourth AAAI Conference on* Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI
2020, New York, NY, USA, February 7-12, 2020, pages 8568–8575. AAAI Press.
Xing Niu, Marianna J. Martindale, and Marine Carpuat.
2017. A study of style in machine translation: Controlling the formality of machine translation output.
In *Proceedings of the 2017 Conference on Empirical* Methods in Natural Language Processing, EMNLP
2017, Copenhagen, Denmark, September 9-11, 2017, pages 2814–2819. Association for Computational Linguistics.
Matt Post. 2018. A call for clarity in reporting BLEU
scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, WMT 2018, Belgium, Brussels, October 31 - November 1, 2018, pages 186–191. Association for Computational Linguistics.
Ella Rabinovich, Raj Nath Patel, Shachar Mirkin, Lucia Specia, and Shuly Wintner. 2017. Personalized machine translation: Preserving original author traits.
In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 37, 2017, Volume 1: Long Papers, pages 1074–1084.
Association for Computational Linguistics.
Stephen E. Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and beyond. *Found. Trends Inf. Retr.*, 3(4):333–389.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016. Controlling politeness in neural machine translation via side constraints. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California,
USA, June 12-17, 2016, pages 35–40. The Association for Computational Linguistics.
Zewei Sun, Shujian Huang, Hao-Ran Wei, Xinyu Dai, and Jiajun Chen. 2020. Generating diverse translation by manipulating multi-head attention. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI
2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8976–
8983. AAAI Press.
Zewei Sun, Qingnan Jiang, Shujian Huang, Jun Cao, Shanbo Cheng, and Mingxuan Wang. 2022a. Zeroshot domain adaptation for neural machine translation with retrieved phrase-level prompts. *CoRR*,
abs/2209.11409.
Zewei Sun, Mingxuan Wang, Hao Zhou, Chengqi Zhao, Shujian Huang, Jiajun Chen, and Lei Li. 2022b. Rethinking document-level neural machine translation.
In Findings of the Association for Computational Linguistics: ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 3537–3548. Association for Computational Linguistics.
Bakhtiyar Syed, Gaurav Verma, Balaji Vasan Srinivasan, Anandhavelu Natarajan, and Vasudeva Varma. 2020.
Adapting language models for non-parallel authorstylized rewriting. In *The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The* Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI
Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 9008–9015. AAAI Press.
Shuohang Wang, Yichong Xu, Yuwei Fang, Yang Liu, Siqi Sun, Ruochen Xu, Chenguang Zhu, and Michael Zeng. 2022. Training data is more valuable than you think: A simple and effective method by retrieving from training data. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 3170–3179. Association for Computational Linguistics.
Yue Wang, Cuong Hoang, and Marcello Federico. 2021.
Towards modeling the style of translators in neural machine translation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 1193–1199. Association for Computational Linguistics.
Xuanfu Wu, Yang Feng, and Chenze Shao. 2020a. Generating diverse translation from model distribution with dropout. In *Proceedings of the 2020 Conference* on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 1088–1097. Association for Computational Linguistics.
Xuanxuan Wu, Jian Liu, Xinjie Li, Jinan Xu, Yufeng Chen, Yujie Zhang, and Hui Huang. 2021. Improving stylized neural machine translation with iterative dual knowledge transfer. In *Proceedings of the Thirtieth* International Joint Conference on Artificial Intelligence, IJCAI 2021, Virtual Event / Montreal, Canada, 19-27 August 2021, pages 3971–3977. ijcai.org.
Yu Wu, Yunli Wang, and Shujie Liu. 2020b. A dataset for low-resource stylized sequence-to-sequence generation. In *The Thirty-Fourth AAAI Conference on* Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI
2020, New York, NY, USA, February 7-12, 2020, pages 9290–9297. AAAI Press.
Hayahide Yamagishi, Shin Kanouchi, Takayuki Sato, and Mamoru Komachi. 2016. Controlling the voice of a sentence in japanese-to-english neural machine translation. In Proceedings of the 3rd Workshop on Asian Translation, WAT@COLING 2016, Osaka, Japan, December 2016, pages 203–210. The COLING 2016 Organizing Committee.
Chengqi Zhao, Mingxuan Wang, Qianqian Dong, Rong Ye, and Lei Li. 2021. Neurst: Neural speech translation toolkit. In Proceedings of the Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL 2021 - System Demonstrations, Online, August 1-6, 2021, pages 55–62. Association for Computational Linguistics.
Yaoming Zhu, Zewei Sun, Shanbo Cheng, Yuyang Huang, Liwei Wu, and Mingxuan Wang. 2022. Beyond triplet: Leveraging the most data for multimodal machine translation. *CoRR*, abs/2212.10313.
## A Appendix
In this section, we supplement human evaluation criterion in this paper and more stylized translation cases. Table 7 fully illustrates the score standard of our language experts. Table 8 shows more English examples of stylized translation.
| Accuracy | Criterion | Description | | | |
|------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------|--------------|----|--------|
| 4 | The translation faithfully reflects the semantics and the translation is | There | is | no | errors |
| fluent. | and | no | modification | | |
| required. | | | | | |
| 3 | The translated text basically reflects the semantics of the original text and is basically fluent(the subject, predicate, object and other grammatical components are in correct order), but there are a few non-keywords that are improperly used or inappropriately matched, etc. There are slight mistakes, which will not affect the understanding of the original text such as improper use of words, punctuation, capitalization, irregular date format, etc. | | | | |
| Quality | The meaning is basically correct, but there are partial errors, which will cause certain difficulties in understanding. | | | | |
| 2 | The translation can reflect the semantics of the original text, the translation has one or more general errors, and the translation is basically fluent (the order of grammatical components such as subject, predicate and object is correct), but there are key words that express the semantics of improper translation, omission or mistranslation of non-keywords, etc. | The meaning is basically correct, but there are partial errors, which will cause certain difficulties in understanding. | | | |
| 1 | The translated text cannot reflect the semantics of the original text, and there are multiple serious translation errors in the translation text. One of the following situations exists: a) The translation text contains the main components of the original text, but fails to form a fluent composition due to sequence problems, logical errors, serious grammatical errors (including tenses), etc. The translation; b) The translation is basically fluent, but there are translation errors such as negation and double negation, serious omission of translation, mistranslation of keywords, and more translation of content that is not in the original text. | There are serious errors that have a greater impact on understanding. | | | |
| 0 | The translated text cannot express the meaning of the original text at all: a) The translation text is obscure and difficult to understand, and the content expressed in the original text cannot be judged by the translation text; b) A string of repeated words and garbled characters appear; totally different/completely unrelated; d) the entire sentence is not translated. | The translated text is almost completely wrong or completely incomprehensible. | | | |
| Style | 1 | The translation has the corresponding style. | | | |
| 0 | The translation has not the corresponding style. Table 7: Human evaluation criterion. | | | | |
| Modern English | Baseline | Retrieved Prompt | StyleAP | | | | |
|----------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------|--------------------------|------|----|-----|------------------------|
| I swear to you, You have a good heart, and believe me, I'll tell her that. | I tell thee, I, that thou hast | I swear to thee, thou art | | | | | |
| marred her gown. | kind, and believe me, I'll tell her. | | | | | | |
| Now I'll tell you so you | Now I tell you, so you need | To tell thee thou shalt see | Now I tell thee, so thou | | | | |
| don't have to ask. | not ask. | me at Philippi. | shalt not ask. | | | | |
| You're not paying attention | You did not notice me at | God | mark | thee | to | his | Thou dost not mark me. |
| to me. | all. | grace! | | | | | |
| I swear to you, you are kind, trust me, and I'll tell her. | | | | | | | |
| If you were ever yourself, and this sadness was yours, you and your sadness were all for Rosaline. | If thou werest thyself, This sorrow was thy, Thou and thy sorrow were all for Rosaline. | | | | | | |
| Therefore, | the | fact | that | | | | |
| you're awake this early tells me you've been upset with some anxiety. | If you were once yourself, This sorrow is yours, and both you and your sorrow are for Rosalin. | If e'er thou wast thyself and these woes thine, thou and these woes were all for Rosaline. | | | | | |
| Therefore | you | wake | so | | | | |
| early, which tells me you are uneasy about some anxiety. | Unless | thou | tell'st | me | | | |
| where thou hadst this ring, Thou diest within this hour. | Therefore, thou awaken'st so early, That tells me thou art uneasy with some anxiety. | | | | | | |
Table 8: Examples from test sets and the results of the baseline and StyleAP.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitation
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 3,4,5
✓ B1. Did you cite the creators of artifacts you used?
3,4,5
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
3,4,5
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
3,4,5
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
3,4,5
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
3,4,5
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
3,4,5
## C ✓ **Did You Run Computational Experiments?** 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
5 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
5
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
5,6
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
5 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
5
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
A
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
xu-etal-2023-focusing | Focusing, Bridging and Prompting for Few-shot Nested Named Entity Recognition | https://aclanthology.org/2023.findings-acl.164 | Few-shot named entity recognition (NER), identifying named entities with a small number of labeled data, has attracted much attention. Frequently, entities are nested within each other. However, most of the existing work on few-shot NER addresses flat entities instead of nested entities. To tackle nested NER in a few-shot setting, it is crucial to utilize the limited labeled data to mine unique features of nested entities, such as the relationship between inner and outer entities and contextual position information. Therefore, in this work, we propose a novel method based on focusing, bridging and prompting for few-shot nested NER without using source domain data. Both focusing and bridging components provide accurate candidate spans for the prompting component. The prompting component leverages the unique features of nested entities to classify spans based on soft prompts and contrastive learning. Experimental results show that the proposed approach achieves state-of-the-art performance consistently on the four benchmark datasets (ACE2004, ACE2005, GENIA and KBP2017) and outperforms several competing baseline models on F1-score by 9.33{\%} on ACE2004, 6.17{\%} on ACE2005, 9.40{\%} on GENIA and 5.12{\%} on KBP2017 on the 5-shot setting. |
## Focusing, Bridging And Prompting For Few-Shot Nested Named Entity Recognition
Yuanyuan Xu1 Zeng Yang1 Linhai Zhang1 Deyu Zhou1∗ Tiandeng Wu2 **Rong Zhou**2 1School of Computer Science and Engineering, Key Laboratory of Computer Network and Information Integration, Ministry of Education, Southeast University, China 2Huawei Technologies Co., Ltd., China
{yuanyuan-xu,yangzeng,lzhang472,d.zhou}@seu.edu.cn
{wutiandeng1,joe.zhourong}@huawei.com
## Abstract
Few-shot named entity recognition (NER),
identifying named entities with a small number of labeled data, has attracted much attention. Frequently, entities are nested within each other. However, most of the existing work on few-shot NER addresses flat entities instead of nested entities. To tackle nested NER in a fewshot setting, it is crucial to utilize the limited labeled data to mine unique features of nested entities, such as the relationship between inner and outer entities and contextual position information. Therefore, in this work, we propose a novel method based on focusing, bridging and prompting for few-shot nested NER without using source domain data. Both focusing and bridging components provide accurate candidate spans for the prompting component. The prompting component leverages the unique features of nested entities to classify spans based on soft prompts and contrastive learning. Experimental results show that the proposed approach achieves state-of-the-art performance consistently on the four benchmark datasets
(ACE2004, ACE2005, GENIA and KBP2017)
and outperforms several competing baseline models on F1-score by 9.33% on ACE2004, 6.17% on ACE2005, 9.40% on GENIA and 5.12% on KBP2017 on the 5-shot setting.
## 1 Introduction
Named entity recognition (NER), aiming at identifying the spans of text and classifying them into pre-defined entity categories, is a fundamental task in natural language processing (Yan et al., 2021).
NER serves as a crucial component for many downstream tasks such as information extraction, sentiment analysis and other NLP applications (Mao and Li, 2021; Peng et al., 2022).
Few-shot NER, focusing on named entity recognition with a small number of labeled data, has attracted much attention. Frequently, entities are
∗ Corresponding author.
Mouse interleukin-2 receptor alpha gene expression.
DNA
![0_image_0.png](0_image_0.png)
Figure 1: (a) An example sentence marked with nested entities in GENIA. (b) The percentages of the entities of Protein being nested with the entities of other categories in GENIA.
nested within each other as shown in Figure 1(a).
However, most of the existing work on few-shot NER addresses flat entities instead of nested entities. Approaches for few-shot flat NER can mainly be divided into three categories: sequence-labelingbased, generative-based and span-based methods.
Sequence-labeling-based mehtods treat NER as sequence labeling that assigns a tag for each token using the BIO or IO tagging scheme (Ma et al., 2022b; Huang et al., 2022b; Das et al., 2022). Generativebased methods autoregressively generate the entity types or the pointer index sequence directly (Cui et al., 2021; Hou et al., 2022; Chen et al., 2022). Span-based methods enumerate text spans in the input text and classify each span based on its corresponding template score (Yang et al., 2022), or the similarity between the span representation and the anchor (Wang et al., 2022a; Ma et al., 2022c; Wang et al., 2022b; Ji et al., 2022).
Directly applying current few-shot flat NER
methods to nested named entities suffers from some weaknesses. For sequence-labeling-based methods, extra strategies such as layering and concatenating the nested entity's multiple labels into one label (Straková et al., 2019; Wang et al., 2020)
are needed. Such adaptation lacks flexibility and makes the already scarce supervision signal even more sparse. Generative-based methods can directly handle nested entities. However, due to the auto-regressive generation manner, the optimization objective is not consistent with the NER task, resulting in some biases learned by the model during the training process (Zhang et al., 2022). In addition, such biases are more difficult to eliminate with limited labeled data.
By enumerating all the text spans, span-based nested NER can be converted into flat NER, which seems promising. However, such adaptation faces two challenges. First, it is crucial to utilize the relationship between inner and outer entities in nested NER, which is usually ignored in the previous work. Some types of entities in medical-related datasets are prone to be nested. As shown in Figure 1 (b), in GENIA, the frequencies of the entities of Protein type being nested with the entities of DNA type are nearly five times higher than that with the entities of RNA type. Secondly, the same mention may have different types in polysemy scenarios. Therefore, it is necessary to capture local features and precisely model contextual information.
To address the issues mentioned above, we propose a novel span-based method based on Focusing, brIdging and prompTing (FIT) for few-shot nested NER without using source domain data.
In the focusing stage, inspired by the IO tagging scheme of sequence-labeling-based methods, each token is tagged whether a part of an entity or not.
Then entity-concentrated parts can be obtained by concatenating **continuous** tokens marked with the I-tag. In the bridging stage, for each entityconcentrated part, all spans obtained by enumerating are chosen as candidate spans and filtered according to the boundary score of each candidate span. The bridging stage acts as a bridge connecting the flat entity-concentrated parts with nested NER since nested entities can be obtained by enumerating. In the prompting stage, to make use of the relationship information between nested entities and contextual position information, adversarial prompt-based span classification is proposed. The soft prompts directly before and after the span are inserted to make full use of the contextual position information near the span for classification. Moreover, contrastive learning is employed to shorten the distance between sentence representations to reduce the interference caused by soft prompts. In this way, we preserve the potential connections between nested entities.
Our main contributions are as follows:
- A novel span-based method based on Focusing, brIdging and prompTing (FIT) for few-
shot nested NER is proposed. To the best of our knowledge, we are the first to tackle fewshot nested NER without using source domain data.
- To make use of the relationship information between nested entities and contextual position information, adversarial prompt-based span classification is proposed.
- Experimental results show that FIT achieves state-of-the-art performance consistently on the four benchmark datasets (ACE2004, ACE2005, GENIA and KBP2017) and outperforms several competing baseline models on F1-score by 9.33% on ACE2004, 6.17%
on ACE2005, 9.40% on GENIA and 5.12%
on KBP2017 on 5-shot setting.
## 2 Related Work 2.1 Nested Ner
Most of the existing nested NER methods focus on the fully supervised learning paradigm. There are sequence-labeling-based methods (Straková et al.,
2019; Wang et al., 2020), generative-based methods (Yan et al., 2021; Tan et al., 2021), span-based methods (Shen et al., 2021; Yuan et al., 2022; Huang et al., 2022a), anchor-based methods Lin et al. (2019) and so on. There are also methods based on hyper-graph, which adopt the hyper-graph to represent all possible nested structures in a sentence (Katiyar and Cardie, 2018; Wang and Lu, 2018). However, these supervised nested NER methods rely on plenty of labeled data to work, which is not suitable for the few-shot setting.
## 2.2 Few-Shot Ner
In recent years, several methods have been proposed to solve the few-shot flat NER task, mainly including sequence-labeling-based (Huang et al.,
2021; Ma et al., 2022b,a; Yang and Katiyar, 2020; Das et al., 2022; Huang et al., 2022b), generativebased (Cui et al., 2021; Hou et al., 2022; Chen et al.,
2022) and span-based (Yang et al., 2022; Wang et al., 2022b) methods. In terms of different definitions of few-shot setting, few-shot NER can also be divided into two categories: in-domain (Huang et al., 2022b) and domain transfer (Das et al., 2022) settings. The former directly uses few samples for training and tests on the complete test set; while the latter pre-trains on the rich-resource source domain dataset and then fine-tunes on a low-resource target domain dataset. To the best of our knowledge, there is only one work dedicated to studying the few-shot nested NER (Ming et al., 2022). For each word, they design a Biaffine representation module for learning the contextual dependency representation, and then merge semantic representation by the residual module. However, they apply max pooling to extract the most important features as span representation, which loses a lot of span information. Moreover, we focus on the in-domain setting, a more difficult scenario, instead of the domain transfer setting. Our approach can be easily adapted to the domain-transfer setting by using the pre-training and fine-tuning paradigm.
## 3 Method
In this section, we will first introduce the task definition of nested NER, then describe the details of FIT. Finally, the training objective is introduced.
## 3.1 Overall Architecture
Given an input sentence x = {x1*, . . . , x*n} of n tokens, nested NER aims to correctly identify the left and right boundary tokens xel and xer for every entity e = {xel
, . . . , xer } in x, and assign e the correct entity type y from a predefined list of categories Y, e.g., Y = {"GPE", "ORG"*, . . .* }. Unlike flat NER, there will be overlapping between entities and the tokens in entity e may be assigned multiple types in nested NER.
We formalize nested NER as span extraction and span classification which further are divided into three subtasks. Figure 2 illustrates how the proposed approach, FIT, works. In the focusing stage, the entity-concentrated parts, such as "state legislatures" shown in Figure 2, are obtained. In the bridging stage, span extraction is conducted on the parts obtained in the focusing stage. Spans such as "representatives to the electoral college" are collected. In the prompting stage, spans obtained in the bridging stage are classified.
## 3.2 Focusing
Given an input text x = {x1*, . . . , x*n} consisting of n tokens, the focusing stage is to find the entityconcentrated parts in x, i.e., all the longest parts where named entities are adjacent as shown in Figure 2, which is important for the following bridging stage. We denote the set of entity-concentrated parts as xr = {xr1*, . . . ,* xrK }, where xri∩xrk =
∅, xrk = {xl, . . . , xr} ⊂ x denotes k-th part, and xl, xr denote the left and right boundary tokens respectively.
The focusing stage is accomplished by constructing an IO tagging module and predicting each token whether a part of an entity or not based on its tag score. Each entity-concentrated part xrk can be obtained by concatenating **continuous** tokens marked with I-tag.
The implementation details are as follows. First, we feed the input text into BERT to obtain the representation h ∈ R
n×d, where d is the dimension of the BERT hidden states. For each token xi, BERT
tokenizer may divide it into multiple subtokens ti = (ti1*, . . . , t*ij ). Consequently, the representation h tag iof each token xiis the concatenation of the mean pooled subtoken representation h p i and the representation of the [CLS] token h
[CLS]. The tag score p tag iis calculated as follows:
$$h_{i}^{p}=\text{MeanPooling}(h_{t_{i1}},\ldots,h_{t_{ij}})\tag{1}$$ $$h_{i}^{tag}=\text{Concat}(h_{i}^{p},h^{[\text{CLS}]})\tag{2}$$ $$p_{i}^{tag}=\text{Softmax}(\text{MLP}_{\text{tag}}(h_{i}^{tag}))\tag{3}$$ where MLP denotes the multilayer perceptron for
$$(1)$$
$\eqref{eq:walpha}$
binary classification. Then whether a token is a
part of an entity can be calculated as:
$${\hat{y}}_{i}^{t a g}=\arg\operatorname*{max}(p_{i}^{t a g})$$
i) (4)
For the binary classifier, we simply use the crossentropy loss:
$$\mathcal{L}_{focus}=\sum_{i}\text{CrossEntropyLoss}(p_{i}^{tag},y_{i}^{tag})\tag{5}$$ where $y_{i}^{tag}$ is the ground truth label; $y_{i}^{tag}$ being 1.
denotes xiis part of an entity and 0 denotes that xi is not part of an entity.
## 3.3 Bridging
In the bridging stage, for each entity-concentrated part xrk obtained in the focusing stage, we enumerate all spans in xrk to obtain candidate nested spans. Candidate nested spans are filtered according to the boundary scores to reduce spans with low-quality.
To calculate the boundary score of each candidate nested span, we need to calculate the probabilities of each token xi ∈ xrk being the left or right boundary of an entity respectively.
![3_image_0.png](3_image_0.png)
Prompting Contrastive Learning Positive Bridging Tag: I I I I O O I I O O I I I I I O O O O O
Focusing
For each entity-concentrated part xrk
, its partrepresentation h r i is the mean pooling of all tokens' representations in xrk
. We concatenate the partrepresentation h r i and the token representation h p i to obtain the representation h boundary ifor each token xi ∈ xrk
, which is used to calculate whether token xiis the left or right boundary of an entity. The probabilities of each token xi being the left and right boundaries can be calculated as follows:
$$h_{i}^{r}=\text{MeanPooling}(h_{x_{l}},\ldots,h_{x_{r}})\tag{6}$$ $$h_{i}^{boundary}=\text{Concat}(h_{i}^{r},h_{i}^{p})\tag{7}$$ $$p_{i}^{left}=\text{Softmax}(\text{MLP}_{\text{left}}(h_{i}^{boundary}))\tag{8}$$ $$p_{i}^{right}=\text{Softmax}(\text{MLP}_{\text{right}}(h_{i}^{boundary}))\tag{9}$$ To train the MLP${}_{\text{left}}$ and MLP${}_{\text{right}}$ classifiers, we obtain a new label as
we need to pre-assign the categories y lef t iand y right iof xi. 1 denotes that xiis the left or right boundary of an entity while 0 denotes that xiis not the boundary of an entity. We simply use the cross-entropy loss:
$$\mathcal{L}_{left}=\sum_{i}\text{CrossEntropyLoss}(p_{i}^{left},y_{i}^{left})\tag{10}$$ $$\mathcal{L}_{right}=\sum_{i}\text{CrossEntropyLoss}(p_{i}^{right},y_{i}^{right})$$
We denote the set of candidate nested spans obtained by enumerating xrk as ˆs = (s1*, . . . ,* sw),
where si = (sli
, . . . , sri
) denotes i-th candidate nested span, and sli
, sri denote the left and right boundary tokens of the span respectively. Then the boundary score of each candidate nested span si can be calculated as:
$$p_{s_{i}}^{s p a n}=p_{s_{l_{i}}}^{l e f t}\odot p_{s_{r_{i}}}^{r i g h t}\qquad\qquad(12)$$
where p lef t slidenotes the probability of the left boundary token sli of the span si being the left boundary of an entity. Likewise, p right sridenotes the probability of the right boundary token sri of the span si being the right boundary of an entity. Note that ⊙ is element-wise multiplication.
Now we sort the set of candidate nested spans ˆs according to the score p span si. For candidate nested spans with partial overlapping, those with low scores are discarded. For simplification, we denote the set of filtered candidate nested spans as S = (s1*, . . . ,* sf ), where S ⊂ ˆs.
## 3.4 Prompting
Let M be a language model pre-trained on largescale corpora, prompt learning formalizes the classification task into a masked language modeling problem. Specifically, prompt learning wraps the input text with a *template*, a piece of natural language text or some marks. The model M should predict the label in [MASK] position. In this work, the prompting stage follows the common promptlearning practice (Schick and Schütze, 2021). In addition, we introduce contrastive learning to achieve adversarial prompt learning. Due to the space limitation, instead of introducing the overall process, we only list some key parts in this subsection.
Soft Prompts Setting. The first key part is how to construct soft prompts. Wikipedia usually uses the fullname(*abbreviation*) pattern when introducing entities. For example, when introducing the "OSI model" in Wikipedia, the first sentence in the first paragraph is "The Open Systems Interconnection model (OSI model) is a conceptual model
. . . "
1. Inspired by that, we build soft prompts using the same pattern entity(tag), making its form closer to the form of sentences in the pre-training corpus. Specifically, for each span si in the filtered candidate nested spans set S, we wrap it into xp = {xpart1
, [p1], si, [p2], [MASK], [p3], xpart2}
where [pi] denotes the soft prompt. For example, assuming we need to classify the span "state" in the sentence x in Figure 2, we wrap it into xp = "U.S.
law allows [p1] state [p2][MASK][p3] legislatures to choose representatives to the electoral college as a last resort."
Then M predicts the probability of each label y being filled in [MASK] token PM([MASK] = y | xp).
The predicted yˆ is
$$\hat{y}=\operatorname*{arg\,max}_{y\in{\mathcal{Y}}}P_{{\mathcal{M}}}([\mathrm{MASK}]=y\mid\mathbf{x}_{\mathrm{p}})\tag{13}$$
y∈Y
This objective function is suitable for optimization by applying a cross-entropy loss on the predicted probability.
Contrastive Learning Setting. As the construction of soft prompts will interfere with nested entities, the connection between inner and outer nested entities may be cut off. To alleviate this problem, we introduce contrastive learning. Inspired by (Chen and He, 2021; Sevegnani et al.,
2022), we abandon the practice of negative pairs used in traditional contrastive learning and only construct positive pairs. Positive pairs are defined as (xp1
, xp2
), where both xp1 and xp2 are different wrapped spans obtained from S. Note that the spans in the set S are all paired in pairs.
Then, we calculate cosine embedding loss by:
$$\mathcal{L}_{contrast}(\mathbf{x_{p_{1}}},\mathbf{x_{p_{2}}})=1-\cos(\mathbf{x_{p_{1}}^{\rm[CLS]}},\mathbf{x_{p_{2}}^{\rm[CLS]}})\tag{14}$$
(14)
where x
[CLS]
piis the [CLS] token representation of
xpi
obtained by BERT.
## 1 https://en.wikipedia.org/wiki/OSI_model
## 3.5 Training Objectives
The overall loss function is:
$$\begin{array}{c}{{{\mathcal{L}}=\alpha{\mathcal{L}}_{f o c u s}+\beta{\mathcal{L}}_{l e f t}+\gamma{\mathcal{L}}_{r i g h t}}}\\ {{\qquad\qquad+\eta{\mathcal{L}}_{p r o m p t}+\lambda{\mathcal{L}}_{c o n t r a s t}}}\end{array}\tag{15}$$
where Lf ocus, Llef t, Lright, L*prompt* and L*contrast* are balanced with hyper-parameters α, β, γ, η and λ respectively, and L*prompt* denotes loss function used in the soft prompt-learning.
## 4 Experiments
In this section, we conduct experiments on four nested NER datasets to evaluate the effectiveness of the proposed method.
## 4.1 Datasets
Experiments are conducted on four nested NER
datasets: ACE20042(Doddington et al., 2004),
ACE20053(Walker et al., 2005), GENIA4(Ohta et al., 2002) and KBP20175(Ji et al., 2017). Please refer to Appendix A.1 for the introduction and statistical information about the datasets.
## 4.2 Experiment Settings
In-Domain Setting. For few-shot learning, we conduct 5, 10, and 20-shot experiments without pre-training on the rich-resource source domain.
For a k-shot experiment, all the original test sets are preserved for testing, and the training and development sets are resampled for training. Following the same sampling method as previous work (Ma et al., 2022b), we sample k instances per class from the original training set to form the few-shot training set and sample another k instances per class from the original development set to form the fewshot development set. It is worth noting that no random seed is searched when sampling. 10 sets of data were sampled for k-shot, and all subsequent metrics were taken from the **average** of these 10 sets of data. The statistical information of few-shot datasets obtained by sampling can be found in Appendix A.1. For all datasets, we train our model for 35 epochs and choose the checkpoint with the best validation performance to test. See Appendix A.2 for more detailed settings.
Evaluation Metrics Setting. Span-level precision, recall, and Micro-F1 scores are used to measure the results in all experiments. Note that the nested NER datasets also contain a certain proportion of flat entities, then the standard metrics end up confusing flat and nested results and, consequently, are not able to reflect well the ability of a model to detect nesting. To alleviate this issue, we analyze the error rates for total entities e*total*,
flat entities e*f lat*, nested entities e*nested*, the inner entities e*inner* and the outer entities e*outer*. See Appendix A.3 for the calculation formulae.
## 4.3 Baselines
We use the following models as baselines for fewshot nested NER: Locate and Label (Shen et al.,
2021), Unified Generative NER (Yan et al., 2021), SEE-Few (Yang et al., 2022), SDNet (Chen et al.,
2022) and ESD (Wang et al., 2022b). The first two baselines are fully supervised methods, and the last three are designed for the few-shot setting.
It should be noted that since most few-shot NER
methods cannot handle few-shot nested NER, the methods available to us are limited. Please refer to Appendix A.4 for detailed information.
## 4.4 Experiment Results
Main Results. Table 1 illustrates the performance of FIT and baselines on ACE2004, ACE2005, GENIA and KBP2017. We can see that: 1) FIT consistently outperforms all the baselines by a large margin. Especially in the 5-shot setting, the F1-scores of our model advance previous models by +9.33%,
+6.17%, +9.40%, +5.12% on ACE2004, ACE2005, GENIA, and KBP2017 respectively. In the ablation study, we will investigate which components bring improvement. 2) For fully supervised methods, both Locate and Label and Unified Generative NER perform poorly. In particular, Unified Generative NER, as a generative-based method, performs more poorly in a few-shot setting. These show that fully supervised methods may inherently flaw in few-shot NER. 3) For few-shot methods, they show competitive performances as the shot rises, especially SEE-Few and ESD. SEE-Few shows competitive performances under the 20-shot setting, but its performance on the 5-shot setting is not satisfactory.
The reason may be the NLI task used in SEE-Few has limitations in context utilization. ESD also shows good performance, which we attribute to its pre-training on the large-scale corpus Few-NERD
(Ding et al., 2021) and a significant part of the GE-
NIA dataset. ESD without pre-training has also been evaluated, and its performance decreases by 15%-25% on the four datasets. The performance of 1-shot experiments can be found in Appendix B.1.
Error Rates for Nested Entities. Table 2 illustrates the error rates on the GENIA dataset under few-shot settings. We can see that: Among all methods, FIT significantly reduces the error rates of nested entities. In particular, FIT significantly reduces e*inner* and makes it even lower than e*outer*,
which shows the effectiveness of FIT for inner entities. The error rates on other datasets can be found in Appendix B.2.
## 4.5 Ablation Study
We conduct ablation experiments on four datasets.
The results on the GENIA dataset are shown in Table 3. The results on other datasets can be found in Appendix B.3.
W/o focusing. We directly enumerate all spans in the sentence as candidate spans and filter them in the bridging stage. A significant performance drop in all settings is observed, which indicates that the focusing stage filters out most of the low-quality parts with only one binary classifier.
W/o filtering. The filtering module in the bridging stage is removed directly. The results show that the filtering module has a positive effect under the 5-shot setting. However, as the number of training data increases, the effect of w/o filtering becomes better. We think that is because the prompting stage can acquire a stronger ability to discriminate low-quality spans as the amount of training data increases, while the filtering module is relatively underfitting at this time. Consequently, some true positives are discarded in advance at the bridging stage, which causes performance loss.
W/o contrastive learning. The contrastive learning module is removed directly. The results show that contrastive learning reduces the interference caused by the soft prompt, and made the model more stable, which is reflected in the reduction of the standard deviation.
W/o series prompt setting. Three kinds of experiments are designed: **w/o soft prompt** replaces soft prompts with discrete prompts (The three prompts are "," "(" and ")" respectively);
w/o contextual prompt does not use context-based prompts, but moves prompts to the end of the sentence (with template "x . si is [MASK] ."). Note that the appropriate template has been searched ;
| Datasets | Methods | 5-shot | 10-shot | 20-shot | | | | | | |
|------------------------------------------------------------------------------------------------------|------------------|----------|------------|------------|-------|-------------|------------|-------|-------------|------------|
| P | R | F1 ↑ | P | R | F1 ↑ | P | R | F1 ↑ | | |
| Locate and Label | 51.59 | 3.93 | 7.20±3.34 | 65.31 | 14.12 | 22.88±6.81 | 67.74 | 29.45 | 41.02±2.79 | |
| Unified NER | 18.18 | 5.86 | 8.87±3.47 | 29.59 | 9.71 | 14.19±6.23 | 43.84 | 21.74 | 28.73±10.83 | |
| SEE-Few | 50.08 | 18.69 | 26.54±6.60 | 57.74 | 29.70 | 38.89±4.07 | 63.53 | 39.91 | 48.94±2.27 | |
| SDNet | 61.40 | 12.45 | 20.55±4.64 | 65.73 | 23.81 | 34.82±4.71 | 67.18 | 31.52 | 42.87±2.13 | |
| ESD | 34.51 | 13.69 | 19.25±5.74 | 53.95 | 35.44 | 42.75±5.11 | 56.94 | 48.27 | 52.17±3.76 | |
| FIT(ours) | 46.87 | 29.31 | 35.87±4.92 | 51.43 | 40.18 | 44.88±4.82 | 60.14 | 48.93 | 53.92±2.99 | |
| ACE2004 | Locate and Label | 50.20 | 6.55 | 11.43±6.56 | 57.80 | 16.52 | 25.13±9.00 | 65.13 | 28.69 | 39.61±6.02 |
| Unified NER | 17.08 | 5.92 | 8.72±4.42 | 18.19 | 9.23 | 13.17±4.01 | 36.10 | 18.30 | 24.26±2.59 | |
| SEE-Few | 49.42 | 17.69 | 25.58±6.61 | 55.92 | 27.45 | 36.36±6.63 | 61.37 | 44.19 | 51.31±2.27 | |
| SDNet | 57.46 | 13.81 | 22.03±6.12 | 61.17 | 22.08 | 32.20±4.89 | 65.84 | 32.03 | 43.00±3.55 | |
| ESD | 36.36 | 28.51 | 31.57±6.45 | 42.99 | 35.72 | 38.81±7.04 | 55.01 | 46.39 | 50.30±3.37 | |
| FIT(ours) | 44.74 | 33.05 | 37.74±5.33 | 46.83 | 38.85 | 42.25±10.65 | 58.02 | 48.5 | 52.71±2.55 | |
| ACE2005 | Locate and Label | 36.12 | 10.42 | 15.57±6.78 | 52.46 | 23.29 | 31.65±6.54 | 62.17 | 41.60 | 49.67±4.46 |
| Unified NER | 13.26 | 2.85 | 4.68±2.27 | 17.23 | 7.88 | 10.62±5.48 | 30.89 | 15.87 | 20.98±3.64 | |
| SEE-Few | 30.92 | 14.41 | 19.31±6.95 | 52.35 | 29.84 | 37.78±5.04 | 59.36 | 45.10 | 50.93±4.66 | |
| SDNet | 41.25 | 11.36 | 17.46±6.97 | 48.57 | 12.18 | 19.03±7.07 | 57.03 | 23.54 | 33.27±3.71 | |
| ESD | 36.44 | 20.24 | 25.03±9.88 | 48.86 | 28.00 | 35.23±4.96 | 55.49 | 41.62 | 47.22±4.36 | |
| FIT(ours) | 40.72 | 30.30 | 34.43±9.06 | 52.91 | 39.51 | 44.95±3.38 | 57.00 | 46.81 | 51.26±3.96 | |
| GENIA | Locate and Label | 69.95 | 9.57 | 16.52±7.67 | 68.33 | 17.54 | 27.17±9.90 | 69.36 | 36.40 | 47.35±7.29 |
| Unified NER | 21.13 | 5.47 | 8.49±7.94 | 27.66 | 12.08 | 16.00±8.28 | 35.17 | 15.62 | 21.30±8.20 | |
| SEE-Few | 47.02 | 15.34 | 22.87±4.82 | 55.07 | 27.48 | 36.26±6.08 | 58.86 | 41.99 | 48.65±5.51 | |
| SDNet | 62.28 | 12.24 | 20.25±3.88 | 65.11 | 21.03 | 31.57±4.55 | 64.92 | 33.98 | 44.48±4.34 | |
| ESD | 34.27 | 24.39 | 28.38±9.02 | 49.13 | 38.61 | 42.99±4.20 | 54.64 | 51.00 | 52.54±3.76 | |
| FIT(ours) | 44.68 | 27.20 | 33.50±4.37 | 50.69 | 39.43 | 44.21±4.64 | 56.39 | 52.70 | 54.27±5.07 | |
| KBP2017 Table 1: Performance comparison of FIT and baselines on four datasets under different shots. | | | | | | | | | | |
w/o prompt directly abandons the prompt setting and trains a multi-class classifier to classify the candidate nested spans. Experimental results show that context-based soft prompts have a positive effect, while directly training classifiers is less effective, illustrating the importance of utilizing contextual information in few-shot nested NER.
## 5 Time Complexity
Theoretically, the number of possible spans in a sentence of length N is N(N+1)
2. If we classify almost all spans into corresponding categories, it will lead to a high computational cost with O(N2) time complexity. However, the focusing stage makes the model only focus on the entity-concentrated part, reducing the time complexity. Although in the worst case, the model keeps the whole sentence as an entity-concentrated part, generating N(N+1)
2 candidate nested spans. The number of candidate spans is reduced as some partial overlap spans are discarded according to the boundary scores.
We also evaluate the efficiency of FIT. In the 5-shot setting of the ACE2004 dataset, compared with few-shot span-based methods SEE-Few training that takes 159.39s, the FIT takes 122.37s for the same 35 epochs, which leads to approximately 23.23% speedup. In the inference phase, FIT also spends 31.99ms for each sample on average, which is 15.50% faster than other results-competitive methods. Time usage on four datasets can be found in Appendix C.
## 6 Discussion
The F1 scores of 10 sets of 20-shot data sampled on the ACE2005 dataset are compared in Figure 3.
The horizontal coordinate is sorted in ascending order by the nested ratio (the lower bound is 22.86%,
and the upper bound is 42.14%). The nested ratio of each set can be found in Appendix A.1. It shows that under a single data set, the performance of the model is more closely related to the quality of the sampled data rather than the nested ratio.
Nevertheless, FIT works better than other methods.
To further explore the effect of the different
Methods5-shot 10-shot 20-shot
etotal ↓ ef lat ↓ enested ↓ einner ↓ eouter ↓ etotal ↓ ef lat ↓ enested ↓ einner ↓ eouter ↓ etotal ↓ ef lat ↓ enested ↓ einner ↓ e*outer* ↓
SEE-Few 85.58 86.07 83.84 83.73 84.10 70.17 69.11 73.97 74.39 73.58 54.90 52.20 64.62 65.84 63.49
SDNet 88.64 86.09 97.79 98.08 97.59 87.82 85.12 97.50 97.96 97.13 76.46 70.92 96.36 96.84 96.03
ESD 79.76 78.96 82.64 81.27 84.41 72.00 70.59 77.08 74.46 80.05 58.38 55.67 68.11 63.81 72.52
FIT(ours) 69.70 68.11 75.40 73.82 77.50 60.49 57.91 69.77 65.81 73.99 53.19 49.89 65.04 60.07 **70.17**
Table 2: The error rates comparison of FIT and baselines on the GENIA dataset under different shots. Orange indicates that e*inner* is smaller than e*outer*. Note that: 1) We follow Wang et al. (2022b) and pre-train ESD on part of the GENIA dataset. 2) We did not mark SDNet's e*inner* because the values are too large to be informative.
F1 F1 Locate and Label Unified NER
SEE-Few FIT
Locate and Label Unified NER
SEE-Few FIT
0.18 0.23
0.28
0.33
0.38
0.43
0.48
0.53
0.58
0.18
0.23
0.28
0.33
0.38
0.43
0.48
0.53
0.58
Locate and Label Unified NER
SEE-Few
FIT
1 2 3 4 5 6 7 8 9 10
set
34.5 39.5 44.5
49.5 54.5 59.5
1 2 3 4 5 6 7 8 9 10
17-24% 30-34% 40-44% 60-64% 70-74%
nested ratio
| 0.23 | | | | | | | | | |
|---------------------------|--------|---------|------------|-------|-------|------------|-------|-------|------------|
| 0.18 | | 0.28 | | | | | | | |
| 0.23 | | | | | | | | | |
| 0.18 | | | | | | | | | |
| 59.5 | | | | | | | | | |
| 54.5 | | | | | | | | | |
| 49.5 | | | | | | | | | |
| 44.5 | | | | | | | | | |
| 39.5 | | | | | | | | | |
| 34.5 | | | | | | | | | |
| Methods | 5-shot | 10-shot | 20-shot | | | | | | |
| Full model | 40.72 | 30.30 | 34.43±9.06 | 52.91 | 39.51 | 44.95±3.38 | 57.00 | 46.81 | 51.26±3.96 |
| -w/o focusing | 33.57 | 9.80 | 14.21±8.54 | 49.90 | 13.25 | 19.57±7.44 | 57.22 | 14.40 | 22.74±4.23 |
| -w/o filtering | 33.62 | 22.11 | 26.56±7.97 | 52.79 | 37.52 | 43.58±4.85 | 57.47 | 48.63 | 52.42±4.03 |
| -w/o contrastive learning | 41.41 | 24.43 | 30.17±9.78 | 52.39 | 38.94 | 44.23±5.01 | 59.71 | 44.16 | 50.47±4.48 |
| -w/o soft prompt | 43.02 | 27.00 | 32.45±6.87 | 49.22 | 38.16 | 42.67±4.55 | 59.23 | 45.28 | 51.14±3.98 |
| -w/o contextual prompt | 36.93 | 21.74 | 26.94±8.04 | 53.51 | 31.90 | 39.63±7.08 | 59.19 | 44.09 | 50.35±4.54 |
| -w/o prompt | 18.05 | 9.20 | 10.99±6.02 | 29.81 | 21.30 | 23.94±4.80 | 40.61 | 35.06 | 37.39±2.68 |
| 1 | | | | | | | | | |
| 2 | | | | | | | | | |
| 3 | | | | | | | | | |
| 4 | | | | | | | | | |
| 5 | | | | | | | | | |
| 6 | | | | | | | | | |
| 7 | | | | | | | | | |
| 8 | | | | | | | | | |
| 9 | | | | | | | | | |
| 10 | | | | | | | | | |
| 0.58 | | | | | | | | | |
| 0.53 | | | | | | | | | |
| 0.48 | | | | | | | | | |
| 0.43 | | | | | | | | | |
| 0.38 | | | | | | | | | |
| 0.33 | | | | | | | | | |
| 0.28 | | | | | | | | | |
| 0.23 | | | | | | | | | |
| 0.18 | | set | | 0.58 | | | | | |
| 0.53 | | | | | | | | | |
| 0.48 | | | | | | | | | |
| 0.43 | | | | | | | | | |
| 0.38 | | | | | | | | | |
| 0.33 | | | | | | | | | |
| 0.28 | | | | | | | | | |
| 0.23 | | | | | | | | | |
| 0.18 | | | | | | | | | |
| total | | | | | | | | | |
| flat | | | | | | | | | |
| nested | | | | | | | | | |
| inner | | | | | | | | | |
| outer | | | | | | | | | |
| F1 | | | | | | | | | |
| 0.58 | | | | | | | | | |
| 0.53 | | | | | | | | | |
| 0.48 | | | | | | | | | |
| 0.43 | | | | | | | | | |
| 0.38 | | | | | | | | | |
| 0.33 | | | | | | | | | |
| 0.28 | | | | | | | | | |
| 0.23 | | 59.5 | | | | | | | |
| 54.5 | | | | | | | | | |
| 49.5 | | | | | | | | | |
| 44.5 | | | | | | | | | |
| 39.5 | | | | | | | | | |
| F1 | | | | | | | | | |
| Locate | | | | | | | | | |
| and | | | | | | | | | |
| Label | | | | | | | | | |
| Unified | | | | | | | | | |
| NER | | | | | | | | | |
| SEE-Few | | | | | | | | | |
| FIT | | F1 | | | | | | | |
| FIT | | | | | | | | | |
| 17-24% | | | | | | | | | |
| 30-34% | | | | | | | | | |
| 40-44% | | | | | | | | | |
| 60-64% | | | | | | | | | |
| 70-74% | | | | | | | | | |
| nested | | | | | | | | | |
| ratio | | Error | | | | | | | |
| Rate | | | | | | | | | |
| F1 | | | | | | | | | |
| Locate | | | | | | | | | |
| and | | | | | | | | | |
| Label | | | | | | | | | |
| Unified | | | | | | | | | |
| NER | | | | | | | | | |
| SEE-Few | | | | | | | | | |
| FIT | | total | | | | | | | |
| flat | | | | | | | | | |
| nested | | | | | | | | | |
| inner | | | | | | | | | |
| outer | | | | | | | | | |
| F1 | | | | | | | | | |
| 1 | | | | | | | | | |
| 2 | | | | | | | | | |
| 3 | | | | | | | | | |
| 4 | | | | | | | | | |
| 5 | | | | | | | | | |
| 6 | | | | | | | | | |
| 7 | | | | | | | | | |
| 8 | | | | | | | | | |
| 9 | | | | | | | | | |
| 10 | | | | | | | | | |
| 59.5 | | | | | | | | | |
34.5 39.5 44.5
49.5 54.5
59.5
F1
1 2 3 4 5 6 7 8 9 10
| 56 |
|------|
| 54 |
| 52 |
| 50 |
| 48 |
| 46 |
| F1 |
44 46 48 50 52 54 56
![7_image_0.png](7_image_0.png)
54.5 59.5 F1 17-24% 30-34% 40-44% 60-64% 70-74%
17-24% 30-34% 40-44% 60-64% 70-74%
1 2 3 4 5 6 7 8 9 10 F1 group SEE-Few SDNet 0.35
nested ratios of the training sets on FIT, we randomly sample 200 sets of 20-shot data from the ACE2005 dataset and preserve sets that satisfy specific nested ratios. Finally, 50 sets are kept and divided into 5 groups. (Note that each group contains 10 sets and 5 groups corresponding to nested ratios of 17-24%, 20-34%, 40-44%, 60-64%, and 70-74%.) As shown in Figure 4, the F1 score tends to decrease as the nested ratio increases. However, the e*nested* maintains a decreasing trend while the e*f lat* increases. It can be seen that the increase in the nested ratio may help the model learn nested entities better, and most of the decrease in F1 is due to the misjudgment of flat entities.
44 46 48 50 52 54 56
![7_image_1.png](7_image_1.png)
F1 17-24% 30-34% 40-44% 60-64% 70-74%
## 7 Conclusion
In this work, we propose a span-based method for few-shot nested NER without using source domain data. First, the candidate nested spans are generated by the focusing and bridging components.
Then the adversarial prompt-based span classification method is proposed to classify candidate spans into the corresponding categories. Our proposed method, FIT, can make full use of the unique features of nested entities while reducing the computational cost and the impact of low-quality candidate spans. Experimental results show that our method achieves state-of-the-art performance consistently on the four benchmark datasets (ACE2004, ACE2005, GENIA, and KBP2017), and outperforms several competing baseline models on F1score and the error rates of nested entities.
## Limitations
Although our method achieves state-of-the-art performance consistently on the four benchmark datasets, it suffers from the following limitations:
- No optimization for the verbalizer. The verbalizer we use in the prompting stage is just a simple 1-to-1 mapping, This simple design does not fully exploit the capabilities of MLM.
- No explicit modeling of the relationship information between nested entities. We consider that in some other scenarios, the relationship information between nested entities is not very significant. Consequently, explicitly modeling the relationship may introduce new biases. So we just utilize the potential information. But in practice, it is worth exploring how to model such a relationship from a novel perspective.
## Acknowledgements
We would like to thank anonymous reviewers for their valuable comments and helpful suggestions and we thank Huawei for supporting this project.
This work was funded by the National Natural Science Foundation of China (62176053). This work is supported by the Big Data Computing Center of Southeast University.
## References
Jiawei Chen, Qing Liu, Hongyu Lin, Xianpei Han, and Le Sun. 2022. Few-shot named entity recognition with self-describing networks. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5711–5722, Dublin, Ireland. Association for Computational Linguistics.
Xinlei Chen and Kaiming He. 2021. Exploring simple siamese representation learning. In *Proceedings of* the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 15750–15758.
Leyang Cui, Yu Wu, Jian Liu, Sen Yang, and Yue Zhang.
2021. Template-based named entity recognition using BART. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 1835–1845, Online. Association for Computational Linguistics.
Sarkar Snigdha Sarathi Das, Arzoo Katiyar, Rebecca Passonneau, and Rui Zhang. 2022. CONTaiNER:
Few-shot named entity recognition via contrastive learning. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics*
(Volume 1: Long Papers), pages 6338–6353, Dublin, Ireland. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Ning Ding, Guangwei Xu, Yulin Chen, Xiaobin Wang, Xu Han, Pengjun Xie, Haitao Zheng, and Zhiyuan Liu. 2021. Few-NERD: A few-shot named entity recognition dataset. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1:
Long Papers), pages 3198–3213, Online. Association for Computational Linguistics.
George R Doddington, Alexis Mitchell, Mark A Przybocki, Lance A Ramshaw, Stephanie M Strassel, and Ralph M Weischedel. 2004. The automatic content extraction (ace) program-tasks, data, and evaluation. In *Lrec*, volume 2, pages 837–840. Lisbon.
Yutai Hou, Cheng Chen, Xianzhen Luo, Bohan Li, and Wanxiang Che. 2022. Inverse is better! fast and accurate prompt for few-shot slot tagging. In Findings of the Association for Computational Linguistics: ACL
2022, pages 637–647, Dublin, Ireland. Association for Computational Linguistics.
Jiaxin Huang, Chunyuan Li, Krishan Subudhi, Damien Jose, Shobana Balakrishnan, Weizhu Chen, Baolin Peng, Jianfeng Gao, and Jiawei Han. 2021. Fewshot named entity recognition: An empirical baseline study. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 10408–10423, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Peixin Huang, Xiang Zhao, Minghao Hu, Yang Fang, Xinyi Li, and Weidong Xiao. 2022a. Extract-select:
A span selection framework for nested named entity recognition with generative adversarial training. In Findings of the Association for Computational Linguistics: ACL 2022, pages 85–96, Dublin, Ireland.
Association for Computational Linguistics.
Yucheng Huang, Kai He, Yige Wang, Xianli Zhang, Tieliang Gong, Rui Mao, and Chen Li. 2022b. COPNER: Contrastive learning with prompt guiding for few-shot named entity recognition. In Proceedings of the 29th International Conference on Computational Linguistics, pages 2515–2527, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Bin Ji, Shasha Li, Shaoduo Gan, Jie Yu, Jun Ma, Huijun Liu, and Jing Yang. 2022. Few-shot named entity recognition with entity-level prototypical network enhanced by dispersedly distributed prototypes. In Proceedings of the 29th International Conference
on Computational Linguistics, pages 1842–1854, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Heng Ji, Xiaoman Pan, Boliang Zhang, Joel Nothman, James Mayfield, Paul McNamee, Cash Costello, and Sydney Informatics Hub. 2017. Overview of tackbp2017 13 languages entity discovery and linking.
In TAC.
Arzoo Katiyar and Claire Cardie. 2018. Nested named entity recognition revisited. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 861–871, New Orleans, Louisiana. Association for Computational Linguistics.
Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang.
2019. BioBERT: a pre-trained biomedical language representation model for biomedical text mining.
Bioinformatics, 36(4):1234–1240.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2019.
BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. *CoRR*, abs/1910.13461.
Hongyu Lin, Yaojie Lu, Xianpei Han, and Le Sun. 2019.
Sequence-to-nuggets: Nested entity mention detection via anchor-region networks. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5182–5192, Florence, Italy. Association for Computational Linguistics.
Jie Ma, Miguel Ballesteros, Srikanth Doss, Rishita Anubhai, Sunil Mallya, Yaser Al-Onaizan, and Dan Roth. 2022a. Label semantics for few shot named entity recognition. In *Findings of the Association for* Computational Linguistics: ACL 2022, pages 1956–
1971, Dublin, Ireland. Association for Computational Linguistics.
Ruotian Ma, Xin Zhou, Tao Gui, Yiding Tan, Linyang Li, Qi Zhang, and Xuanjing Huang. 2022b. Templatefree prompt tuning for few-shot NER. In *Proceedings* of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 5721–5732, Seattle, United States. Association for Computational Linguistics.
Tingting Ma, Huiqiang Jiang, Qianhui Wu, Tiejun Zhao, and Chin-Yew Lin. 2022c. Decomposed metalearning for few-shot named entity recognition. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1584–1596, Dublin, Ireland. Association for Computational Linguistics.
Rui Mao and Xiao Li. 2021. Bridging towers of multitask learning with a gating mechanism for aspectbased sentiment analysis and sequential metaphor
identification. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pages 13534–13542.
Hong Ming, Jiaoyun Yang, Lili Jiang, Yan Pan, and Ning An. 2022. Few-shot nested named entity recognition. *arXiv preprint arXiv:2212.00953*.
Tomoko Ohta, Yuka Tateisi, Jin-Dong Kim, Hideki Mima, and Junichi Tsujii. 2002. The genia corpus:
An annotated research abstract corpus in molecular biology domain. In *Proceedings of the human language technology conference*, pages 73–77. Citeseer.
Keqin Peng, Chuantao Yin, Wenge Rong, Chenghua Lin, Deyu Zhou, and Zhang Xiong. 2022. Named entity aware transfer learning for biomedical factoid question answering. *IEEE/ACM Transactions on Computational Biology and Bioinformatics*, 19(4):2365–
2376.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67.
Timo Schick and Hinrich Schütze. 2021. Exploiting cloze-questions for few-shot text classification and natural language inference. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 255–269, Online. Association for Computational Linguistics.
Karin Sevegnani, Arjun Seshadri, Tian Wang, Anurag Beniwal, Julian McAuley, Alan Lu, and Gérard Medioni. 2022. Contrastive learning for interactive recommendation in fashion. In *SIGIR 2022 Workshop on eCommerce*.
Yongliang Shen, Xinyin Ma, Zeqi Tan, Shuai Zhang, Wen Wang, and Weiming Lu. 2021. Locate and label: A two-stage identifier for nested named entity recognition. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 2782–2794, Online. Association for Computational Linguistics.
Jana Straková, Milan Straka, and Jan Hajic. 2019. Neural architectures for nested NER through linearization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5326–5331, Florence, Italy. Association for Computational Linguistics.
Zeqi Tan, Yongliang Shen, Shuai Zhang, Weiming Lu, and Yueting Zhuang. 2021. A sequence-to-set network for nested named entity recognition. In *Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21*, pages 3936–
3942. International Joint Conferences on Artificial Intelligence Organization. Main Track.
Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2005. Ace 2005 multilingual training corpus-linguistic data consortium. *URL:*
https://catalog. ldc. upenn. edu/LDC2006T06.
Bailin Wang and Wei Lu. 2018. Neural segmental hypergraphs for overlapping mention recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 204–
214, Brussels, Belgium. Association for Computational Linguistics.
Jianing Wang, Chengyu Wang, Chuanqi Tan, Minghui Qiu, Songfang Huang, Jun Huang, and Ming Gao.
2022a. Spanproto: A two-stage span-based prototypical network for few-shot named entity recognition.
CoRR, abs/2210.09049.
Jue Wang, Lidan Shou, Ke Chen, and Gang Chen. 2020.
Pyramid: A layered model for nested named entity recognition. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 5918–5928.
Peiyi Wang, Runxin Xu, Tianyu Liu, Qingyu Zhou, Yunbo Cao, Baobao Chang, and Zhifang Sui. 2022b.
An enhanced span-based decomposition method for few-shot sequence labeling. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5012–5024, Seattle, United States. Association for Computational Linguistics.
Hang Yan, Tao Gui, Junqi Dai, Qipeng Guo, Zheng Zhang, and Xipeng Qiu. 2021. A unified generative framework for various NER subtasks. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 5808–5822, Online.
Association for Computational Linguistics.
Yi Yang and Arzoo Katiyar. 2020. Simple and effective few-shot named entity recognition with structured nearest neighbor learning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6365–6375, Online. Association for Computational Linguistics.
Zeng Yang, Linhai Zhang, and Deyu Zhou. 2022. SEEfew: Seed, expand and entail for few-shot named entity recognition. In *Proceedings of the 29th International Conference on Computational Linguistics*,
pages 2540–2550, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Zheng Yuan, Chuanqi Tan, Songfang Huang, and Fei Huang. 2022. Fusing heterogeneous factors with triaffine mechanism for nested named entity recognition. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 3174–3186, Dublin, Ireland. Association for Computational Linguistics.
Shuai Zhang, Yongliang Shen, Zeqi Tan, Yiquan Wu, and Weiming Lu. 2022. De-bias for generative extraction in unified NER task. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 808–818, Dublin, Ireland. Association for Computational Linguistics.
## A Experiment Settings On Nested Ner A.1 Statistics Of Nested Datasets
We conduct experiments on four nested NER
datasets: ACE20046, ACE20057, GENIA8and KBP20179. GENIA dataset is available under the license of CC-BY 3.0, whereas ACE2004, ACE2005, and KBP2017 require a license from LDC. The details are as follows:
ACE 2004 and ACE 2005 (Doddington et al.,
2004; Walker et al., 2005) are two nested datasets, each of them containing 7 entity categories. The two nested datasets also contain more than two layers of nesting and the proportion of long entities is relatively large. Following (Katiyar and Cardie, 2018; Lin et al., 2019; Shen et al., 2021), we split them into the train, dev, and test sets by 8:1:1.
GENIA (Ohta et al., 2002) is a biology nested named entity dataset and contains five entity types, including DNA, RNA, protein, cell line, and cell type categories. We use the original division provided by the official10, which is nearly 8/1/1 for the train/dev/test set.
KBP2017 (Ji et al., 2017) has 5 entity categories, including GPE, ORG, PER, LOC, and FAC. We randomly split them into train, dev, and test sets by 6:2:2.
In Table 4, We report the number of sentences, the number of sentences containing nested entities, the average sentence length, the total number of entities, the number of nested entities, and the nested ratio on the ACE2004, ACE2005, GENIA, and KBP2017 datasets. In Table 5, We report the nested ratio of our randomly sampled training sets on the ACE2004, ACE2005, GENIA, and KBP2017 datasets.
## A.2 Detailed Parameter Settings
We implement FIT with Huggingface Transformers 4.11.3 and PyTorch 1.7.1. In most exper-
6https://catalog.ldc.upenn.edu/LDC2005T09 7https://catalog.ldc.upenn.edu/LDC2006T06 8http://www.geniaproject.org/genia-corpus 9https://catalog.ldc.upenn.edu/LDC2019T12 10http://www.geniaproject.org/genia-corpus/
relation-corpus
| Dataset Statistics | ACE2004 | ACE2005 | GENIA | KBP2017 | | | | | | | | |
|-------------------------|-----------|-----------|---------|-----------|-------|-------|-------|-------|-------|-------|-------|-------|
| Train | Dev | Test | Train | Dev | Test | Train | Dev | Test | Train | Dev | Test | |
| # sentences | 6202 | 745 | 812 | 7299 | 971 | 1060 | 15023 | 1669 | 1854 | 2126 | 722 | 720 |
| # sent. nested entities | 2712 | 294 | 388 | 2799 | 352 | 340 | 3197 | 325 | 446 | 622 | 208 | 217 |
| avg sentence length | 22.50 | 23.02 | 23.05 | 19.94 | 19.71 | 17.90 | 25.43 | 24.63 | 25.99 | 24.11 | 25.41 | 25.10 |
| # total entities | 22202 | 2514 | 3035 | 24708 | 3218 | 3030 | 46142 | 4367 | 5506 | 7515 | 2630 | 2564 |
| # nested entities | 10148 | 1092 | 1417 | 9940 | 1189 | 1184 | 8265 | 799 | 1199 | 2145 | 725 | 726 |
| nested ratio (%) | 45.71 | 43.44 | 46.69 | 40.23 | 36.95 | 39.08 | 17.91 | 18.30 | 21.78 | 28.54 | 27.57 | 28.32 |
Table 4: Statistics of the four datasets used in the experiments.
| Groups | ACE2004 | ACE2005 | GENIA | KBP2017 | | | | | | | | |
|------------------|-----------|-----------|---------|-----------|---------|--------|---------|---------|--------|---------|---------|-------|
| 5-shot | 10-shot | 20-shot | 5-shot | 10-shot | 20-shot | 5-shot | 10-shot | 20-shot | 5-shot | 10-shot | 20-shot | |
| # 1 | 40.00 | 33.33 | 35.34 | 22.86 | 25.71 | 42.14 | 8.00 | 8.00 | 11.00 | 8.33 | 42.86 | 23.08 |
| # 2 | 6.45 | 32.79 | 34.51 | 31.43 | 25.71 | 26.43 | 8.00 | 16.00 | 6.00 | 27.27 | 5.26 | 34.21 |
| # 3 | 19.35 | 33.33 | 38.14 | 20.00 | 41.43 | 35.71 | 8.00 | 26.00 | 12.00 | 21.74 | 28.95 | 25.97 |
| # 4 | 19.36 | 21.31 | 40.17 | 34.29 | 30.00 | 31.43 | 0.00 | 14.00 | 15.00 | 34.78 | 23.26 | 12.82 |
| # 5 | 25.81 | 24.59 | 40.65 | 25.71 | 25.71 | 28.57 | 16.00 | 10.00 | 12.00 | 24.00 | 20.93 | 21.62 |
| # 6 | 25.82 | 20.97 | 34.45 | 34.29 | 54.29 | 30.71 | 16.00 | 20.00 | 18.00 | 8.33 | 24.39 | 30.14 |
| # 7 | 29.03 | 32.79 | 38.46 | 45.71 | 38.57 | 26.43 | 16.00 | 6.00 | 13.00 | 12.50 | 14.29 | 18.57 |
| # 8 | 19.35 | 36.07 | 33.06 | 31.43 | 28.57 | 22.86 | 0.00 | 14.00 | 7.00 | 16.00 | 30.00 | 27.03 |
| # 9 | 6.06 | 29.51 | 31.30 | 34.29 | 40.00 | 33.57 | 0.00 | 12.00 | 16.00 | 13.64 | 24.39 | 30.14 |
| # 10 | 38.71 | 33.85 | 27.12 | 11.43 | 31.43 | 41.43 | 16.00 | 18.00 | 10.00 | 9.09 | 21.74 | 18.57 |
| avg nested ratio | 22.99 | 29.85 | 35.32 | 29.14 | 34.14 | 31.93 | 8.80 | 14.40 | 12.00 | 17.57 | 23.61 | 24.22 |
iments, we use BERT (Devlin et al., 2019) as PLM. For the GENIA dataset, we replace BERT
with BioBERT (Lee et al., 2019). In the experimental details, we use bert-base-uncased11 for ACE2004, ACE2005 and KBP2017 datasets and dmis-lab/biobert-base-cased-v1.212 for GENIA dataset (the two model sizes: all about 110M).
The soft prompts are initialized by the embedding of "," "(" and ")". The verbalizer is just a simple 1-to-1 mapping as shown in Table 6, that is, only the word corresponding to the semantics of the tag is used as a mapping. We use the Adam Optimizer with a linear warmup-decay learning rate schedule, a dropout before the tag, left boundary and right boundary classifiers with a rate of 0.1. Please see Table 7 for details. We train our model on a single NVIDIA 3090 GPU with 24GB memory.
All baselines follow the settings of their original work. Among them, Locate and Label, SEE-Few, and ESD all uses bert-base-uncased for ACE2004, ACE2005 and KBP2017 datasets, and uses dmis-lab/biobert-base-cased-v1.2 for GENIA dataset. While Unified NER uses facebook/bart-large13 (Lewis et al., 2019)
(model size: about 406M), and SDNet uses t5-base14 (Raffel et al., 2020) (model size: about
| Tags | ACE2004 | ACE2005 | GENIA | KBP2017 |
|-------------|--------------|--------------|---------|--------------|
| # WEA | weapon | weapon | - | - |
| # GPE | geography | geography | - | geography |
| # PER | person | person | - | person |
| # FAC | facility | facility | - | facility |
| # ORG | organization | organization | - | organization |
| # LOC | location | location | - | location |
| # VEH | vehicle | vehicle | - | - |
| # DNA | - | - | DNA | - |
| # RNA | - | - | RNA | - |
| # cell_type | - | - | cell | - |
| # protein | - | - | protein | - |
| # cell_line | - | - | group | - |
| # No Entity | none | none | none | none |
Table 6: Verbalizer used in the prompting stage.
| P | ACE2004 | ACE2005 | KBP2017 | GENIA |
|-------------------------|-----------|-----------|-----------|---------|
| lr | 3e-05 | 3e-05 | 3e-05 | 3e-05 |
| Focus&Bridge batch size | 1 | 1 | 1 | 1 |
| Prompt batch size | 8 | 8 | 8 | 8 |
| α | 1.0 | | | |
| β | 1.0 | | | |
| γ | 1.0 | | | |
| η | 1.0 | | | |
| λ | 1.0 | | | |
| drop out rate | 0.1 | | | |
| lr_warmup | 0.1 | | | |
| weight_decay | 0.01 | | | |
## 220M). A.3 Error Rates Calculation
We analyze the error rates for total entities e*total*,
flat entities e*f lat*, nested entities e*nested*, the inner entities e*inner*, and the outer entities e*outer*. Specifically, we calculate these metrics by dividing the total number of misjudged entities belonging to that entity type by the total number of that entity type.
For example, e*nested* can be calculated by dividing the total number of nested entities misjudged by the total number of nested entities. All the metrics are calculated in the test set. The formulae are as follows:
$$e_{total}=\frac{n_{\text{misjudged_entities}}}{n_{\text{all_entities}}}$$ $$e_{flat}=\frac{n_{\text{misjudged_flat_entities}}}{n_{\text{all_flat_entities}}}$$ $$e_{nested}=\frac{n_{\text{misjudged_nested_entities}}}{n_{\text{all_nested_entities}}}$$
(16) $$\begin{array}{l}\small\text{(17)}\end{array}$$ = (18) $$\begin{array}{l}\small\text{(18)}\end{array}$$ .
$$e_{i n n e r}={\frac{n_{\mathrm{misjudged\_inner\_nested\_entities}}}{n_{\mathrm{all\_inner\_nested\_entities}}}}$$
$$(19)$$
$$e_{o u t e r}={\frac{n_{\mathrm{misjudged\_outer\_nested\_entities}}}{n_{\mathrm{all\_outer\_nested\_entities}}}}$$
$$(20)$$
## A.4 Baselines
We use the following models as baselines for fewshot nested NER. The first two are models under the fully supervised setting, and the last three are models under the few-shot setting. It should be noted that since most few-shot NER methods cannot handle few-shot nested NER, the methods available to us are limited.
- **Locate and Label** (Shen et al., 2021) generates candidate spans by filtering and boundary regression on the seed spans, and then labels the boundary-adjusted candidate spans with the corresponding categories. The two-stage method achieves good results on fully supervised nested NER.
- **Unified Generative NER** (Yan et al., 2021)
formulates the NER task as an entity span sequence generation task, which can directly generate nested entity categories.
- **SEE-Few** (Yang et al., 2022) is a span-based method applied to the few-shot flat NER,
which extracts spans with seeding and expanding, then classifies them via natural language inference. It can be naturally extended to fewshot nested NER.
- **SDNet** (Chen et al., 2022) is a self-describing generation model for few-shot NER. In the pre-training stage, the external data is used to jointly train mention describing and entity generation tasks. In the fine-tuning stage, SDNet first conducts mention describing to summarize type concept descriptions, and then conducts entity generation based on the generated descriptions.
- ESD (Wang et al., 2022b) formulates the few-shot sequence labeling task as a spanlevel similarity matching problem between test query and supporting instances to solve few-shot NER. Wang et al. (2022b) mentions that their approach can be extended to fewshot nested NER by modifying pre-training datasets. Specifically, they sample from FewNERD (Ding et al., 2021) dataset and GENIA dataset in a certain proportion to form the FewNERD-nested dataset, and then pretrained on it. In our experiments, we control the sampling ratio of the two at 6:4 (FewNERD:GENIA).
## B Experiment Results On Nested Ner B.1 1-Shot Experiments
We show the performance of 1-shot experiments on ACE2004, ACE2005, and KBP2017 datasets in Table 8. We can see that FIT significantly outperforms all methods.
## B.2 Error Rates
We show the error rates on ACE2004, ACE2005, and KBP2017 datasets in Table 9. We can see that FIT significantly reduces the error rates of nested entities among all methods.
## B.3 Ablation Studys
We conduct ablation experiments to elucidate the main components of our proposed method FIT.
The results on ACE2004, ACE2005 and KBP2017 datasets are shown in Table 10.
## C Time Usage
The time usage on ACE2004, ACE2005, and KBP2017 datasets is shown in Table 11.
Datasets Methods1-shot
P R F1 ↑
| Datasets | Methods | 1-shot |
|-------------------------------|-----------|----------|
| ACE2004 ACE2005 GENIA KBP2017 | | |
SEE-Few 37.65 2.27 4.15±2.65 SDNet 55.26 7.54 12.98±5.39
ESD 10.70 3.71 5.24±9.62
FIT(ours) 30.21 11.99 **16.67**±8.94
SEE-Few 38.64 3.01 5.35±5.83 SDNet 48.62 6.21 10.85±4.59
ESD 7.18 2.50 3.52±7.31
FIT(ours) 28.92 9.77 **13.76**±7.36
SEE-Few 14.63 0.97 1.77±1.53 SDNet 32.26 6.53 10.72±3.89
ESD 4.63 3.81 4.14±7.63
FIT(ours) 25.20 14.12 **17.74**±6.74 SEE-Few 39.45 0.55 1.08±1.28 SDNet 54.65 6.77 11.90±4.15 ESD 10.60 3.39 5.04±9.07 FIT(ours) 32.19 10.43 **15.30**±8.52
Datasets Methods5-shot 10-shot 20-shot
etotal ↓ ef lat ↓ enested ↓ einner ↓ eouter ↓ etotal ↓ ef lat ↓ enested ↓ einner ↓ eouter ↓ etotal ↓ ef lat ↓ enested ↓ einner ↓ e*outer* ↓
ACE2004
SEE-Few 81.31 77.71 85.42 89.26 83.40 70.30 64.81 76.58 81.73 74.06 60.09 51.91 69.43 75.71 66.18
SDNet 87.54 77.31 99.24 98.99 99.55 76.19 56.89 98.23 98.03 98.66 68.48 43.05 97.51 97.38 97.96
ESD 86.31 82.39 90.78 94.44 88.78 64.56 57.89 72.17 76.53 70.41 51.73 42.13 62.68 65.22 62.16
FIT(ours) 70.69 63.81 78.53 78.30 **78.99 59.83 51.73 69.07 71.43 68.24 51.07 41.57 61.91 64.26 61.58**
ACE2005
SEE-Few 82.31 78.95 87.55 89.37 86.82 72.55 66.68 81.70 83.84 80.53 55.81 45.86 71.33 76.21 68.64
SDNet 86.19 78.17 98.71 98.63 98.97 77.92 65.22 97.71 98.00 97.83 67.97 49.61 96.59 97.50 96.39
ESD 71.50 65.36 81.06 82.47 80.57 64.28 56.84 75.87 77.37 75.25 53.61 45.11 66.86 67.37 67.41
FIT(ours) 66.95 60.77 76.59 77.39 76.42 61.15 53.22 73.51 74.85 73.06 51.50 43.04 64.68 63.83 **66.25**
KBP2017
SEE-Few 84.42 83.54 86.67 91.63 81.72 72.35 69.54 79.42 89.25 70.76 57.87 53.31 69.44 79.22 60.32
SDNet 87.75 83.23 99.19 99.04 99.44 78.88 71.14 98.47 98.32 98.82 65.89 53.43 97.44 97.14 98.06
ESD 75.43 72.36 83.20 90.73 76.47 61.25 54.91 77.30 87.59 68.53 48.88 41.59 67.36 76.39 59.73
FIT(ours) **72.63 68.88 82.12 88.87 76.05 60.43 55.06 74.04 84.95 64.83 47.19 40.11 65.37 74.76 57.52**
| Datasets | Methods | 5-shot | 10-shot | 20-shot | | | | | | |
|---------------------------|------------|----------|-------------|------------|-------|-------------|-------------|-------|------------|------------|
| P | R | F1 ↑ | P | R | F1 ↑ | P | R | F1 ↑ | | |
| Full model | 46.87 | 29.31 | 35.87±4.92 | 51.43 | 40.18 | 44.88±4.82 | 60.14 | 48.93 | 53.92±2.99 | |
| -w/o focusing | 57.11 | 15.26 | 23.39±5.17 | 58.45 | 24.63 | 34.39±5.01 | 67.56 | 27.85 | 39.28±3.35 | |
| -w/o filtering | 47.69 | 28.22 | 35.03±7.65 | 53.55 | 38.35 | 44.56±3.58 | 62.10 | 52.51 | 56.76±1.97 | |
| -w/o contrastive learning | 44.73 | 28.39 | 34.60±8.27 | 52.41 | 39.18 | 44.71±5.70 | 58.67 | 49.48 | 53.58±3.11 | |
| -w/o soft prompt | 45.27 | 28.90 | 34.83±8.28 | 51.02 | 37.42 | 43.11±4.18 | 58.94 | 48.37 | 53.00±3.38 | |
| -w/o contextual prompt | 48.35 | 27.29 | 33.82±5.07 | 52.13 | 38.43 | 43.94±4.75 | 57.63 | 44.29 | 49.97±4.00 | |
| -w/o prompt | 23.55 | 9.82 | 12.64±4.08 | 34.65 | 22.66 | 26.06±5.85 | 43.45 | 34.12 | 37.65±3.88 | |
| ACE2004 | Full model | 44.74 | 33.05 | 37.74±5.33 | 46.83 | 38.85 | 42.25±10.65 | 58.02 | 48.5 | 52.71±2.55 |
| -w/o focusing | 44.85 | 19.13 | 26.27±10.72 | 55.20 | 24.38 | 33.11±6.11 | 68.60 | 31.95 | 43.52±2.81 | |
| -w/o filtering | 39.96 | 26.13 | 31.40±10.17 | 52.36 | 41.32 | 45.93±4.27 | 58.26 | 51.72 | 54.67±2.52 | |
| -w/o contrastive learning | 41.56 | 30.92 | 35.35±7.52 | 45.87 | 35.09 | 39.53±11.51 | 53.90 | 49.97 | 51.82±2.79 | |
| -w/o soft prompt | 40.86 | 30.23 | 34.32±6.92 | 48.88 | 35.80 | 40.53±8.66 | 55.42 | 49.25 | 52.46±4.16 | |
| -w/o contextual prompt | 39.73 | 27.46 | 32.25±10.21 | 51.49 | 36.31 | 41.87±10.25 | 55.57 | 47.80 | 51.31±3.19 | |
| -w/o prompt | 22.02 | 12.37 | 13.59±8.54 | 34.05 | 19.86 | 24.09±7.54 | 46.02 | 35.69 | 39.88±3.48 | |
| ACE2005 | Full model | 44.68 | 27.20 | 33.50±4.37 | 50.69 | 39.43 | 44.21±4.64 | 56.39 | 52.70 | 54.27±5.07 |
| -w/o focusing | 52.21 | 24.08 | 32.59±7.89 | 60.27 | 33.99 | 43.24±7.55 | 59.75 | 42.31 | 48.79±4.75 | |
| -w/o filtering | 41.18 | 21.37 | 27.14±6.96 | 52.24 | 33.21 | 40.38±7.48 | 57.06 | 51.26 | 53.73±5.33 | |
| -w/o contrastive learning | 46.21 | 25.35 | 32.45±4.54 | 51.53 | 38.53 | 44.04±4.75 | 54.94 | 51.59 | 52.99±5.65 | |
| -w/o soft prompt | 47.76 | 25.77 | 33.16±6.00 | 52.11 | 41.64 | 45.87±5.28 | 54.97 | 51.56 | 53.16±5.45 | |
| -w/o contextual prompt | 46.13 | 26.04 | 32.86±6.21 | 55.75 | 36.00 | 43.32±4.79 | 55.21 | 50.28 | 52.41±3.53 | |
| -w/o prompt | 13.55 | 7.74 | 9.34±4.36 | 20.21 | 14.65 | 16.33±9.35 | 24.60 | 20.73 | 22.05±7.41 | |
| KBP2017 | | | | | | | | | | |
| Methods | ACE2004 | ACE2005 | GENIA | KBP2017 | | | | |
|------------------|-----------|-----------|---------|-----------|---------|---------|---------|---------|
| train | test | train | test | train | test | train | test | |
| Locate and Label | 88.41s | 39.28ms | 88.95s | 24.97ms | 89.57s | 35.28ms | 65.30s | 39.12ms |
| SEE-Few | 159.39s | 61.82ms | 207.70s | 36.14ms | 160.18s | 37.00ms | 150.72s | 52.44ms |
| SDNet | 58.04s | 37.86ms | 71.86s | 46.66ms | 121.05s | 50.97ms | 40.04s | 31.63ms |
| FIT(ours) | 122.37s | 31.99ms | 147.40s | 26.94ms | 156.24s | 36.89ms | 105.15s | 42.57ms |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
8 (Limitations)
✗ A2. Did you discuss any potential risks of your work?
We do not discuss them due to the space limitation, but we believe our study does not involve these potential risks.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract; 1 Introduction
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 4 Experiments
✓ B1. Did you cite the creators of artifacts you used?
4.1 Datasets; 4.2 Experiment Settings; 4.3 Baselines; A.1 Statistics of Nested Datasets; A.2 Detailed Parameter Settings; A.4 Baselines
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
A.1 Statistics of Nested Datasets;
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
We do not discuss them due to the space limitation, but the artifacts we have used are consistent with their intended use.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
The data we use are from datasets commonly used in the previous work, and sensitive information has been handled in the previous work.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
A.1 Statistics of Nested Datasets
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
A.1 Statistics of Nested Datasets The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
## C ✓ **Did You Run Computational Experiments?** 4 Experiment
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
A.2 Detailed Parameter Settings; 5 Time Complexity
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
A.2 Detailed Parameter Settings
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
4.2 Experiment Settings; 4.4 Experiment Results; 4.5 Ablation Study; 6 Discussion
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
A.2 Detailed Parameter Settings
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
luo-etal-2023-together | Together We Make Sense{--}Learning Meta-Sense Embeddings | https://aclanthology.org/2023.findings-acl.165 | Sense embedding learning methods learn multiple vectors for a given ambiguous word, corresponding to its different word senses. For this purpose, different methods have been proposed in prior work on sense embedding learning that use different sense inventories, sense-tagged corpora and learning methods. However, not all existing sense embeddings cover all senses of ambiguous words equally well due to the discrepancies in their training resources. To address this problem, we propose the first-ever meta-sense embedding method {--} Neighbour Preserving Meta-Sense Embeddings, which learns meta-sense embeddings by combining multiple independently trained source sense embeddings such that the sense neighbourhoods computed from the source embeddings are preserved in the meta-embedding space. Our proposed method can combine source sense embeddings that cover different sets of word senses. Experimental results on Word Sense Disambiguation (WSD) and Word-in-Context (WiC) tasks show that the proposed meta-sense embedding method consistently outperforms several competitive baselines. An anonymised version of the source code implementation for our proposed method is submitted to reviewing system. Both source code and the learnt meta-sense embeddings will be publicly released upon paper acceptance. | # Together We Make Sense**– Learning Meta-Sense Embeddings From** Pretrained Static Sense Embeddings
Danushka Bollegala†,‡
Haochen Luo† **Yi Zhou**♢
University of Liverpool†, Cardiff University♢, Amazon‡
[email protected] [email protected] [email protected]
## Abstract
Sense embedding learning methods learn multiple vectors for a given ambiguous word, corresponding to its different word senses. For this purpose, different methods have been proposed in prior work on sense embedding learning that use different sense inventories, sensetagged corpora and learning methods. However, not all existing sense embeddings cover all senses of ambiguous words equally well due to the discrepancies in their training resources. To address this problem, we propose the first-ever meta-sense embedding method
- Neighbour Preserving Meta-Sense Embeddings, which learns meta-sense embeddings by combining multiple independently trained source sense embeddings such that the sense neighbourhoods computed from the source embeddings are preserved in the meta-embedding space. Our proposed method can combine source sense embeddings that cover different sets of word senses. Experimental results on Word Sense Disambiguation (WSD) and Word-in-Context (WiC) tasks show that the proposed meta-sense embedding method consistently outperforms several competitive baselines.
## 1 Introduction
In contrast to static word embedding methods (Mikolov et al., 2013a; Pennington et al., 2014)
that learn a vector, that represents the meaning of a word, sense embedding methods (Loureiro and Jorge, 2019a; Camacho-Collados and Pilehvar, 2018; Scarlini et al., 2020a,b) learn multiple vectors per word, corresponding to the different senses of an ambiguous word. Prior work has shown that sense embeddings are useful for tasks such as Word Sense Disambiguation (WSD) and sense discrimination tasks such as Word in Context (WiC) (Loureiro and Jorge, 2019b; Pilehvar and Camacho-Collados, 2019). However, existing sense embeddings are trained on diverse resources such as sense tagged corpora or dictionary glosses, with varying levels of sense coverage (e.g. fullycovering all synsets in the WordNet or only a subset), and using different methods. Therefore, the performance reported by the existing sense embeddings on different downstream tasks and datasets vary significantly for different part-of-speech (PoS)
categories. Moreover, it is not readily clear which sense embedding learning method should be used for disambiguating words in a given domain.
Meta-embedding learning has been successfully used to learn accurate and high coverage wordand sentence-level meta-embeddings by combining independently trained multiple source embeddings (Bollegala and O'Neill, 2022; Yin and Schütze, 2016a). However, to the best of our knowledge, meta-embedding learning methods have not been applied for sense embeddings before. Compared to word-level meta-embedding, sense-level meta-embedding has two important challenges.
Challenge 1 (*missing senses*). Compared to learning meta-word embeddings, where each word is assigned a single embedding, in static sense embeddings an ambiguous word is associated with multiple sense embeddings, each corresponding to a distinct sense of the ambiguous word. However, not all of the different senses of a word might be equally covered by all source sense embeddings.
Challenge 2 (*Misalignment between sense and* context embeddings). In downstream tasks such as WSD, we must determine the correct sense s of an ambiguous word w in a given context (i.e. a sentence) c. This is done by comparing the sense embeddings for each distinct sense of w against the context embedding of c, for example, computed using a Masked Language Model (MLM)
such as BERT (Devlin et al., 2019). The sense corresponding to the sense embedding that has the maximum similarity with the context embedding is then selected as the correct sense of w in c. For 2638 sense embeddings such as LMMS (Loureiro and Jorge, 2019a) or ARES (Scarlini et al., 2020b) this is trivially achieved because they are both BERTbased embeddings and the cosine similarity between those sense embeddings and BERT embeddings can be directly computed. However, this is not the case for the meta-sense embeddings that exist in a different vector space than the context embeddings produced by BERT, where a projection between meta-sense and context embedding spaces must be learned before conducting WSD.
To address these challenges, we propose **Neighbourhood Preserving Meta-Sense Embedding**
(NPMS) by incorporating multiple independently trained **source** sense embeddings to learn a **meta**sense embedding such that the sense-related information captured by the source (input) sense embeddings is preserved in the (output) meta-sense embedding. NPMS can combine full-coverage sense embeddings with partial-coverage ones, thereby improving the sense coverage in the latter.
NPMS does not compare the source embeddings directly but require the nearest neighbours computed using source and meta sense embeddings to be similar. We call this *information preservation* criteria, and use Pairwise Inner Product (PIP) to compare the similarity distributions (nearest neighbours) over senses between meta and source embedding spaces. This allows us to address Challenge 1 using shared neighbours to compute the alignment between source and meta embedding spaces, without predicting any missing sense embeddings. To address Challenge 2, NPMS requires meta-sense embedding of a word sense to be similar to the contextualised (word) embeddings of the words that co-occur in the same sentence. We call this contextual alignment, and learn the sense-specific projection matrices that satisfy this criteria. This ensures that meta-sense embeddings could be used in downstream tasks such as WSD or WiC, where we must select the correct sense of an ambiguous word given its context.
We evaluate NPMS on WiC and WSD tasks against several competitive baselines for metaembedding. Experimental results show that NPMS
consistently outperforms all other methods in both tasks. More importantly, we obtain state-of-theart (SoTA) performance for WSD and WiC tasks, reported by any static sense embedding method. Source code for the proposed method is publicly
## 2 Related Work
Our work is related to both static sense embeddings and meta-embedding learning as we review next.
Static Sense Embeddings assign multiple embeddings for a single word, corresponding to its distinct senses. Reisinger and Mooney (2010)
proposed multi-prototype embeddings to represent word senses, which was extended by Huang et al. (2012) combining both local and global contexts. Both methods use a fixed number of clusters to represent a word, whereas Neelakantan et al.
(2014) proposed a non-parametric model, which estimates the number of senses dynamically per each word. Chen et al. (2014) initialised sense embeddings by means of glosses from WordNet, and adapted the skip-gram objective (Mikolov et al.,
2013b) to learn and improve sense embeddings jointly with word embeddings. Rothe and Schütze
(2015) used pretrained word2vec embeddings to compose sense embeddings from sets of synonymous words. Camacho-Collados et al. (2016) created sense embeddings using structural knowledge from the BabelNet (Navigli and Ponzetto, 2010).
Loureiro and Jorge (2019a) constructed sense embeddings by taking the average over the contextualised embeddings of the sense annotated tokens from SemCor. Scarlini et al. (2020a) used the lexical-semantic information in BabelNet to produce sense embeddings without relying on senseannotated data. Scarlini et al. (2020b) also proposed ARES, a knowledge-based approach for constructing BERT-based embeddings of senses by means of the lexical-semantic information in BabelNet and Wikipedia.
Meta embedding learning was first proposed for combining multiple pretrained static word embeddings (Yin and Schütze, 2016b). Vector concatenation (Bollegala, 2022) is known to be a surprisingly strong baseline but increases the dimensionality of the meta-embedding with more sources. Coates and Bollegala (2018) showed that averaging performs comparable to concatenation under certain orthonormal conditions, while not increasing the dimensionality. Learning orthogonal projections prior to averaging has shown to further improve performance (Jawanpuria et al., 2020). Globally 1https://github.com/LivNLP/NPMS
linear (Yin and Schütze, 2016b), locally linear (Bollegala and Bao, 2018a) and autoencoder-based nonlinear projections (Bollegala and Bao, 2018b) have been used to learn word-level meta-embeddings.
Meta-embedding methods have been used for contextualised word embeddings (Kiela et al., 2018)
and sentence embeddings (Takahashi and Bollegala, 2022; Poerner et al., 2020). For an extensive survey on meta-embedding learning see Bollegala and O'Neill (2022). However, to our best knowledge, we are the first to apply meta-embedding learning methods to learn sense embeddings.
## 3 Meta-Sense Embedding Learning
To explain our proposed method in detail, let us first consider a vocabulary V of words w ∈ V.
We further assume that each word w is typically associated with one or more distinct senses s and the set of senses associated with w is denoted by Sw. In meta-sense embedding learning, we assume a sense s of a word to be represented by a set of n source sense embeddings. Let us denote the j-th source embedding of s by xj (s) ∈ R
dj, where dj is the dimensionality of the j-th source embedding.
We project the j-th source embedding by a matrix Pj ∈ R
d×djinto a common meta-sense embedding space with dimensionality d. The meta-sense embedding, m(s) ∈ R
d of s is computed as the unweighted average of the projected source sense embeddings as given by (1).
$$m(s)={\frac{1}{n}}\sum_{j=1}^{n}\mathbf{P}_{j}\mathbf{x}_{j}(s)\qquad\qquad{\mathrm{(1)}}$$
After this projection step, all source sense embeddings live in the same d-dimensional vector space, thus enabling us to add them as done in (1).
An advantage of considering the average of the projected source embeddings as the meta-sense embedding is that, even if a particular sense is not covered by one or more source sense embeddings, we can still compute a meta-sense embedding using the remainder of the source sense embeddings. Moreover, prior work on word-level and sentence-level meta-embedding have shown that averaging after a linear projection improves performance when learning meta embeddings (Coates and Bollegala, 2018; Jawanpuria et al., 2020; Poerner et al., 2020).
If we limit the projection matrices to be orthonormal, they can be seen as optimally rotating the source sense embeddings such that the projected source embeddings could be averaged in the metaembedding space. However, we observed that dropping this regularisation term to produce better metasense embeddings in our experiments. Therefore, we did not impose any orthonormality restrictions on the projection matrices.
We require a meta-sense embedding to satisfy two criteria: (a) **sense information preservation**
and (b) **contextual alignment**. The two criteria jointly ensure that the meta-sense embeddings we learn are accurate and can be used in downstream tasks such as WSD in conjunction with contextualised word embeddings produced by an MLM.
Next, we describe each of those criteria in detail.
## 3.1 Sense Information Preservation
Given that the individual source sense embeddings are trained on diverse sense-related information sources, we would like to preserve this information as much as possible in the meta-sense embeddings we create from those source sense embeddings.
This is particularly important in meta-embedding learning because we might not have access to all the resources that were used to train the individual source sense embeddings, nor we will be training meta-embeddings from scratch but will be relying upon pretrained sense embeddings as the sole source of sense-related information into the metaembedding learning process. Therefore, we must preserve the complementary sense-related information encoded in the source sense embeddings as much as possible in their meta-sense embedding.
It is not possible however to directly compare the meta-sense embeddings computed using (1)
against the source sense embeddings because they have different dimensionalities and live in different vector spaces. This makes it challenging when quantifying the amount of information lost due to meta embedding using popular loss functions such as squared Euclidean distance between source and meta embeddings. To address this problem we resort to PIP, which has been previously used to determine the optimal dimensionality of word embeddings (Yin and Shen, 2018) and learning concatenated word-level meta embeddings (Bollegala, 2022).
Given a source/meta embedding matrix E, the corresponding PIP matrix is given by (2)
$$\mathrm{PIP}(\mathbf{E})=\mathbf{E}\mathbf{E}^{\intercal}$$
$\eqref{eq:walpha}$.
PIP(E) = EE⊤ (2)
Specifically, PIP matrix contains the innerproducts between all pairs of sense embeddings represented by the rows of E. PIP(E) is a symmetric matrix with its number of rows (columns) equal to the total number of unique senses covering all the words in the vocabulary. PIP matrices can be efficiently computed for larger dimensions and vocabularies because the inner-product computation can be parallelized over the embeddings.
Let us denote the source sense embedding matrix for the j-th source by Xj , where the i-th row represents sense embedding xj (si) learnt for the i-th sense si. Likewise, let us denote by M the meta-sense embedding matrix, where the i-th row represents the meta-sense embedding m(si) computed for si using (1). Because the shape of PIP
matrices are independent from the dimensionalities of the embedding spaces, and the rows are aligned
(i.e. sorted by the sense ids si), we can compare the meta-sense embedding against the individual source sense embedding using PIP loss, Lpip, given by (3).
$$L_{\text{pip}}=\sum_{j=1}^{n}||\text{PIP}(\mathbf{X}_{j})-\text{PIP}(\mathbf{M})||_{F}^{2}\tag{3}$$ Here, $||\mathbf{A}||_{F}=\sqrt{\sum_{l,m}a_{lm}^{2}}$ denotes the Frobe-1.
lm denotes the Frobenius norm of the matrix A. PIP loss can be seen as comparing the distributions of similarity scores computed using the meta-sense embedding and each of the individual source sense embeddings for the same set of senses. Although the actual vector spaces might be different and initially not well-aligned due to the projection and averaging steps in (1), we would require the neighbourhoods computed for each word to be approximately similar in the meta-sense embedding space and each of the source sense embedding spaces. PIP loss given in (3) measures this level of agreement between meta and source embedding spaces.
## 3.2 Contextual Alignment
The context in which an ambiguous word has been used provides useful clues to determine the correct sense of that word (Zhou and Bollegala, 2021).
For example, consider the following two sentences:
(S1) I went to the bank *to withdraw some cash.*,
and **(S2)** The river bank *was crowded with people* doing BBQs. Words *cash* and *withdraw* indicate the financial institute sense of bank in S1, whereas the words river, BBQ indicate the *sloping land* sense of bank in S2.
Let us denote the contextualised word embedding of a word w in a context c by f(w; c). MLMs such as BERT and RoBERTa (Liu et al., 2019)
have been used in prior work in WSD to compute context-sensitive representations for ambiguous words. Then, the above-described agreement between the sense s of w and its context c can be measured by the similarity between the meta-sense embedding m(s) and the contextualised embedding f(w; c). We refer to this requirement as the contextual alignment between a meta-sense embedding and contextualised word embeddings.
Given a sense annotated dataset such as SemCor, we represent it by a set T of tuples (*w, s, c*), where the word w is annotated with its correct sense s in context c. Then, we define the contextual alignment loss Lcont as (negative) average cosine similarity between m(s) and f(w; c), given by (4).
$$L_{\mathrm{cont}}=-\sum_{(w,s,c)\in\mathcal{T}}\frac{\mathbf{m}(s)^{\top}\mathbf{f}(w;c)}{||\mathbf{m}(s)||_{2}\,||\mathbf{f}(w;c)||_{2}}\quad(4)$$
Minimising the contextual alignment loss in (4),
will maximise the cosine similarity between the meta-sense embedding and the corresponding contextualised embedding.
In contrast to the PIP-loss defined by (3), which can be computed without requiring sense annotated data, the contextual alignment loss defined by (4) requires sense annotated data. However, SemCor, the sense annotated dataset that we use for computing the contextual alignment loss in this paper, is already being used by many existing pretrained source sense embeddings. Therefore, we emphasise that we are not requesting for any additional training resources during the meta-sense embedding learning process beyond what has been already used to train the source sense embeddings. Moreover, ablation studies (§5) show that PIP-loss alone obtains significant improvements, without the contextual alignment loss.
Contextual alignment loss can also be motivated from an application perspective. Sense embeddings are often used to represent word senses in downstream tasks such as WSD. A typical approach for predicting the sense of an ambiguous word w as used in a given context c is to measure the cosine similarity between each sense embedding of w and the context embedding for c (Scarlini et al., 2020b; Loureiro and Jorge, 2019a). The objective given in
(4) can be seen as enforcing this property directly into the meta-sense embedding learning process.
As we later see in §4, NPMS perform particularly well in WSD benchmarks.
In order to be able to compute the cosine similarity between meta-sense embeddings and contextualised word embeddings, we must first ensure that they have the same dimensionality. This can be achieved by either (a) setting the dimensionality of the meta-sense embeddings equal to that of the contextualised word embeddings, or (b) by learning a projection matrix that adjusts the dimensionality of the meta-sense embeddings to that of the contextualised word embeddings.
## 3.3 Parameter Learning
We consider the linearly-weighted sum of the PIPloss and contextual alignment loss as the total loss, Ltot, given by (5).
$$L_{\mathrm{tot}}(\{\mathbf{P}_{j}\}_{j=1}^{n})=\alpha L_{\mathrm{pip}}+(1-\alpha)L_{\mathrm{cont}}$$
Here, the parameters to be learnt are the projection matrices Pj for the sources j = 1*, . . . , n*. The weighting coefficient α ∈ [0, 1] is a hyperparameter determining the emphasis between the two losses. In our experiments, we tune α using validation set of the Senseval-3 WSD dataset (Snyder and Palmer, 2004).
Compared to the cosine similarity, which is upper bounded by 1, the PIP-loss grows with the size of the PIP matrices being used. Therefore, we found that scaling the two losses by their mean values to be important to stabilise the training. We initialise the projection matrices to the identity matrix and use vanilla stochastic gradient descent with a learning rate of 0.001, determined using the validation set of the Senseval-3 WSD dataset.
## 4 Experiments 4.1 Source Embeddings
Our proposed NPMS is agnostic to the methods used to learn the source sense embeddings, and thus in principle can be used to meta-embed any source sense embedding. In our experiments, we use the following source sense embeddings because of their accuracy, public availability, coverage word senses and diversity (i.e. trained on different resources to have different dimensionalities) such that we can conduct an extensive evaluation.
LMMS (Loureiro and Jorge, 2019a) (Language Modelling Makes Sense) is a supervised approach to learn full-coverage static sense embeddings that cover all of the 206,949 senses in the WordNet. We use three variants of LMMS (Loureiro et al., 2022) embeddings2as sources in our experiments: (a) **LMMS** (uses 1024 dimensional bert-large-cased3embeddings with semantic networks (i.e., WordNet) and glosses to create 2048 dimensional sense emebddings), (b) **LMMS (XLNet)** (uses 1024 dimensional xlnet-large-cased4as the base MLM,
and averages contextualised embeddings computed from SemCor and WordNet glosses), and
(c) **LMMS (RoBERTa)** (uses 1024 dimensional roberta-large5as the base MLM, and and averages contextualised embeddings computed from SemCor and WordNet glosses).
SenseEmBERT (Scarlini et al., 2020a) (Sense Embedded BERT) obviates the need for senseannotated corpora by using the BabelNet6 mappings between WordNet senses and Wikipedia pages to construct sense embeddings with 2048 dimensions, covering all the 146,312 English nominal senses in the WordNet. Each sense embedding consists of two components: (a) the average of the word embedding of the a target sense's relevant words, and (b) the average of the BERT encoded tokens of the sense gloss. For the brevity of the notation, we denote SenseEmBERT as **SBERT** in the remainder of this paper.
ARES (Scarlini et al., 2020b) (context-AwaRe EmbeddingS) is a semi-supervised method that learns sense embeddings with full-coverage of the WordNet and is 2048 dimensional. ARES embeddings are created by applying BERT on the glossary information and the information contained in the SyntagNet (Maru et al., 2019). It outperforms LMMS in WSD benchmarks.
DeConf (Pilehvar and Collier, 2016) are the 50dimensional7 De-conflated Semantic Embeddings created from Wikipedia and Gigaword corpus using GloVe (Pennington et al., 2014). DeConf enables us to evaluate the effect of combining a source that has significantly smaller dimensionality than the other source sense embeddings.
The intersection of the LMMS2048 and ARES
contains 206,949 senses, which is equivalent to the total number of senses in the WordNet because they both cover all the sense in the WordNet (i.e.
2https://github.com/danlou/LMMS
3https://huggingface.co/bert-large-cased 4https://huggingface.co/xlnet-large-cased 5https://huggingface.co/roberta-large 6babelnet.org 7https://pilehvar.github.io/deconf/
full coverage sense embeddings). On the other hand, the intersection between the LMMS2048 and SensEmBERT as well as the intersection between the ARES and SensEmBERT contains 146,312 senses, which is the total number of nominal senses in the WordNet. By using source sense embeddings with different sense coverages we aim to evaluate the ability of meta-sense embedding methods to learn accurate sense embeddings by exploiting the complementary strengths in the sources.
## 4.2 Evaluation Tasks
We compare the accuracy of meta-sense embeddings using two standard tasks that have been used in prior work on sense embedding learning.
Word Sense Disambiguation (WSD): WSD is a longstanding problem in NLP, which aims to assign an ambiguous word in a context with a word sense (Navigli, 2009). To test whether NPMS can disambiguate the different senses of an ambiguous word, we conduct a WSD task using the evaluation framework proposed by Raganato et al. (2017), which contains all-words English WSD datasets: Senseval-2 (SE2; Edmonds and Cotton, 2001), Senseval-3 (SE3; Snyder and Palmer, 2004), SemEval-07 (**SE07**; Pradhan et al.,
2007), SemEval-13 (**SE13**; Navigli et al., 2013)
and SemEval-15 (**SE15**; Moro and Navigli, 2015).
We use the official framework to avoid any discrepancies in the scoring methodology.
We perform WSD following the 1-NN procedure, where we compute the contextualised embedding, f(w; c), produced using an MLM.8 We then measure the cosine similarity, ϕ(m(s), f(w; c)),
between the source/meta sense embedding for each sense s of w, m(s), and f(w; c), and select the sense with the maximum cosine similarity as the correct sense of w in c.
Word-in-Context (WiC): WiC is framed as a binary classification task, where given a target word w and two contexts c2 and c2, the objective is to determine if w occurring in c1 and c2 carries the same meaning. A method that assigns the same vector to all senses of w would report a chancelevel (i.e. 50%) accuracy on WiC.
Given a target word w in two contexts c1 and c2, we first determine the meta-sense embeddings of w, which are m(s1) and m(s2) corresponding to the senses of w used in respectively c1 8In the case of BERT, we average the last four layers for each word w in a test sentence c.
and c2. Let the contextualised word embedding of w in c1 and c2 respectively be f(w; c1) and f(w; c2). We train a binary logistic regression classifier on the WiC training set. Following the work from Zhou and Bollegala (2021), we use the cosine similarities between the two vectors in the following six pairs as features: ϕ(m(s1), m(s2)),
ϕ(f(w; c1), f(w; c2)), ϕ(m(s1), f(w; c1)),
ϕ(m(s2), f(w; c2)), ϕ(m(s1), f(w; c2)) and ϕ(m(s2), f(w; c1)).
## 4.3 Meta-Embedding Methods
We extend prior works on word-level metaembedding learning to meta-sense embedding learning by taking the sense embeddings described in §4.1 as source embeddings, and compare them with **NPMS** embeddings. We compare against the following methods:
- AVG (Coates and Bollegala, 2018) takes the average over the embeddings of a sense from different sources embeddings.
- **CONC** (Yin and Schütze, 2016a) creates meta-embeddings by concatenating the embeddings from different source embeddings.
- SVD (Yin and Schütze, 2016a) performs dimensionality reduction on the concatenated source embeddings.
- **AEME** (Bollegala and Bao, 2018a) is an autoencoder-based method for metaembedding learning, which is the current SoTA unsupervised word-level metaembedding learning method.
We use 2048 output dimensions for both SVD and AEME in the experiments, determined to be the best for those methods on validation data.
As noted in § 4.2, both WSD and WiC tasks require us to compute the cosine similarity, ϕ, between a source/meta sense embedding, m(s), of a sense s and a contextualised word embedding, f(w; c), of the ambiguous word w in context c.
However, unlike for NPMS, which explicitly guarantees that its meta-sense embeddings are directly comparable with the contextualised word embeddings via the contextual loss (4), in general, the meta-sense embeddings produced by other methods do not always exist in the contextualised word embedding space associated with the MLM, which requires careful consideration as discussed next.
As an concrete example, let us consider the metaembedding of the three sources LMMS, ARES
and SenseEmBERT, all of which are 2048 dimensional and computed by concatenating two 1024dimensional BERT embeddings, averaged over different lexical resources. Therefore, using the same 1024-dimensional BERT embeddings and by concatenating f(w; c) twice, we can obtain a 2048dimensional BERT-based contextualised embedding for w that can be used to compute the cosine similarity with a source sense embedding in this case. We consider the meta-embedding of source sense embeddings with different dimensionalities and MLMs other than BERT such as LMMS (XLNet), LMMS (RoBERTa) and DeConf later in our experiments.
Next, let us consider the meta-sense embeddings produced by CONC. Because the inner-product decomposes trivially over vector concatenation, we can copy and concatenate f(w; c) to match m(s)
produced by CONC. For example, if CONC is used with LMMS and ARES, we can concatenate f(w; c) four times, and then compute the innerproduct with the meta-sense embedding. AVG
does not change the dimensionality of the metasense embedding space. Therefore, we only need to concatenate f(w; c) twice when computing the cosine similarity with AVG for any number of source sense embeddings.
Unfortunately, the meta-sense embedding spaces produced by SVD and AEME are not directly comparable against that of contextualised embeddings due to the differences in dimensionality and nonlinear transformations introduced (cf. AEME uses autoencoders). Therefore, we learn a projection matrix, A, between m(s) and f(w; c) by minimising the squared Euclidan distance given by (6),
computed using the SemCor training dataset, T .
$$\sum_{(w,s,c)\in{\mathcal{T}}}||\mathbf{A}m(s)-\mathbf{f}(w;c)||_{2}^{2}\qquad\qquad(6)$$
After training, we compute the cosine similarity, ϕ(Am(s), f(w; c)), between the transformed SVD and AEME meta-sense embedding and contextualised embeddings.
## 5 Results 5.1 Effect Of Meta-Embedding Learning
Table 1 compares the performance of NPMS
against the meta-embedding methods described in
§4.3 on WSD and WiC. We see that NPMS obtains the overall best performance for WSD (ALL) as well as on WiC. Among the three sources, ARES reports the best performance for WSD (ALL), while SBERT does so for WiC. In SE2, SE07 datasets NPMS reports the best performance, whereas AVG,
SBERT and ARES do so respectively in SE3, SE13 and SE15. Among the baseline methods, we see AVG to report the best results, which is closely followed by CONC. Poor performance of SVD shows the challenge of applying dimensionality reduction methods on CONC due to missing sense embeddings. Although AEME has reported the SoTA
performance for word-level meta-embedding, applying it directly on sense embeddings is suboptimal. This shows the difference between wordvs. sense-level meta-embedding learning problems, and calls for sense-specific meta-embedding learning methods.
According to the WiC leader board,9the performance reported by NPMS is second only to SenseBERT (Levine et al., 2020), which is a contextualised sense embedding method obtained by fine tuning BERT on WordNet supersenses. Therefore, the performance of NPMS can be seen as the SoTA
for any *static* sense embedding method.
## 5.2 Effect Of Source Embeddings
The performance of a meta-embedding depends on the source embeddings used. Therefore, we evaluate the ability of NPMS to create meta-sense embeddings from diverse source sense embeddings that have different dimensionalities and created from different MLMs. Due space limitations, in Table 2 we compare NPMS against AVG, which reported the best performance among all other metaembedding learning methods in Table 1. From Table 2, we see that when the dimensionalities of the two source sense embeddings are identical (i.e.
2048 dimensional LMMS + ARES or LMMS +
SBERT configurations) or similar (i.e. 2048 dimensional ARES + 2048 dimensional SBERT configuration), AVG closely matches the performance of NPMS in WSD and WiC evaluations. However, we see a drastically different trend when the two sources are not BERT-based (e.g. XLNet, RoBERTa) or when they have significantly different dimensionalities (1024 dimensional LMMS
(XLNet), LMMS (RoBERTa) and 50 dimensional DeConf). In such settings, we see that NPMS to performs significantly better than AVG across all 9https://pilehvar.github.io/wic/
SE2 SE3 SE07 SE13 SE15 ALL WiC
LMMS 76.34 75.57 68.13 75.12 77.01 75.44 69.30
ARES 78.05 77.08 70.99 77.31 **83.17** 77.91 68.50
SBERT 53.11 52.22 41.37 **78.77** 55.12 59.85 71.14
AVG 79.36 **77.46** 70.33 77.86 80.82 78.17 71.16
CONC 78.22 77.14 70.99 77.37 82.97 77.97 70.38 SVD 75.02 74.22 67.25 72.81 74.85 73.80 63.01
AEME 78.53 76.92 69.01 76.09 78.96 77.03 70.69
NPMS **79.93** 77.30 **71.65** 77.49 81.21 **78.37 71.47**
Table 1: F1 scores on WSD benchmarks and accuracy on WiC are shown for the three sources (top) and for the different meta-embedding methods (bottom).
Table 2: Meta-sense embedding of sources with different dimensionalities (shown in brackets) and MLMs.
WSD benchmarks as well as on WiC. Recall that AVG assumes (a) the source embedding spaces to be orthogonal, and (b) applies zero-padding to the smaller dimensional source embeddings to make them aligned with the rest of the source embeddings. Both of those assumptions do not hold true when the source embeddings are created from diverse MLMs or have significantly different numbers of dimensions, which leads to suboptimal performances in AVG. On the other hand, NPMS does not directly compare source sense embeddings, but instead consider neighbourhoods computed from the source sense embeddings. Moreover, zeropadding is not required in NPMS because the contextual alignment step ensures the proper alignment between the contextual embedding and meta-sense
SE2 SE3 SE07 SE13 SE15 ALL WiC
LMMS(BERT) [2048] + ARES (BERT) [2048]
AVG **78.79** 77.03 69.89 77.13 **81.80** 77.83 70.22 NPMS 78.53 **77.14 71.87 77.37** 81.60 **77.93** 70.22
ARES (BERT) [2048] + SBERT [2048]
AVG 78.57 77.35 71.21 78.10 **81.70** 78.13 71.32 NPMS **78.79** 77.41 **71.65 78.53** 81.41 **78.30** 71.32
LMMS (BERT) [2048] + SBERT [2048]
AVG 77.70 76.16 68.79 78.04 77.69 76.82 69.59
NPMS **78.05 76.86 69.89 78.28 78.28 77.32 71.79**
LMMS(XLNet) [1024] + DeConf [50]
AVG 40.80 35.68 21.32 41.61 43.93 38.89 66.46
NPMS **50.88 41.68 40.66 53.04 53.13 48.70 69.26**
LMMS(RoBERTa) [1024] + DeConf [50]
AVG 39.35 34.97 26.15 41.48 42.47 38.33 66.46
NPMS **48.77 44.81 39.34 53.41 53.52 48.89 69.75**
Table 3: Effect of learning a projection matrix between meta-sense vs. BERT embedding spaces.
embedding spaces. These advantages of NPMS are clearly evident from Table 2.
## 5.3 Effect Of Projection Learning
Table 3 shows the importance of learning a projection matrix via (6) between meta-sense and contex-
| Method | WSD (ALL) | WiC |
|--------------------|-------------|-------|
| SVD with proj. | 74.80 | 66.93 |
| SVD without proj. | 35.90 | 60.34 |
| AEME with proj. | 76.02 | 68.65 |
| AEME without proj. | 41.60 | 53.61 |
| SE2 | SE3 | SE07 | SE13 | SE15 | ALL | WiC | |
|------------|-------|--------|--------|--------|-------|-------|-------|
| Both | 79.93 | 77.30 | 71.65 | 77.49 | 81.21 | 78.37 | 71.47 |
| Lpip only | 79.80 | 77.03 | 71.87 | 77.49 | 80.72 | 78.20 | 70.69 |
| Lcont only | 79.54 | 77.19 | 70.77 | 77.86 | 80.33 | 78.12 | 71.32 |
tualised embeddings, for SVD and AEME. We see that the performance of both of those methods drop significantly without the projection matrix learning step. Even with projection matrices, SVD and AEME do not outperform simpler baselines such as AVG or CONC. On the other hand, NPMS does not require such a projection matrix learning step and consistently outperforms all those methods across multiple WSD and WiC benchmarks.
## 5.4 Effect Of The Two Losses
To understand the contributions of the two loss terms PIP-loss (Lpip) and contextual alignment loss (Lcont), we conduct an ablation study where we train NPMS with three sources using only one of the two losses at a time. From Table 4, we see that in both WiC and WSD (ALL, SE2, SE3, SE15), the best performance is obtained by using both losses. Each loss contributes differently in different datasets, although the overall difference between the two losses is non-significant (according to a paired Student's t-test with p < 0.05).
This is particularly encouraging because PIP-loss can be computed without having access to a sense labelled corpus such as SemCor. Such resources might not be available in specialised domains such as medical or legal texts. Therefore, in such cases we can still apply NPMS trained using only the PIPloss. Although we considered a linearly-weighted combination of the two losses in (5), we believe further improvements might be possible by exploring more complex (nonlinear) combinations of the two losses. However, exploring such combinations is beyond the scope of current paper and is deferred to future work.
## 6 Conclusion
We proposed the first-ever meta-sense embedding learning method. Experimental results on WiC and WSD datasets show that our proposed NPMS surpasses previously published results for static sense embedding, and outperforms multiple word-level meta-embedding learning methods when applied to sense embeddings. Our evaluations were limited to English and we will consider non-English sense embeddings in our future work.
## 7 Limitations
All source sense embeddings we used in our experiments are only covering the English language, which is morphologically limited. Therefore, it is unclear whether our results and conclusions will still be valid for meta-sense embeddings created for languages other than English. On the other hand, there are WSD and WiC benchmarks for other languages such as SemEval-13, SemEval-15, XL-WSD (Pasini et al., 2021) and WiC-XL (Raganato et al., 2020), as well as multilingual sense embeddings such as ARESm (Scarlini et al., 2020b)
and SensEmBERT (Scarlini et al., 2020a). Extending our evaluations to cover multilingual sense embeddings is deferred to future work.
Our meta-sense embedding method requires static sense embeddings, and cannot be applied to contextualised sense embedding methods such as SenseBERT (Levine et al., 2020). There have been some work on learning word-level and sentencelevel (Takahashi and Bollegala, 2022; Poerner et al.,
2020) meta-embeddings using contextualised word embeddings produced by MLMs as the source embeddings. However, contextualised sense embedding methods are limited compared to the numerous static sense embedding methods. This is partly due to the lack of large-scale sense annotated corpora, required to train or fine-tune contextualised sense embeddings. Extending our work to learn meta-sense embeddings using contextualised word embeddings as source embeddings is an interesting future research direction.
## 8 Ethical Considerations
We compared our proposed method, NPMS, with several baselines on WSD and WiC tasks. In this work, we did not annotate any datasets by ourselves and used corpora and benchmark datasets that have been collected, annotated and repeatedly used for evaluations in prior works. To the best of our knowledge, no ethical issues have been reported concerning these datasets. Nevertheless, prior work from Zhou et al. (2022) shows that pretrained sense embeddings encode various types of social biases such as gender and racial biases. Moreover, it has also been reported recently that word-level metaembedding methods can amplify the social biases encoded in the source embeddings (Kaneko et al.,
2022). Therefore, we emphasise that it is important to evaluate the meta-sense embeddings learnt in this work for unfair social biases before they are deployed to downstream applications.
## Acknowledgements
Danushka Bollegala holds concurrent appointments as a Professor at University of Liverpool and as an Amazon Scholar. This paper describes work performed at the University of Liverpool and is not associated with Amazon.
## References
Danushka Bollegala. 2022. Learning meta word embeddings by unsupervised weighted concatenation of source embeddings. In Proc. of the 31st International Joint Conference on Artificial Intelligence
(IJCAI-ECAI).
Danushka Bollegala and Cong Bao. 2018a. Learning word meta-embeddings by autoencoding. In *Proceedings of the 27th international conference on computational linguistics*, pages 1650–1661.
Danushka Bollegala and Cong Bao. 2018b. Learning word meta-embeddings by autoencoding. In *Proceedings of the 27th International Conference on Computational Linguistics*, pages 1650–1661, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Danushka Bollegala and James O'Neill. 2022. A survey on word meta-embedding learning. In *Proc. of* the 31st International Joint Conference on Artificial Intelligence (IJCAI-ECAI).
Jose Camacho-Collados and Mohammad Taher Pilehvar.
2018. From word to sense embeddings: A survey on vector representations of meaning. *J. Artif. Int. Res.*,
63(1):743–788.
José Camacho-Collados, Mohammad Taher Pilehvar, and Roberto Navigli. 2016. Nasari: Integrating explicit knowledge and corpus statistics for a multilingual representation of concepts and entities. Artificial Intelligence, 240:36–64.
Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. 2014.
A unified model for word sense representation and
disambiguation. In *Proceedings of the 2014 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 1025–1035, Doha, Qatar.
Association for Computational Linguistics.
Joshua Coates and Danushka Bollegala. 2018. Frustratingly easy meta-embedding - computing metaembeddings by averaging source word embeddings.
In *Proceedings of the 2018 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 194–198, New Orleans, Louisiana. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Philip Edmonds and Scott Cotton. 2001. SENSEVAL2: Overview. In *Proceedings of SENSEVAL-2 Second International Workshop on Evaluating Word* Sense Disambiguation Systems, pages 1–5, Toulouse, France. Association for Computational Linguistics.
Eric H Huang, Richard Socher, Christopher D Manning, and Andrew Y Ng. 2012. Improving word representations via global context and multiple word prototypes.
In ACL, pages 873–882.
Pratik Jawanpuria, Satya Dev N T V, Anoop Kunchukuttan, and Bamdev Mishra. 2020. Learning geometric word meta-embeddings. In *Reps4NLP*, pages 39–44, Online.
Masahiro Kaneko, Danushka Bollegala, and Naoaki Okazaki. 2022. Gender bias in meta-embeddings. In Proc. of 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP 2022).
Douwe Kiela, Changhan Wang, and Kyunghyun Cho.
2018. Dynamic meta-embeddings for improved sentence representations. In *EMNLP*, pages 1466–1477.
Yoav Levine, Barak Lenz, Or Dagan, Ori Ram, Dan Padnos, Or Sharir, Shai Shalev-Shwartz, Amnon Shashua, and Yoav Shoham. 2020. SenseBERT: Driving some sense into BERT. In *Proceedings of the* 58th Annual Meeting of the Association for Computational Linguistics, pages 4656–4667, Online. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
RoBERTa: A Robustly Optimized BERT Pretraining Approach.
Daniel Loureiro and Alípio Jorge. 2019a. Language modelling makes sense: Propagating representations through WordNet for full-coverage word sense disambiguation. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 5682–5691, Florence, Italy. Association for Computational Linguistics.
Daniel Loureiro and Alípio Mário Jorge. 2019b. Liaad at semdeep-5 challenge: Word-in-context (wic). In SemDeep@IJCAI.
Daniel Loureiro, Alípio Mário Jorge, and Jose CamachoCollados. 2022. LMMS reloaded: Transformerbased sense embeddings for disambiguation and beyond. *Artificial Intelligence*, 305:103661.
Marco Maru, Federico Scozzafava, Federico Martelli, and Roberto Navigli. 2019. SyntagNet: Challenging supervised word sense disambiguation with lexical-semantic combinations. In *Proceedings of* the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 3534–3540, Hong Kong, China. Association for Computational Linguistics.
Tomás Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In 1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In *Proceedings of the 26th International* Conference on Neural Information Processing Systems - Volume 2, NIPS'13, pages 3111–3119, Red Hook, NY, USA. Curran Associates Inc.
Andrea Moro and Roberto Navigli. 2015. SemEval2015 task 13: Multilingual all-words sense disambiguation and entity linking. In *Proceedings of the* 9th International Workshop on Semantic Evaluation
(SemEval 2015), pages 288–297, Denver, Colorado.
Association for Computational Linguistics.
Roberto Navigli. 2009. Word sense disambiguation: A
survey. *ACM computing surveys (CSUR)*, 41(2):1–
69.
Roberto Navigli, David Jurgens, and Daniele Vannella.
2013. SemEval-2013 task 12: Multilingual word sense disambiguation. In *Second Joint Conference* on Lexical and Computational Semantics (*SEM),
Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013),
pages 222–231, Atlanta, Georgia, USA. Association for Computational Linguistics.
Roberto Navigli and Simone Paolo Ponzetto. 2010. Babelnet: Building a very large multilingual semantic network. In *Proceedings of the 48th annual meeting of the association for computational linguistics*,
pages 216–225.
Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. 2014. Efficient nonparametric estimation of multiple embeddings per word in vector space. In *EMNLP*, pages 1059–1069.
Tommaso Pasini, Alessandro Raganato, and Roberto Navigli. 2021. Xl-wsd: An extra-large and crosslingual evaluation framework for word sense disambiguation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 13648–
13656.
Jeffrey Pennington, Richard Socher, and Christopher D
Manning. 2014. Glove: Global vectors for word representation. In *Proceedings of the 2014 conference* on empirical methods in natural language processing
(EMNLP), pages 1532–1543.
Mohammad Taher Pilehvar and Jose Camacho-Collados.
2019. WiC: the word-in-context dataset for evaluating context-sensitive meaning representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1267–1273, Minneapolis, Minnesota. Association for Computational Linguistics.
Mohammad Taher Pilehvar and Nigel Collier. 2016. Deconflated semantic representations. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1680–1690, Austin, Texas. Association for Computational Linguistics.
Nina Poerner, Ulli Waltinger, and Hinrich Schütze. 2020.
Sentence meta-embeddings for unsupervised semantic textual similarity. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7027–7034, Online. Association for Computational Linguistics.
Sameer Pradhan, Edward Loper, Dmitriy Dligach, and Martha Palmer. 2007. SemEval-2007 task-17: English lexical sample, SRL and all words. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 87–92, Prague, Czech Republic. Association for Computational Linguistics.
Alessandro Raganato, Jose Camacho-Collados, and Roberto Navigli. 2017. Word sense disambiguation:
A unified evaluation framework and empirical comparison. In *EACL*, pages 99–110.
Alessandro Raganato, Tommaso Pasini, Jose CamachoCollados, and Mohammad Taher Pilehvar. 2020. XLWiC: A multilingual benchmark for evaluating semantic contextualization. In *Proceedings of the 2020* Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7193–7206, Online. Association for Computational Linguistics.
Joseph Reisinger and Raymond Mooney. 2010. Multiprototype vector-space models of word meaning. In NAACL-HLT, pages 109–117.
Sascha Rothe and Hinrich Schütze. 2015. AutoExtend: Extending word embeddings to embeddings for synsets and lexemes. In *Proceedings of the 53rd* Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1:
Long Papers), pages 1793–1803, Beijing, China. Association for Computational Linguistics.
Bianca Scarlini, Tommaso Pasini, and Roberto Navigli.
2020a. SensEmBERT: Context-Enhanced Sense Embeddings for Multilingual Word Sense Disambiguation. In Proceedings of the Thirty-Fourth Conference on Artificial Intelligence, pages 8758–8765. Association for the Advancement of Artificial Intelligence.
Bianca Scarlini, Tommaso Pasini, and Roberto Navigli.
2020b. With More Contexts Comes Better Performance: Contextualized Sense Embeddings for AllRound Word Sense Disambiguation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
Benjamin Snyder and Martha Palmer. 2004. The English all-words task. In Proceedings of SENSEVAL-3, the Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text, pages 41–43, Barcelona, Spain. Association for Computational Linguistics.
Keigo Takahashi and Danushka Bollegala. 2022.
Unsupervised attention-based sentence-level metaembeddings from contextualised language models.
In *Proc. of LREC*.
Wenpeng Yin and Hinrich Schütze. 2016a. Learning word meta-embeddings. In *Proceedings of the 54th* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1351–
1360.
Wenpeng Yin and Hinrich Schütze. 2016b. Learning word meta-embeddings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1351–
1360, Berlin, Germany. Association for Computational Linguistics.
Zi Yin and Yuanyuan Shen. 2018. On the dimensionality of word embedding. In *Proc. of NeurIPS*, pages 887– 898.
Yi Zhou and Danushka Bollegala. 2021. Learning sensespecific static embeddings using contextualised word embeddings as a proxy. In *Proceedings of the 35th* Pacific Asia Conference on Language, Information and Computation, pages 493–502, Shanghai, China.
Association for Computational Lingustics.
Yi Zhou, Masahiro Kaneko, and Danushka Bollegala.
2022. Sense embeddings are also biased - evaluating social biases in static and contextualised sense embeddings. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics
(Volume 1: Long Papers), pages 1924–1935, Dublin, Ireland. Association for Computational Linguistics.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
section 6
✓ A2. Did you discuss any potential risks of your work?
section 7
✓ A3. Do the abstract and introduction summarize the paper's main claims?
abstract and section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4
✓ B1. Did you cite the creators of artifacts you used?
section 3 and 4
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? section 3 and 4 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? section 4
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. section 4
## C ✓ **Did You Run Computational Experiments?** Section 4
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? Not applicable. Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? section 3 and 4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
section 4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)? section 4 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
yang-etal-2023-multimodal | Multimodal Prompt Learning for Product Title Generation with Extremely Limited Labels | https://aclanthology.org/2023.findings-acl.166 | Generating an informative and attractive title for the product is a crucial task for e-commerce. Most existing works follow the standard multimodal natural language generation approaches, e.g., image captioning, and employ the large scale of human-labelled datasets to train desirable models. However, for novel products, especially in a different domain, there are few existing labelled data. In this paper, we propose a prompt-based approach, i.e., the Multimodal Prompt Learning framework, to accurately and efficiently generate titles for novel products with limited labels. We observe that the core challenges of novel product title generation are the understanding of novel product characteristics and the generation of titles in a novel writing style. To this end, we build a set of multimodal prompts from different modalities to preserve the corresponding characteristics and writing styles of novel products. As a result, with extremely limited labels for training, the proposed method can retrieve the multimodal prompts to generate desirable titles for novel products. The experiments and analyses are conducted on five novel product categories under both the in-domain and out-of-domain experimental settings. The results show that, with only 1{\%} of downstream labelled data for training, our proposed approach achieves the best few-shot results and even achieves competitive results with fully-supervised methods trained on 100{\%} of training data; With the full labelled data for training, our method achieves state-of-the-art results. | # Multimodal Prompt Learning For Product Title Generation With Extremely Limited Labels
Bang Yang1∗
, Fenglin Liu2∗
, Zheng Li3†
, Qingyu Yin3, Chenyu You4, Bing Yin3**, Yuexian Zou**1†
1School of ECE, Peking University, China 2University of Oxford, United Kingdom 3Amazon.com Inc, Palo Alto, USA 4 Yale University, USA
{yangbang, zouyx}@pku.edu.cn; [email protected] [email protected]; {amzzhe, qingyy, alexbyin}@amazon.com
## Abstract
Generating an informative and attractive title for the product is a crucial task for e-commerce.
Most existing works follow the standard multimodal natural language generation approaches, e.g., image captioning, and employ the large scale of human-labelled datasets to train desirable models. However, for novel products, especially in a different domain, there are few existing labelled data. In this paper, we propose a prompt-based approach, i.e., the Multimodal Prompt Learning framework, to accurately and efficiently generate titles for novel products with limited labels. We observe that the core challenges of novel product title generation are the understanding of novel product characteristics and the generation of titles in a novel writing style. To this end, we build a set of multimodal prompts from different modalities to preserve the corresponding characteristics and writing styles of novel products. As a result, with extremely limited labels for training, the proposed method can retrieve the multimodal prompts to generate desirable titles for novel products. The experiments and analyses are conducted on five novel product categories under both the in-domain and out-of-domain experimental settings. The results show that, with only 1% of downstream labelled data for training, our proposed approach achieves the best few-shot results and even achieves competitive results with fully-supervised methods trained on 100% of training data; With the full labelled data for training, our method achieves state-of-the-art results.
## 1 Introduction
Product title generation aims to comprehend the content of a given product provided by merchants, which may come in various forms such as an input product image and a set of attributes, and then automatically generate an appealing and informative
∗Equal Contributions. Ordered by a coin toss. †Corresponding authors.
title. The generated title should contain essential product characteristics, along with the product details, e.g., brand name, category, style, size, material, and colour (Song et al., 2022; Mane et al.,
2020; Zhan et al., 2021). Therefore, a desirable title can highlight the characteristics and advantages of the product, leading to time savings for consumers, enhancing their overall shopping experience, and ultimately increasing product sales.
Admittedly, in E-commerce, the ability to perform product title generation automatically offers the possibility of relieving merchants from the timeconsuming analysis of complex product details and writing concise and appealing titles; and alerting merchants of important product characteristics and advantages (Chen et al., 2019; Zhang et al., 2019a; de Souza et al., 2018; Zhang et al., 2019b).
In general, the task of product title generation can be defined as a data-to-text problem. Following existing efforts on data-to-text tasks (Specia et al.,
2016; Hossain et al., 2019; Bahdanau et al., 2015),
Figure 1(a) shows the conventional product title generation approach: the encoder-decoder framework. The image encoder and attribute encoder respectively transform the product image and product attributes into visual and attribute representations, which the text decoder subsequently decodes into a product title. Such encoder-decoder-based methods have achieved great success in advancing the state-of-the-art of various data-to-text tasks, e.g., image captioning (Hossain et al., 2019; Shan et al., 2022), multimodal machine translation (Specia et al., 2016), and video captioning (Yang et al.,
2021; Yu et al., 2016). However, these methods rely on a large volume of annotated data, which is particularly time-consuming to collect. This issue is especially severe in the E-commerce title generation scenario, where products from different categories always contain category-specific attributes. Therefore, the product title generation model trained on existing products cannot be di-
![1_image_0.png](1_image_0.png)
rectly used on novel products, such as with new categories or new designs. Nevertheless, it is difficult to collect and label sufficient training data in a timely manner, which prevents the rapid deployment of such encoder-decoder models online.
As shown in Figure 1(b), we propose the Multimodal Prompt Learning (MPL) framework, which deals with the situation where the training data is scarce. In detail, we observe that novel product title involves different domain product characteristics (e.g., category-specific attributes) and different writing styles, directly adopting a model or transferring a model pre-trained on existing available product data to novel product data will significantly degrade the performance, especially when the labelled data (i.e., image-attribute-title pairs) is insufficient in quantity (Wang et al., 2019). To this end, we first construct a set of multimodal prompts from different modalities, i.e., visual prompts, attribute prompts, and language prompts. During training, given the limited data of novel products (i.e., Image I - Attribute A - Title T), to make full use of it, MPL introduces the unimodal prompt training to enable the different prompts to preserve the corresponding domain characteristics and the writing styles of novel products from different modalities/perspectives. In implementations, (i) we introduce the visual prompts PI to train the model by generating the title T in the I → PI → T pipeline;
(ii) we introduce the attributes prompts PA to train the model in the A → PA → T pipeline; (iii)
we introduces the textual language prompts PT to train the model by reconstructing the title T in the T → PT → T auto-encoding pipeline. It is worth noting that the auto-encoding pipeline aims to reconstruct the same input sentence, therefore, it is straightforward for the model to be trained (Wang et al., 2016; Tschannen et al., 2018) to learn the necessary domain characteristics and the writing styles of novel products via the small amount of data. Besides, the unsupervised auto-encoding process provides opportunities for our model to be further improved by incorporating more unlabelled text-only data (Nukrai et al., 2022). At last, MPL
introduces multimodal prompt training to learn to generate accurate novel product titles with the help of learned multimodal prompts. In the implementation, we first introduce a Cycle Alignment Network to highlight and capture the important characteristics from multiple modalities by cycle aligning three types of prompts; then take the input images I and attributes A of novel products as queries to retrieve the learned domain characteristics in the aligned prompts; and finally rely on the learned writing styles in the text decoder to generate the titles for the novel products.
In this way, the proposed MPL framework can accurately and efficiently generate novel product titles with limited training data by 1) introducing multimodal prompts to learn domain characteristics and writing styles of novel products; 2) learning to accurately highlight the product characteristics and advantages across multiple modalities. It enables our approach to be rapidly well-adapted to the novel product domain, helping sellers save time in deploying new products, optimizing consumers' consumption experience, and thus boosting sales. The experiments and analyses on a largescale dataset, i.e., Amazon Product Dataset (Ni et al., 2019), across five novel product categories prove the effectiveness of our approach.
Overall, the contributions are as follows:
- We propose the Multimodal Prompt Learning
(MPL) framework to generate few-shot novel product titles, where the training data in the novel product domain is scarce.
- Our MPL framework first introduces multiple types of prompts to learn the domain characteristics and writing styles of novel products, and then learns to generate accurate final titles by highlighting and capturing the important characteristics from multiple modalities.
- Our experiments on five novel products prove the effectiveness of our approach, which generates desirable product titles for novel products with only 1% of the training data otherwise required by previous methods, and significantly outperforms state-of-the-art results with the full training data.
## 2 Related Work
The related works are discussed from 1) Product Description and 2) Few-shot Learning.
## 2.1 Product Description
Generating the product titles to describe the given products is similar to the multimodal language generation tasks, e.g., image captioning (Xu et al., 2015; Chen et al., 2015; Liu et al., 2019) and multimodal machine translation (Specia et al., 2016).
To perform multimodal language generation tasks, a large number of encoder-decoder-based models have been proposed (Guo et al., 2022; Zhang et al.,
2023; Shan et al., 2022; Yang et al., 2021; Chen et al., 2015; Anderson et al., 2018; Yang et al., 2019; Cornia et al., 2020; Liu et al., 2020b; Zhu et al., 2023b,a), in which a CNN (Krizhevsky et al.,
2012) and a LSTM/Transformer (Hochreiter and Schmidhuber, 1997; Vaswani et al., 2017; Liu et al.,
2020a) is used as the image encoder and text encoder to encode the input images and texts, and an LSTM (Hochreiter and Schmidhuber, 1997) or a Transformer (Vaswani et al., 2017; Liu et al., 2020a) is used as the text encoder to generate the final sentences. Inspired by the great success of an encoder-decoder framework in multimodal language generation tasks, existing efforts on product description have proposed a wide variety of encoder-decoder based frameworks (Song et al.,
2022; Zhang et al., 2019a; Mane et al., 2020; Zhan et al., 2021; Chan et al., 2020; Zhang et al., 2019b; Gong et al., 2019; de Souza et al., 2018; Chen et al.,
2019) to describe given products. However, these existing models are trained on large-scale datasets, while collecting data on novel products, e.g., novel categories and novel designs, to train the models is typically very limited. To this end, we propose multimodal prompt learning to relax the reliance on the training dataset for the few-shot novel product description - with the goal of quick deployment of new products.
## 2.2 Few-Shot Learning
Recently, few-shot learning (Wang et al., 2020) has received growing research interest across many AI
domains (Dhillon et al., 2020; Tian et al., 2020; Perez et al., 2021; Gu et al., 2022; Gao et al., 2021; Tsimpoukelli et al., 2021; Zha et al., 2022; Wang et al., 2022a; Li et al., 2021; Huang et al., 2022; Wang et al., 2022b; Li et al., 2020; Zhang et al.,
2021). Inspired by the success of few-shot learning, several works (Liu et al., 2021; Sreepada and Patra, 2020; Gong et al., 2020; Zhou et al., 2022a; Xu et al., 2021) explored such an approach for the domain of E-commerce. However, most focus on unimodal tasks, either on the graph data
(e.g., node classification, recommendation) (Liu et al., 2021; Sreepada and Patra, 2020; Wang et al.,
2022a; Li et al., 2020; Wang et al., 2022b; Huang et al., 2022), or on the text data (e.g., sentiment analysis and recommendation) (Gong et al., 2020; Xu et al., 2021; Zha et al., 2022), or on the image data (e.g., image classification) (Zhou et al., 2022a).
As a multimodal task incorporating disparities between the visual and the textual modalities (Liang et al., 2022), few-shot product title generation is far more challenging. To prove our hypothesis, we reimplement existing few-shot learning methods for novel product title generation, demonstrating with our experiments that our approach significantly outperforms existing methods.
## 3 Approach
In this section, we will introduce the proposed Multimodal Prompt Learning (MPL) method in detail.
## 3.1 Formulation
Given the basic product information, i.e., product image I and product attribute A, the goal of product title generation is to generate an accurate and concise product title T = {w1, w2*, . . . , w*N }, in2654 cluding N words. Current state-of-the-art methods usually consist of an image encoder and a text encoder to extract the image representations RI and attribute representations RA, and a text decoder to generate the target title T, which is formulated as:
Image Encoder : $I\to R_{I}$; Attribute Encoder : $A\to R_{A}$; Text Decoder : $\{R_{I},R_{A}\}\to T$.
Existing works rely on the annotated data imageattribute-title pairs to train the model by minimizing a supervised training loss, e.g., cross-entropy loss. However, for many novel products, only a small amount of data is available. In this case, we have to collect sufficient data to train the model, while collecting and labelling data is particularly labour-intensive and expensive. As a result, insufficient training data poses a great challenge for building models to describe novel products.
To this end, we propose the MPL generation framework to generate accurate and desirable titles when encountering a novel product. MPL includes two components: Unimodal Prompt Training (UPT) and Multimodal Prompt Training (MPT),
where the former introduces three types of prompts
(visual prompts PI , attribute prompts PA, and textual language prompts PT ), and the latter includes a cycle alignment network. Our proposed framework can be formulated as:
$$\mathbf{U}\mathbf{P}\mathbf{T}$$
Visual Prompts: $I\to{\cal P}_{I}\to T$ Attribute Prompts: $A\to{\cal P}_{A}\to T$ Language Prompts: $T\to{\cal P}_{T}\to T$ (2) Cycle Alignment: $\{{\cal P}_{I},{\cal P}_{A},{\cal P}_{T}\}\to\hat{\cal P}$ Aligned Prompts: $\{I,A\}\to\hat{\cal P}\to T$
$$\mathrm{MPT}$$
The prompts across different modalities are used to learn the novel product domain characteristics from the limited available data in the UPT and then are used by the cycle alignment network to highlight and capture the important characteristics Pˆ, which is retrieved by the image and attributes to learn to generate novel product titles T in the MPT. We adopt the ViT (He et al., 2016) from CLIP (Radford et al., 2021) as the image encoder and the BERT
(Devlin et al., 2019) from CLIP (Radford et al.,
2021) as the attribute/text encoder. For the text decoder, we adopt the Transformer-BASE (Vaswani et al., 2017; Liu et al., 2020a). In particular, CLIP
and Transformer have shown great success in bridging/aligning multi-modalities (Nukrai et al., 2022) and image-based natural language generation (Cornia et al., 2020), respectively. During inference, we directly follow the {I, A} → P →ˆ T pipeline to generate final novel product titles.
## 3.2 Multimodal Prompt Learning
When encountering a new product, the deep learning model usually suffers from significant performance degradation (Alyafeai et al., 2020; Pan and Yang, 2010; Zhuang et al., 2021), which is caused by the new domain characteristics and new writing styles of the novel product. Therefore, to efficiently train and deploy the data-driven deep learning models on a few samples of novel products, we propose the Multimodal Prompt Learning framework, consisting of a Unimodal Prompt Training module and a Multimodal Prompt Training module.
## 3.2.1 Unimodal Prompt Training
The module introduces visual prompts, attribute prompts, and textual language prompts to learn the novel product domain characteristics and the writing styles. We first acquire the representations of image RI , attribute RA, and title RT . Then, we build three sets of trainable soft prompts (Li and Liang, 2021; Qin and Eisner, 2021; Gu et al.,
2022; Zhou et al., 2022b): visual prompts PI , attribute prompts PA, and textual language prompts PT . The dimensions of different prompts are all NP × d, where NP denotes the total number of soft prompts, which are used to learn and store the new characteristics of the novel product through our method, defined as follows:
$$\hat{\mathcal{P}}_{I}=[\mathcal{P}_{I};R_{I}],\hat{\mathcal{P}}_{A}=[\mathcal{P}_{A};R_{A}],\hat{\mathcal{P}}_{T}=[\mathcal{P}_{T};R_{T}]\tag{3}$$
[·; ·] denotes the concatenation operation. Then, the prompts of images, attributes, and titles are directly inputted to the decoder as prefixes to train the model by generating (i.e., reconstructing) the titles.
Given the ground truth T = {w1, w2*, . . . , w*N },
we train the model by minimizing the widely-used natural language generation loss, i.e., cross-entropy loss, defined as follows:
$$L_{\text{XE}}^{I}=-\sum_{t=1}^{N}\log\left(p(w_{t}\mid w_{1:t-1};\widehat{P}_{I},I)\right)$$ $$L_{\text{XE}}^{A}=-\sum_{t=1}^{N}\log\left(p(w_{t}\mid w_{1:t-1};\widehat{P}_{A},A)\right)\tag{4}$$ $$L_{\text{XE}}^{T}=-\sum_{t=1}^{N}\log\left(p(w_{t}\mid w_{1:t-1};\widehat{P}_{T},T)\right)$$
Finally, by combining the L
IXE, L
A
XE, and L
TXE,
the full training objective of the Unimodal Prompt
![4_image_0.png](4_image_0.png)
## Training Process Is:
$${\cal L}_{\mathrm{full}}=\lambda_{1}{\cal L}_{\mathrm{XE}}^{I}+\lambda_{2}{\cal L}_{\mathrm{XE}}^{A}+\lambda_{3}{\cal L}_{\mathrm{XE}}^{T}$$
XE (5)
where λ1,2,3 ∈ [0, 1] is the hyperparameters that controls the regularization. We find that our approach can achieve competitive results with the state-of-the-art models with only 1% training data when setting λ1 = λ2 = λ3 = 1, thus we do not attempt to explore other settings.
Through the above operation, our Unimodal Prompt Training process can enable the model to learn the domain characteristics and the writing styles of novel products on a small amount of data.
It is worth noting that the auto-encoding process in L
TXE, which reconstructs the input titles, is unsupervised. It indicates that our method 1) can be further improved by using more large-scale unlabeled texts; 2) can control the style of the generated titles by adjusting the style of input titles; and 3)
can continuously learn from newly added texts of novel products to boost the performance as novel products are developed.
## 3.2.2 Multimodal Prompt Training
After learning the novel domain characteristics and the new writing styles of novel products in the Unimodal Prompt Training process, we further propose the Multimodal Prompt Training process to train the framework, learning to capture the important characteristics in different prompts and describe the novel product based on the input image and attributes of the novel product. In implementations, we first extract the representations of input image RI and input attributes RA. Then, to boost performance, we propose to capture important characteristics and filter noisy characteristics from the visual prompts PI , attribute prompts PA, and language prompts PT . Considering that important characteristics will appear in the three prompts simultaneously, we introduce the Cycle Alignment Network to perform cycle alignment of different prompts. As shown in Figure 2, we take the visual prompts PI as a 'query' to retrieve the related novel product characteristics preserved in visual prompts PI , attribute prompts PA, and language prompts PT :
$\mathcal{P}_{I\to I}=\alpha\mathcal{P}_{I}=\sum_{k=1}^{N_{\mathrm{P}}}\alpha_{k}p_{k},$ where $\alpha=\mathrm{softmax}(\mathcal{P}_{I}\mathcal{P}_{I}^{\top})$ $\mathcal{P}_{I\to A}=\beta\mathcal{P}_{A}=\sum_{k=1}^{N_{\mathrm{P}}}\beta_{k}p_{k},$ where $\beta=\mathrm{softmax}(\mathcal{P}_{I}\mathcal{P}_{A}^{\top})$ $\mathcal{P}_{I\to T}=\gamma\mathcal{P}_{T}=\sum_{k=1}^{N_{\mathrm{P}}}\gamma_{k}p_{k},$ where $\gamma=\mathrm{softmax}(\mathcal{P}_{I}\mathcal{P}_{T}^{\top})$ Similarly, we can take the attribute prompts $\mathcal{P}_{A}$
$$({\mathfrak{H}})$$
Similarly, we can take the attribute prompts PA
and language prompts PT as a 'query' to retrieve the related novel product characteristics across different modalities, acquiring PA→A, PA→I , PA→T , PT→T , PT→I , PT→A. Then, we can obtain the aligned prompts Pˆ by concatenating them. Finally, given the ground truth titles T = {w1, w2*, . . . , w*N }, we again adopt the crossentropy loss to train our framework to generate the final novel product titles based on Pˆ:
$$L_{\text{XE}}=-\sum_{t=1}^{N}\log\left(p(w_{t}\mid w_{1:t-1};\hat{\mathcal{P}},I,A)\right).\tag{6}$$ During inference, we follow the $\{I,A\}\rightarrow\hat{\mathcal{P}}\to T$
pipeline to generate titles of the test products. In this way, our MPL framework can relax the reliance on large-scale annotated datasets and achieve competitive results with previous works with only 1%
training data.
## 4 Experiments
In this section, we first describe a large-scale dataset, the widely-used metrics, and the settings used for evaluation. Then, we present the results of in-domain and out-of-domain experiments.
## 4.1 Datasets, Metrics, And Settings
Datasets We evaluate our proposed framework on a publicly available dataset, i.e., Amazon Product Dataset (Ni et al., 2019), which consists of
| Settings | Training Data | Testing Data |
|---------------------------|--------------------------|---------------------|
| Out-of-Domain | Natural Images and Texts | Novel Products: PLG |
| Product Images and Texts: | PS | |
| CSJ + HK + 'Electronics'+ | AF | |
![5_image_0.png](5_image_0.png)
Out-of-Domain Natural Images and Texts **Novel Products:**
![5_image_1.png](5_image_1.png)
around 15M products. For data preparation, we first exclude entries without images/attributes/titles, which results in around 5.2M products across 15 categories. The detailed statistics are summarized in the supplementary material. We randomly partition the dataset into 70%-20%-10%
train-validation-test partitions according to products. Therefore, there is no overlap of products between train, validation, and test sets.
Metrics Following common practice in multimodal language generation tasks (Hossain et al., 2019; Specia et al., 2016), we adopt the widelyused generation metrics, i.e., BLEU-4 (Papineni et al., 2002), ROUGE-L (Lin, 2004), and CIDEr
(Vedantam et al., 2015), which measure the match between the generated and ground truth sentences.
Implementations We follow the state-of-theart method CLIP (Radford et al., 2021), which has shown great success on various multimodal tasks.
Therefore, we adopt CLIP as our base model. In particular, the ViT (Dosovitskiy et al., 2021) is used as the image encoder, the BERT (Devlin et al., 2019) is used as the attribute/text encoder, and the Transformer-BASE (Vaswani et al., 2017) is used as the text decoder. The model size d is set to 512.
Based on the average performance on the validation set, the number of prompts NP is set to 16.
For optimization, we adopt the AdamW optimizer
(Loshchilov and Hutter, 2019) with a batch size of 128 and a learning rate of 1e-4. We perform early stopping based on CIDEr. We apply a beam search of size 3 for inference. Our framework is trained on 4 V100 GPUs using mixed-precision training
(Micikevicius et al., 2018).
Settings As shown in Table 1, we perform the out-of-domain and in-domain experiments.
- *Out-of-Domain Experiments* are conducted by directly transferring the CLIP pre-trained on natural images and texts datasets, such as MSCOCO (Chen et al., 2015), WIT (Deng et al., 2009), and Conceptual Captions (Soricut et al., 2018), to the novel products.
- *In-Domain Experiments* are conducted by pretraining the models on the top ten products in terms of quantity and then testing on the remaining five novel products. Therefore, there is no overlap of products between training and testing sets.
To improve the evaluation significantly, we further re-implement five state-of-the-art fully-supervised multimodal language generation methods, i.e.,
KOBE (Chen et al., 2019), CLIP-Captioning (Radford et al., 2021), M2-Transformer (Cornia et al.,
2020), X-Transformer (Pan et al., 2020), and LVPM3(Guo et al., 2022), in which the KOBE is specifically designed for E-commerce, and two previous few-shot learning methods, i.e., VL-BART (Cho et al., 2021) and VL-ADAPTER (Sung et al., 2022),
in our experiments.
## 4.2 Out-Of-Domain Results
The results are reported in Table 2, which shows the superior performance of our approach. As we can see, our framework outperforms previous few-shot learning methods by an average of 3.76% BLEU-4, 7.9% ROUGE-L, and 10.46% CIDEr scores. Therefore, our MPL framework not only significantly outperforms previous few-shot learning methods, but also achieves competitive results with existing state-of-the-art fully-supervised methods trained on 100% training data with 1% training data. It enables our framework to provide a solid bias for novel product title generation, helping sellers save time in deploying new products. As a result, with full training data, our method achieves the best results across different novel products. The performances prove the validity of our method in learning the domain characteristics and the writing styles of novel products, thus relaxing the dependency on the training data to generate accurate titles for novel products with lesser annotated data.
## 4.3 In-Domain Results
Table 3 shows that under the in-domain setting, with only 1% training data, our MPL framework
| Settings | Methods | Ratio of Data |
|---------------------------------------|-----------|-----------------|
| Supervised Learning Few-shot Learning | | |
Settings Methods Ratio of
Data
PLG PS AF IS GGF
B-4 R-L C B-4 R-L C B-4 R-L C B-4 R-L C B-4 R-L C
SupervisedLearning
X-Transformer (Pan et al., 2020) 100% 9.8 17.6 20.5 9.5 16.0 22.9 8.1 13.8 17.8 6.5 11.7 13.4 5.4 8.1 11.5
M2-Transformer (Cornia et al., 2020) 100% 10.3 18.4 22.0 9.7 16.6 23.3 8.4 14.5 19.5 6.6 11.4 13.0 5.2 7.8 10.4
KOBE (Chen et al., 2019) 100% **12.1** 20.4 25.0 **11.4** 19.7 26.1 10.0 17.9 22.9 7.1 13.0 15.3 6.0 10.6 13.3
LVP-M3(Guo et al., 2022) 100% 11.3 19.7 23.0 10.7 20.5 26.8 10.2 18.4 23.6 7.6 14.1 16.9 6.5 11.2 13.4
CLIP-Captioning (Radford et al., 2021) 100% 11.9 **20.9** 24.7 **11.4** 20.3 **27.3** 10.6 19.3 24.4 8.0 14.7 17.8 6.2 11.8 15.6
VL-BART (Cho et al., 2021) 1% 5.9 11.0 12.2 6.1 11.0 12.9 5.7 9.3 12.3 5.6 8.7 9.8 4.7 7.5 7.8
VL-ADAPTER (Sung et al., 2022) 1% 6.7 12.6 13.5 5.7 10.0 13.9 6.5 10.4 13.0 5.2 9.6 10.6 4.6 7.8 8.7 CLIP-Captioning (Radford et al., 2021) 1% 7.1 13.2 15.4 6.2 10.3 13.4 6.9 12.0 13.6 5.6 9.1 10.9 5.0 8.2 8.8
MPL (Ours) 1% 11.5 20.4 **25.3** 10.9 **20.8** 26.1 11.0 20.5 27.7 8.9 16.4 19.0 7.3 14.7 **16.8**
100% 13.5 22.7 30.6 12.8 22.0 29.7 14.1 24.5 33.3 10.6 20.1 23.8 10.1 18.4 **21.9**
Table 2: Results of out-of-domain experiments on five novel products (see Table 1). B-4, R-L, and C are short for BLEU-4, ROUGE-L, and CIDEr, respectively. Higher is better in all columns. The Red- and the Blue- coloured numbers denote the best and the second-best results across all methods, respectively.
Settings Methods Ratio of
Data
PLG PS AF IS GGF
B-4 R-L C B-4 R-L C B-4 R-L C B-4 R-L C B-4 R-L C
SupervisedLearning
X-Transformer (Pan et al., 2020) 100% 12.1 22.0 27.1 12.4 21.3 29.0 11.5 19.9 27.8 8.5 16.1 17.3 6.0 10.6 13.9 M2-Transformer (Cornia et al., 2020) 100% 12.5 21.4 26.7 12.1 20.6 28.8 11.4 20.6 28.1 8.9 16.7 18.5 6.2 11.5 14.0
KOBE (Chen et al., 2019) 100% 13.9 22.8 30.6 15.8 25.9 35.0 13.2 21.5 31.0 9.8 17.3 20.1 7.5 14.7 16.2
LVP-M3(Guo et al., 2022) 100% 13.4 21.9 30.1 14.2 24.8 33.7 14.0 23.1 31.8 10.1 17.9 20.6 8.0 16.3 17.7
CLIP-Captioning (Radford et al., 2021) 100% 14.2 23.5 31.7 15.0 25.2 34.6 13.9 **23.6** 32.3 10.4 18.7 **21.8** 8.5 16.6 18.7 VL-BART (Cho et al., 2021) 1% 6.5 12.3 14.4 6.8 12.5 14.2 6.6 10.9 13.3 6.5 11.0 12.5 5.1 9.8 12.4
VL-ADAPTER (Sung et al., 2022) 1% 7.4 14.0 15.4 6.7 12.2 14.7 6.9 11.5 14.1 6.6 11.0 12.9 5.8 10.3 12.9
CLIP-Captioning (Radford et al., 2021) 1% 7.5 13.7 16.0 7.1 12.9 15.1 7.5 12.9 14.5 7.0 11.2 13.3 6.2 10.7 13.0
MPL (Ours) 1% 12.6 22.4 27.0 12.9 23.3 30.1 13.4 23.5 **32.5** 9.7 17.4 20.5 8.8 17.1 **19.2**
100% 14.9 24.0 32.5 14.6 24.9 35.0 15.3 24.7 34.2 13.5 23.8 27.4 11.0 19.5 **23.6**
| Settings | Methods | Ratio of |
|---------------------------------------|-----------|------------|
| Supervised Learning Few-shot Learning | | |
can surpass several state-of-the-art fully-supervised methods, e.g., X-Transformer (Pan et al., 2020)
and M2-Transformer (Cornia et al., 2020), and significantly outperforms previous few-shot methods across all products on all metrics. Meanwhile, with 100% training data as in previous works, our approach achieves average 1.46%, 1.86%, and 2.72%
absolute margins to current best results produced by CLIP (Radford et al., 2021) in terms of BLEU4, ROUGE-L, and CIDEr, respectively. The best results validate the effectiveness of our approach in producing higher-quality product titles, under both the few-shot and supervised experimental settings, verifying its generalization capabilities.
## 5 Analysis
In this section, we conduct several analyses under the out-of-domain setting to better understand our proposed approach,
## 5.1 Ablation Study
We perform the ablation study of our MPL framework to show how our approach achieves competitive results with previous works with only 1% training data. The results in Table 4 show that our unimodal prompt training and multimodal prompt training of the framework all contribute to improved performances. It proves our arguments and the effectiveness of each proposed component. In detail, by comparing (a-c) and Base, we can observe that the language prompts lead to the best improvements in the few-shot learning setting. It may be explained by the fact that the language prompts PT are used to reconstruct the original same input sentence, it is straightforward for the model to be trained through auto-encoding to learn the necessary domain characteristics and the writing styles using a small amount of data in the few-shot setting.
Meanwhile, the visual prompts PI lead to the best improvements in the supervised learning setting. It means that when the training data is sufficient, it is important to further capture accurate and rich visual information from the product's image to generate a desirable and concise title. We observe an overall improvement in setting (d) by combining the three unimodal prompts, which can improve performance from different perspectives. Table 4 (d) and MPL show that the MPT, which includes a cycle alignment network, can bring improvements on all metrics. It proves the effectiveness of highlighting
| Settings | UPT | MPT | Few-shot (1%) | Supervised (100%) | | | | | | |
|------------|-------|-------|-----------------|---------------------|------|------|------|------|------|------|
| PI | PA | PT | Cycle Alignment | B-4 | R-L | C | B-4 | R-L | C | |
| Base | 5.0 | 8.2 | 8.8 | 6.2 | 11.8 | 15.6 | | | | |
| (a) | √ | 5.4 | 8.9 | 10.5 | 8.0 | 14.8 | 18.0 | | | |
| (b) | √ | 5.6 | 9.4 | 10.7 | 6.8 | 12.9 | 16.3 | | | |
| (c) | √ | 6.0 | 10.7 | 12.6 | 7.3 | 13.5 | 16.7 | | | |
| (d) | √ | √ | √ | 6.4 | 12.9 | 13.5 | 8.4 | 15.6 | 19.2 | |
| MPL | √ | √ | √ | √ | 7.3 | 14.7 | 16.8 | 10.1 | 18.4 | 21.9 |
![7_image_0.png](7_image_0.png)
and capturing important characteristics by aligning prompts across multiple modalities to improve performances under both few-shot and supervised settings.
## 5.2 Qualitative Analysis
Figure 3 gives an example to better understand our method. As shown in the Blue-colored text, our method is significantly better aligned with ground truth than CLIP. For example, our framework correctly describes the key characteristics, e.g., the brand name "*Lenox*" and the category "wedding cake", and advantages, e.g., "*tasty cake*". However, the CLIP generates several wrong words (Redcolored text) and can not well describe the products.
More importantly, the visualization of the prompts shows that our approach can accurately learn the novel product domain characteristics to boost the generation of novel product titles. For example, the visual prompts can accurately capture the "*cake*",
especially the attribute prompts can correctly capture the brand name "*Lenox*" and characteristics
"*bride and groom*", and the language prompts can capture the "*tasty*" and "*wedding*" according to the
"*cake*" and "*bride and groom*", respectively.
Overall, it qualitatively proves that our approach can capture important domain characteristics of novel products by multimodal prompt learning. It results in achieving competitive results with the previous supervised method CLIP with only 1%
labelled data for training, which qualitatively verifies the effectiveness of our approach in novel title generation with extremely limited labels.
## 6 Conclusion
In this paper, we present the Multimodal Prompt Learning (MPL) framework to accurately and efficiently generate titles of novel products with limited training data. Our MPL introduces various prompts across different modalities to sufficiently learn novel domain characteristics and writing styles, which are aligned and exploited to generate desirable novel product titles. The out-ofdomain and in-domain experiments on a large-scale dataset across five novel product categories show that, with only 1% downstream labelled data for training, our approach achieves competitive results with fully-supervised methods. Moreover, with the full training data used in previous works, our method significantly sets the state-of-the-art performance, which proves the effectiveness of our approach and shows its potential to deploy novel products online in time to boost product sales.
## Limitations
This paper introduces the problem of few-shot novel product title generation to efficiently and accurately generate informative and appealing titles for novel products with limited labeled data.
However, the training of our proposed model relies on the paired image-attribute-title data, which may not be easily obtained simultaneously in the real world. Therefore, our model may not work well when high-quality image data or textual profile is missing. The limitations could be alleviated using techniques such as knowledge distillation or self-training. Besides, the writing styles of the generated titles are highly correlated with the training data. Hence, it requires specific and appropriate treatment by experienced practitioners, when deploying new products online.
## Ethics Statement
We conduct the experiments on the public dataset, which is exclusively about E-commerce and does not contain any information that names or uniquely identifies individual people or offensive content.
Therefore, we ensure that our paper conforms to the ethics review guidelines.
## Acknowledgements
This paper was partially supported by NSFC (No:
62176008) and Shenzhen Science & Technology Research Program (No: GXWD2020123116580700720200814115301001).
## References
Zaid Alyafeai, Maged Saeed AlShaibani, and Irfan Ahmad. 2020. A survey on transfer learning in natural language processing. *arXiv preprint* arXiv:2007.04239.
Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang.
2018. Bottom-up and top-down attention for image captioning and VQA. In *CVPR*.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In *ICLR*.
Zhangming Chan, Yuchi Zhang, Xiuying Chen, Shen Gao, Zhiqiang Zhang, Dongyan Zhao, and Rui Yan.
2020. Selection and generation: Learning towards multi-product advertisement post generation. In EMNLP.
Qibin Chen, Junyang Lin, Yichang Zhang, Hongxia Yang, Jingren Zhou, and Jie Tang. 2019. Towards knowledge-based personalized product description generation in e-commerce. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C. Lawrence Zitnick. 2015. Microsoft COCO captions: Data collection and evaluation server. *arXiv* preprint arXiv:1504.00325.
Jaemin Cho, Jie Lei, Hao Tan, and Mohit Bansal. 2021.
Unifying vision-and-language tasks via text generation. In *ICML*.
Marcella Cornia, Matteo Stefanini, Lorenzo Baraldi, and Rita Cucchiara. 2020. Meshed-memory transformer for image captioning. In *CVPR*.
José GC de Souza, Michael Kozielski, Prashant Mathur, Ernie Chang, Marco Guerini, Matteo Negri, Marco Turchi, and Evgeny Matusov. 2018. Generating ecommerce product titles and predicting their quality.
In Proceedings of the 11th international conference on natural language generation.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Fei-Fei Li. 2009. Imagenet: A large-scale hierarchical image database. In *CVPR*.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *NAACL-HLT*.
Guneet Singh Dhillon, Pratik Chaudhari, Avinash Ravichandran, and Stefano Soatto. 2020. A baseline for few-shot image classification. In *ICLR*.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An image is worth 16x16 words: Transformers for image recognition at scale. In *ICLR*.
Tianyu Gao, Adam Fisch, and Danqi Chen. 2021.
Making pre-trained language models better few-shot learners. In *ACL/IJCNLP*.
Hao Gong, Qifang Zhao, Tianyu Li, Derek Cho, and DuyKhuong Nguyen. 2020. Learning to profile:
User meta-profile network for few-shot learning. In CIKM.
Yu Gong, Xusheng Luo, Kenny Q Zhu, Wenwu Ou, Zhao Li, and Lu Duan. 2019. Automatic generation of chinese short product titles for mobile display. In Proceedings of the AAAI Conference on Artificial Intelligence.
Yuxian Gu, Xu Han, Zhiyuan Liu, and Minlie Huang.
2022. PPT: pre-trained prompt tuning for few-shot learning. In ACL.
Hongcheng Guo, Jiaheng Liu, Haoyang Huang, Jian Yang, Zhoujun Li, Dongdong Zhang, and Furu Wei.
2022. LVP-M3: language-aware visual prompt for multilingual multimodal machine translation. In EMNLP.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In *CVPR*.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural Comput.*, 9(8):1735–
1780.
MD Zakir Hossain, Ferdous Sohel, Mohd Fairuz Shiratuddin, and Hamid Laga. 2019. A comprehensive survey of deep learning for image captioning. ACM
Computing Surveys (CsUR), 51(6):1–36.
Zijie Huang, Zheng Li, Haoming Jiang, Tianyu Cao, Hanqing Lu, Bing Yin, Karthik Subbian, Yizhou Sun, and Wei Wang. 2022. Multilingual knowledge graph completion with self-supervised adaptive graph alignment. In ACL.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. Imagenet classification with deep convolutional neural networks. In *NIPS*.
Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning:
Optimizing continuous prompts for generation. In ACL/IJCNLP.
Zheng Li, Mukul Kumar, William Headden, Bing Yin, Ying Wei, Yu Zhang, and Qiang Yang. 2020. Learn to cross-lingual transfer with meta graph learning across heterogeneous languages. In *EMNLP*.
Zheng Li, Danqing Zhang, Tianyu Cao, Ying Wei, Yiwei Song, and Bing Yin. 2021. Metats: Meta teacherstudent network for multilingual sequence labeling with minimal supervision. In *EMNLP*.
Weixin Liang, Yuhui Zhang, Yongchan Kwon, Serena Yeung, and James Zou. 2022. Mind the gap:
Understanding the modality gap in multi-modal contrastive representation learning. arXiv preprint arXiv:2203.02053.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In ACL.
Fenglin Liu, Yuanxin Liu, Xuancheng Ren, Xiaodong He, and Xu Sun. 2019. Aligning visual regions and textual concepts for semantic-grounded image representations. In *NeurIPS*.
Fenglin Liu, Xuancheng Ren, Zhiyuan Zhang, Xu Sun, and Yuexian Zou. 2020a. Rethinking skip connection with layer normalization. In *COLING*.
Fenglin Liu, Xian Wu, Shen Ge, Wei Fan, and Yuexian Zou. 2020b. Federated learning for vision-andlanguage grounding problems. In *AAAI*.
Zemin Liu, Yuan Fang, Chenghao Liu, and Steven CH
Hoi. 2021. Relative and absolute location embedding for few-shot node classification on graph. In *AAAI*.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *ICLR*.
Mansi Ranjit Mane, Shashank Kedia, Aditya Mantha, Stephen Guo, and Kannan Achan. 2020. Product title generation for conversational systems using bert. arXiv preprint arXiv:2007.11768.
Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory F. Diamos, Erich Elsen, David García, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, and Hao Wu. 2018. Mixed precision training. In *ICLR*.
Jianmo Ni, Jiacheng Li, and Julian J. McAuley.
2019. Justifying recommendations using distantlylabeled reviews and fine-grained aspects. In EMNLP/IJCNLP.
David Nukrai, Ron Mokady, and Amir Globerson. 2022.
Text-only training for image captioning using noiseinjected clip. *arXiv preprint arXiv:2211.00575*.
Sinno Jialin Pan and Qiang Yang. 2010. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering.
Yingwei Pan, Ting Yao, Yehao Li, and Tao Mei. 2020.
X-linear attention networks for image captioning. In CVPR.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a Method for automatic evaluation of machine translation. In ACL.
Ethan Perez, Douwe Kiela, and Kyunghyun Cho. 2021.
True few-shot learning with language models. In NeurIPS.
Guanghui Qin and Jason Eisner. 2021. Learning how to ask: Querying lms with mixtures of soft prompts. In NAACL.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. *arXiv preprint arXiv:2103.00020*.
Bin Shan, Yaqian Han, Weichong Yin, Shuohuan Wang, Yu Sun, Hao Tian, Hua Wu, and Haifeng Wang. 2022.
Ernie-unix2: A unified cross-lingual cross-modal framework for understanding and generation. *arXiv* preprint arXiv:2211.04861.
Xuemeng Song, Liqiang Jing, Dengtian Lin, Zhongzhou Zhao, Haiqing Chen, and Liqiang Nie. 2022. V2P:
vision-to-prompt based multi-modal product summary generation. In *SIGIR*.
Radu Soricut, Nan Ding, Piyush Sharma, and Sebastian Goodman. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In ACL.
Lucia Specia, Stella Frank, Khalil Sima'An, and Desmond Elliott. 2016. A shared task on multimodal machine translation and crosslingual image description. In *Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers*,
pages 543–553.
Rama Syamala Sreepada and Bidyut Kr Patra. 2020.
Mitigating long tail effect in recommendations using few shot learning technique. *Expert Systems with* Applications, 140:112887.
Yi-Lin Sung, Jaemin Cho, and Mohit Bansal. 2022. VLADAPTER: parameter-efficient transfer learning for vision-and-language tasks. In *CVPR*.
Yonglong Tian, Yue Wang, Dilip Krishnan, Joshua B.
Tenenbaum, and Phillip Isola. 2020. Rethinking fewshot image classification: A good embedding is all you need? In *ECCV*.
Michael Tschannen, Olivier Bachem, and Mario Lucic.
2018. Recent advances in autoencoder-based representation learning. *arXiv preprint arXiv:1812.05069*.
Maria Tsimpoukelli, Jacob Menick, Serkan Cabi, S. M. Ali Eslami, Oriol Vinyals, and Felix Hill. 2021.
Multimodal few-shot learning with frozen language models. In *NeurIPS*, pages 200–212.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *NIPS*.
Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In *CVPR*.
Ruijie Wang, Zheng Li, Dachun Sun, Shengzhong Liu, Jinning Li, Bing Yin, and Tarek F. Abdelzaher. 2022a.
Learning to sample and aggregate: Few-shot reasoning over temporal knowledge graphs. In *NeurIPS*.
Ruijie Wang, Zheng Li, Danqing Zhang, Qingyu Yin, Tong Zhao, Bing Yin, and Tarek F. Abdelzaher.
2022b. RETE: retrieval-enhanced temporal event forecasting on unified query product evolutionary graph. In WWW.
Yaqing Wang, Quanming Yao, James T. Kwok, and Lionel M. Ni. 2020. Generalizing from a few examples:
A survey on few-shot learning. *ACM Computing* Surveys.
Yasi Wang, Hongxun Yao, and Sicheng Zhao. 2016.
Auto-encoder based dimensionality reduction. *Neurocomputing*, 184:232–242.
Zirui Wang, Zihang Dai, Barnabás Póczos, and Jaime G.
Carbonell. 2019. Characterizing and avoiding negative transfer. In *CVPR*.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S.
Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In *ICML*.
Liang Xu, Xiaojing Lu, Chenyang Yuan, Xuanwei Zhang, Huilin Xu, Hu Yuan, Guoao Wei, Xiang Pan, Xin Tian, Libo Qin, et al. 2021. Fewclue: A chinese few-shot learning evaluation benchmark. *arXiv* preprint arXiv:2107.07498.
Bang Yang, Yuexian Zou, Fenglin Liu, and Can Zhang.
2021. Non-autoregressive coarse-to-fine video captioning. In *AAAI*.
Xu Yang, Kaihua Tang, Hanwang Zhang, and Jianfei Cai. 2019. Auto-encoding scene graphs for image captioning. In *CVPR*.
Chenyu You, Weicheng Dai, Yifei Min, Xiaoran Zhang, David A Clifton, S Kevin Zhou, Lawrence Hamilton Staib, and James S Duncan. 2023. Rethinking semi-supervised medical image segmentation:
A variance-reduction perspective. arXiv preprint arXiv:2302.01735.
Chenyu You, Weicheng Dai, Lawrence Staib, and James S Duncan. 2022a. Bootstrapping semisupervised medical image segmentation with anatomical-aware contrastive distillation. *arXiv* preprint arXiv:2206.02307.
Chenyu You, Weicheng Dai, Haoran Su, Xiaoran Zhang, Lawrence Staib, and James S Duncan. 2022b. Mine your own anatomy: Revisiting medical image segmentation with extremely limited labels. *arXiv* preprint arXiv:2209.13476.
Chenyu You, Ruihan Zhao, Siyuan Dong, Sandeep P
Chinchali, Lawrence Hamilton Staib, James s Duncan, et al. 2022c. Class-aware adversarial transformers for medical image segmentation. In *NeurIPS*.
Chenyu You, Ruihan Zhao, Lawrence H Staib, and James S Duncan. 2022d. Momentum contrastive voxel-wise representation learning for semisupervised volumetric medical image segmentation.
In *MICCAI*.
Chenyu You, Yuan Zhou, Ruihan Zhao, Lawrence Staib, and James S Duncan. 2022e. Simcvd: Simple contrastive voxel-wise representation distillation for semi-supervised medical image segmentation. IEEE
Transactions on Medical Imaging, 41(9):2228–2237.
Haonan Yu, Jiang Wang, Zhiheng Huang, Yi Yang, and Wei Xu. 2016. Video paragraph captioning using hierarchical recurrent neural networks. In *CVPR*.
Juan Zha, Zheng Li, Ying Wei, and Yu Zhang. 2022.
Disentangling task relations for few-shot text classification via self-supervised hierarchical task clustering.
In *EMNLP (Findings)*.
Haolan Zhan, Hainan Zhang, Hongshen Chen, Lei Shen, Zhuoye Ding, Yongjun Bao, Weipeng Yan, and Yanyan Lan. 2021. Probing product description generation via posterior distillation. In *AAAI*.
Danqing Zhang, Zheng Li, Tianyu Cao, Chen Luo, Tony Wu, Hanqing Lu, Yiwei Song, Bing Yin, Tuo Zhao, and Qiang Yang. 2021. QUEACO: borrowing treasures from weakly-labeled behavior data for query attribute value extraction. In *CIKM*.
Jianguo Zhang, Pengcheng Zou, Zhao Li, Yao Wan, Xiuming Pan, Yu Gong, and Philip S. Yu. 2019a.
Multi-modal generative adversarial network for short product title generation in mobile e-commerce. In NAACL-HLT.
Tao Zhang, Jin Zhang, Chengfu Huo, and Weijun Ren.
2019b. Automatic generation of pattern-controlled product description in e-commerce. In The World Wide Web Conference.
Zhuosheng Zhang, Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, Zuchao Li, and Hai Zhao. 2023. Universal multimodal representation for language understanding. *arXiv preprint* arXiv:2301.03344.
Da-Wei Zhou, Han-Jia Ye, Liang Ma, Di Xie, Shiliang Pu, and De-Chuan Zhan. 2022a. Few-shot classincremental learning by sampling multi-phase tasks.
IEEE Transactions on Pattern Analysis and Machine Intelligence.
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. 2022b. Learning to prompt for visionlanguage models. *International Journal of Computer* Vision.
Zhihong Zhu, Xuxin Cheng, Zhiqi Huang, Dongsheng Chen, and Yuexian Zou. 2023a. Towards unified spoken language understanding decoding via labelaware compact linguistics representations. In ACL.
Zhihong Zhu, Weiyuan Xu, Xuxin Cheng, Tengtao Song, and Yuexian Zou. 2023b. A dynamic graph interactive framework with label-semantic injection for spoken language understanding. In *ICASSP 2023*.
Fuzhen Zhuang, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, Hengshu Zhu, Hui Xiong, and Qing He. 2021. A comprehensive survey on transfer learning. *Proceedings of the IEEE*.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Please see Limitations.
✓ A2. Did you discuss any potential risks of your work?
Please see Limitations.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Please see the claimed contributions in Introduction.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?**
Please see the Experiment and Analysis (Section 4 and Section 5).
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? Please see Section 4.1.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Please see Section 4.1.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Please see Table 1. We conducted 5 runs with different seeds for our experiments and reported the average results.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Please see Section 4.1.
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left Blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
ziems-etal-2023-large | Large Language Models are Built-in Autoregressive Search Engines | https://aclanthology.org/2023.findings-acl.167 | Document retrieval is a key stage of standard Web search engines. Existing dual-encoder dense retrievers obtain representations for questions and documents independently, allowing for only shallow interactions between them. To overcome this limitation, recent autoregressive search engines replace the dual-encoder architecture by directly generating identifiers for relevant documents in the candidate pool. However, the training cost of such autoregressive search engines rises sharply as the number of candidate documents increases. In this paper, we find that large language models (LLMs) can follow human instructions to directly generate URLs for document retrieval. Surprisingly, when providing a few Query-URL pairs as in-context demonstrations, LLMs can generate Web URLs where nearly 90{\%} of the corresponding documents contain correct answers to open-domain questions. In this way, LLMs can be thought of as built-in search engines, since they have not been explicitly trained to map questions to document identifiers. Experiments demonstrate that our method can consistently achieve better retrieval performance than existing retrieval approaches by a significant margin on three open-domain question answering benchmarks, under both zero and few-shot settings. The code for this work can be found at \url{https://github.com/Ziems/llm-url}. | # Large Language Models Are Built-In Autoregressive Search Engines
## Noah Ziems, Wenhao Yu, Zhihan Zhang, Meng Jiang
University of Notre Dame
{nziems2, wyu1, zzhang23, mjiang2}@nd.edu
## Abstract
Document retrieval is a key stage of standard Web search engines. Existing dual-encoder dense retrievers obtain representations for questions and documents independently, allowing for only shallow interactions between them. To overcome this limitation, recent autoregressive search engines replace the dual-encoder architecture by directly generating identifiers for relevant documents in the candidate pool. However, the training cost of such autoregressive search engines rises sharply as the number of candidate documents increases. In this paper, we find that large language models (LLMs) can follow human instructions to directly generate URLs for document retrieval. Surprisingly, when providing a few Query-URL pairs as in-context demonstrations, LLMs can generate Web URLs where nearly 90% of the corresponding documents contain correct answers to open-domain questions. In this way, LLMs can be thought of as built-in search engines, since they have not been explicitly trained to map questions to document identifiers. Experiments demonstrate that our method can consistently achieve better retrieval performance than existing retrieval approaches by a significant margin on three open-domain question answering benchmarks, under both zero and few-shot settings. The code for this work can be found at https://github.com/Ziems/
llm-url.
## 1 Introduction
Along with the success of deep learning, dualencoder based retrievers have become the dominant method for Web searching (Zhu et al., 2021; Zhao et al., 2022). For example, DPR (Karpukhin et al., 2020) employs two independent encoders to encode the question and the document respectively, then estimates their relevance by computing a single similarity score between two representations.
![0_image_0.png](0_image_0.png)
However, these methods suffer from two major drawbacks. First, the representations of questions and documents are typically obtained independently in modern dual-encoder dense retrieval models (Karpukhin et al., 2020), allowing for only shallow interactions between them (Khattab et al.,
2021). Second, the question or document representation is embedded into a single dense vector, potentially missing fine-grained information when computing the similarity between the two vector representations (Khattab and Zaharia, 2020).
Instead of computing similarity between question and document embeddings, autoregressive search engines aim to directly generate document identifiers then map them to complete documents in the predetermined candidate pool. This approach has attracted increasing interest in information retrieval (IR) and related fields (Tay et al., 2022; Bevilacqua et al., 2022; Wang et al., 2022). Compared to dual-encoder dense retrieval methods, autoregressive search engines enjoy a number of advantages. First, autoregressive generation models produce document identifiers by performing deep token-level cross-attention, resulting in a better estimation than shallow interactions in dense retrievers.
Second, autoregressive search engines have been shown to have strong generalization abilities, outperforming BM25 in a zero-shot setting (Tay et al.,
2022). While it is theoretically possible to scale an autoregressive search engine to the size of a large language model (LLM), such as GPT-3 with 175B parameters, in practice it is not feasible due to the computational overhead of training such a large autoregressive search engine from scratch (Tay et al.,
2022). To reduce the high training cost of autoregressive search engine, a smaller model size is preferred. However, the results of our pilot study in Figure 1 show smaller language models are significantly worse at mapping passages to document identifiers than larger ones. Moreover, different retrieval tasks can have unique retrieval requirements. One task may require a model to retrieve factual evidence to support or refute a claim (*i.e.*, fact checking) (Onoe et al., 2021) while another may require a model to retrieve specific trivia information about an entity (*i.e.*, entity linking) (Petroni et al., 2021; Zhang et al., 2022). It would be better if the retriever was capable of generalizing to new retrieval tasks with only a few examples.
In this work, we explore the use of in-context demonstrations to prompt LLMs to directly generate web URLs for document retrieval, namely LLM-URL. Surprisingly, we find that by providing a few (query, URL) pairs as contextual demonstrations, large language models (e.g. GPT-3) generate Web URLs where nearly 90% of the corresponding documents contain answers to opendomain questions. In this way, LLMs can be thought of as built-in search engines, as they have not been explicitly trained to map questions or documents to identifiers. Instead of using newlycreated document identifiers, LLM-URL leverages existing and widely used document identifiers directly, *i.e.*, URLs. We compare our approach to existing document retrieval methods on three different open-domain question answering (QA) datasets:
WebQ (Berant et al., 2013), NQ (Kwiatkowski et al., 2019), and TriviaQA (Joshi et al., 2017).
Further, to avoid exceeding the limit on the number of input tokens of LLMs, we employ an unsupervised passage filtering module to remove irrelevant portions of supporting documents. To summarize, our main contributions are as follows:
1. We reveal that LLMs are built-in autoregressive search engines capable of document re-
trieval by directly generating Web page URLs under both zero and few-shot settings.
2. We show retrieving documents by generating URLs with LLMs significantly outperforms existing methods for document retrieval, as measured by Recall@K. Further, we show that breaking the retrieved documents into passages then using a ranker to filter the passages significantly reduces the number of supporting passages while maintaining high recall.
3. We show the retrieved documents improve downstream QA performance as measured by EM when compared to baseline methods.
## 2 Related Work 2.1 Traditional Document Retrievers
Traditional methods such as TF-IDF and BM25 explore sparse retrieval strategies by matching the overlapping contents between questions and passages (Robertson and Zaragoza, 2009; Chen et al.,
2017; Yang et al., 2019). DPR (Karpukhin et al., 2020) revolutionized the field by utilizing dense contextualized vectors for passage indexing. It is first initialized as a pretrained BERT model, then trained discriminatively using pairs of queries and relevant documents, with hard negatives from BM25. Recent research has improved DPR via better training strategies (Xiong et al., 2020; Qu et al., 2021; Zhang et al., 2023a) and passage reranking (Mao et al., 2021; Yu et al., 2021; Ju et al., 2022). However, representations of questions and documents are typically obtained independently in modern dual-encoder dense retrieval models (Karpukhin et al., 2020; Xiong et al., 2020),
allowing for only shallow interactions between them (Khattab et al., 2021).
## 2.2 Autoregressive Search Engines
Recent works have investigated the use of autoregressive language models to generate identifier strings for documents as an intermediate target for retrieval (Yu et al., 2022), such as Wikipedia page titles (De Cao et al., 2020), root-to-leaf paths in a hierarchical cluster tree (Tay et al., 2022), or distinctive n-grams that can be mapped to full passages (Bevilacqua et al., 2022). Since the series of work was carried out almost simultaneously by different research groups, they are often referred to multiple different names in the literature, such as autoregressive search engine, differential search
![2_image_0.png](2_image_0.png)
index (DSI), and neural document indexers (NDI).
Compared to traditional dense document retrievers, these methods leverage a generation model to produce the document indexes. By forcing the generation model to explain every token in the question and document using cross-attention, the generation abilities of the model significantly improve.
Our work is closely related to these works, showing experimentally that properly prompting pretrained large language models can achieve better performance than traditional dense retrieval models (Ouyang et al., 2022; Yu et al., 2023) .
## 3 Proposed Method
In this section we describe a new method, which we refer to as LLM-URL, that employs a large language model (LLM) to perform effective and efficient web document retrieval for knowledgeintensive NLP tasks such as open-domain question answering (ODQA).
ODQA is a two step process consisting of a *retriever* and a *reader*. Given a question q, the goal of the *retriever* is to find the top-n passages Pn relevant to answering q. Given q and the top-n relevant passages Pn, the goal of the *reader* is to use internal knowledge along with Pn to generate a correct answer a to question q. The passage retriever plays an essential role in this process. When Pn contains more passages that have the correct answer, the reader has a higher chance of finding it. Instead of *heavily training a dedicated retriever*,
our LLM-URL solves the problem in a different way as shown in Figure 2.
Given a question q, our LLM-URL should find a set of relevant passages to Pn and give it to the reader. First, it prompts a LLM (e.g., GPT3) to directly generate m URLs for q. By default, it uses "Which m Wikipedia URLs would have the answer?" as the instruction which is appended to each input question as the prompt. We also append the beginning of the Wikipedia URL
(https://en.wikipedia.org/wiki) to the end of the prompt to encourage the generation of URLs and restrict generation to the Wikipedia article URL format. As LLM has the ability of in-context learning, we take this advantage to enable the fewshot setting in the prompt. The prompt described above also includes a series of in-context demonstrations. Each demonstration contains a question sampled from the training set following the prompt described above. At the end of each demonstration, m URLs which point to gold-labeled documents are listed. In the zero-shot setting, the original prompt is used without any demonstrations. In the few-shot setting, the original prompt appended to a series of d demonstrations (d=10 in this work).
Given the prompt, the LLM returns a generated sequence of tokens. Ideally these tokens would construct a sequence of m separated URLs. In practice, the generated sequence often has extra information such as a proposed answer that is unreliable and needs to be filtered. We use a regular expression to extract all URLs from the sequence and discard all
![3_image_0.png](3_image_0.png)
extra information. This also filters out many URLs that are improperly formatted. After extraction, GET requests are made using the extracted URLs and the contents of each retrieval is used to create a set of fetched documents Df . Often, |Df | < m because some of the generated URLs do not follow a correct format or do not point to real web pages on the Internet.
The set of fetched documents Df can be passed directly to a reader if m is a small value or the reader being used can handle many large documents. However, this is usually not the case. Often, Df needs to be filtered such that only a small number of the most relevant passages are given to the reader. To do this, our LLM-URL first breaks each document d ∈ Df into a set of small passages. The passages from each document are collected into a new set, Pf . A scoring function is used to quantify the relevance of each passage with respect to the question q, with high values indicating high relevance with respect to q and low scores indicating low relevance. A simple scoring function such as BM25 can be used or a more complex one such as DPR (Karpukhin et al., 2020) can. The passages in Pf are then sorted from highest to lowest and the top n are kept as Pn. Finally, Pn are given to a reader along with q to generate an answer.
Advantages of **LLM-URL** : Existing autoregressive retrieval methods such as DSI and SEAL
use a pre-trained large language model then fine tune it to take questions as input and generate relevant document identifiers as output (Tay et al.,
2022; Bevilacqua et al., 2022). Both DSI and SEAL do extensive experiments on a variety of document identifiers which are generated by a heavily trained language model. Examples of these identifiers include unstructured atomic identifiers, naively structured string identifiers, hierarchical document clustering, and others. LLM-URL instead uses pre-existing document identifiers that exist on the internet: URLs. Using URLs instead of the aforementioned identifiers has multiple advantages. URLs often contain words related to the information they link to, allowing for strong association of topics with their URLs. For example, the title of each Wikipedia page is used in its URL, allowing the LLM is able to directly generate the URL by leveraging semantic information from the question. To validate the importance of URLs themselves, we also experiment with prompting the LLM to generate Wikipedia titles instead of URLs and find Recall@1 significantly reduces compared to prompting for URL generation. We believe this is because the URL format itself helps prompt the model for specific information in a specific format.
Further, the use of URLs allows us to simply obtain the evidence document via a HTTP request without any need of training a model or building an index to find the mapping between identifiers and documents.
## 4 Experiments
In this section, we present and discuss results from our experiments to "directly" demonstrate that our LLM-URL is a strong retriever and "indirectly" show that it achieves competitive performance on the ODQA task against state-of-the-art solutions.
Large Language Model: Following Figure 1, the large language model we use to generate URLs for our experiments is GPT-3 *text-davinci-003* with greedy decoding and a temperature of 0. A variety of different prompts are tested for generating URLs, but little difference in performance is observed, so we simply use the best performing prompt which is discussed in Section 3.
Datasets: We use three ODQA datasets including Web Questions, Natural Questions, and Trivia QA. We use them to perform evaluation on both the task of document or passage retrieval and ODQA itself.
## 4.1 Retrieval
We expect retrievers to find the most relevant documents and/or passages. We conduct experiments on both document retrieval and passage retrieval.
Evaluation metrics. Recall@k (k=1, 10, 100) is calculated by measuring the percentage of docu-
| Method | Document Recall@1 | Document Recall@10 | | | | |
|-------------------------------------|---------------------|----------------------|------|------|----------|------|
| WebQ | NQ | TriviaQA | WebQ | NQ | TriviaQA | |
| Contriever (Izacard et al., 2021) | 63.8 | 53.2 | 60.6 | 63.8 | 80.8 | 82.5 |
| BM25 (Robertson and Zaragoza, 2009) | 49.5 | 47.2 | 63.0 | 81.5 | 76.8 | 82.3 |
| Google API | 61.1 | 55.5 | 51.4 | - | - | - |
| LLM-URL (Zero-Shot) | 76.8 | 61.7 | 71.3 | 87.7 | 83.2 | 85.5 |
| LLM-URL (Few-Shot) | 79.7 | 62.6 | 73.5 | 89.9 | 83.9 | 86.8 |
Table 1: Document retrieval as measured by Recall@k. Google API Recall@10 results are left out due to high cost.
| Method | Passage Recall@1 | Passage Recall@10 | Passage Recall@100 | | | | | | |
|---------------------|--------------------|---------------------|----------------------|------|----------|------|------|----------|------|
| WebQ | NQ | TriviaQA | WebQ | NQ | TriviaQA | WebQ | NQ | TriviaQA | |
| Contriever | 18.2 | 18.8 | 34.0 | 55.7 | 54.8 | 67.9 | 79.8 | 79.6 | 83.3 |
| BM25 | 19.1 | 22.8 | 46.2 | 51.8 | 55.6 | 71.7 | 76.6 | 79.6 | 83.9 |
| LLM-URL (Zero-Shot) | 22.2 | 24.0 | 46.7 | 63.1 | 60.6 | 76.6 | 83.8 | 78.3 | 83.6 |
| LLM-URL (Few-Shot) | 22.3 | 25.5 | 49.1 | 64.8 | 60.8 | 77.8 | 85.9 | 79.0 | 84.8 |
ments or passages in the top-k which contain one of the gold labeled answers while exact match is calculated by the percentage of predicted answers which match one of the gold labeled answers. While LLM-URL is not constrained by which URLs can be generated for document retrieval, we restrict all generations to Wikipedia URLs only for fair comparison, as discussed in Section 3 All baseline models also use Wikipedia for retrieval, with some fetching documents in real time and others fetching from an offline corpus.
## 4.1.1 Document Retrieval
Baselines: Contriever (Izacard et al., 2021) and BM25 (Robertson and Zaragoza, 2009) are usually used for passage retrieval. Contriever is a dual encoder which uses a dot product between dense representations of a question and passage to calculate relevance. BM25 is a sparse retriever which uses the overlapping contents between question and passage to calculate relevance. Because we use the same passage size to chunk Wikipedia documents, we were able to map their retrieved passages back to the original documents. We use Google API (Brin and Page, 1998) restricted to Wikipedia as a third baseline to retrieve relevant documents given a question.
Existing works such as DSI and SEAL have investigated the use of autoregressive language models to generate identifier strings for documents as an intermediate target for retrieval. DSI is a Trans-
![4_image_0.png](4_image_0.png)
former which has been trained to map directly from question to document identifiers by memorizing the contents of the entire corpus (Tay et al.,
2022). SEAL is a variant of DSI which uses ngrams as document ids to improve retrieval performance
(Bevilacqua et al., 2022). Neither DSI nor SEAL
report retrieval results on full documents and do not have publicly available implementations, so they are left out and discussed in Table 3 and Section 4.1.2 on passage retrieval.
Unlike the baselines, our LLM-URL employs an LLM. It has two settings: zero-shot and fewshot. In the zero-shot setting, no in-context demonstrations are given whereas in the few-shot setting a few demonstrations are appended to the prompt.
| Method | Recall@1 | Recall@10 |
|------------------------------------------|------------|-------------|
| DSI1 | 25.1 | 56.6 |
| SEAL1 | 26.3 | 74.5 |
| LLM-URL (Zero-Shot) | 24.0 | 60.6 |
| LLM-URL (Few-Shot) | 25.5 | 60.8 |
| 1 explicitly trained for retrieval on NQ | | |
Table 3: Passage retrieval as measured by Recall@1 and Recall@10. LLM-URL is equipped with BM25 for passage ranking. Other datasets are left out due to not being reported in either paper and no public implementations.
Results: The results of our document retrieval experiments are shown in Table 1. In this setting Recall@k is calculated directly after the documents are retrieved with no intermediary steps. LLMURL significantly outperforms baseline methods on all datasets for both Recall@1 and Recall@10.
Specifically, zero-shot LLM-URL improves document Recall@1 relatively by 20.4%, 11.2%, and 13.2% over the strongest baseline on WebQ, NQ,
and TriviaQA, respectively. Few-shot LLM-URL
further expands the improvement to 24.9%, 12.8%,
and 16.7%, respectively. URLs can be extracted from the large-scale parameters of LLMs, and these URLs can lead to more accurate documents than what existing methods can retrieve. Both the LLM
parameters and in-context demonstrations are significantly useful in document retrieval.
Figure 3 shows Recall scores converge when the number of generated URLs m increases. Due to the diminishing returns from increasing m, our experiments do not explore values of m greater than 10.
Are the generated URLs valid? It is worth noting that the generated URLs are not always valid.
Some generated URLs do not have valid URL syntax and some point to Wikipedia pages that do not exist. Rarely, URLs will be generated for domains aside from Wikipedia. For fair comparison, all of these faulty URLs are discarded and only documents coming from valid Wikipedia articles are kept.
Further analysis is done to measure the ratio of valid Wikipedia URLs while the total number of generated URLs m increases from 1 to 10, shown in Figure 4. The number of valid URL generations remains surprisingly high (i.e., higher than 68%)
as m increases from 1 to 10. However, the rate of valid generations appears to fall off as m increases,
| Method | Zero-Shot QA EM | | |
|---------------------------------------------------------------------------------|-------------------|----------|------|
| WebQ | NQ | TriviaQA | |
| Contriever + InstructGPT | 16.8 | 19.1 | 52.4 |
| BM25 + InstructGPT | 16.0 | 20.5 | 53.3 |
| Google + InstructGPT | 19.9 | 27.8 | 58.7 |
| GenRead (InstructGPT) | 24.8 | 28.2 | 59.3 |
| DSI1 + FiD | - | 31.42 | - |
| SEAL1 + FiD | - | 43.6 | 41.8 |
| InstructGPT (no docs.) | 18.6 | 20.9 | 52.6 |
| LLM-URL (Zero-Shot) | 28.1 | 26.4 | 60.1 |
| LLM-URL (Few-Shot) | 29.0 | 27.3 | 60.7 |
| 1 explicitly trained for retrieval on NQ 2 result from Bevilacqua et al. (2022) | | | |
Table 4: Zero-shot open-domain QA performance as measured by exact match (EM). All LLM-URL models use InstructGPT as the reader unless otherwise stated.
indicating there are diminishing returns from each marginal increase of m.
## 4.1.2 Passage Retrieval
Baselines: Four methods, including Contriever, BM25, DSI (Tay et al., 2022), and SEAL (Bevilacqua et al., 2022), were introduced in Section 4.1.1.
Google API was used for document retrieval and not applied to passages.
Results: The results of our passage retrieval experiments are shown in Table 2. In this setting Recall@k is calculated on the top-k passages ranked by the ranker instead of just on the raw documents shown in Table 1. LLM-URL performs slightly better than baseline methods for Recall@1 and Recall@10 and as well as baseline methods for Recall@100. In the zero-shot setting, LLMURL improves relative Recall@1 by 16.2%, 5.3%,
and 1.1% with respect to the strongest baseline on WebQ, NQ, and TriviaA respectively. The few-shot setting of LLM-URL expands the improvement to 16.8%, 11.8%, and 6.3%, respectively. For Recall@10, similar improvements can be seen.
For Recall@100, performance is better relative to baseline models for all datasets except NQ. In the zero-shot setting, LLM-URL improves the relative Recall@100 by 5.0% for WebQ and performs slightly worse than the best baseline method on NQ and TriviaQA by 1.7% and 0.4% respectively. The few-shot setting of LLM-URL for Recall@100 shows a slight improvement on WebQ and TriviaQA, but performs slightly worse than the strongest baseline on NQ.
Despite being limited to only the passages from 10 documents, LLM-URL performs better than baseline methods for smaller k and performs as well as baseline methods for higher values of k.
The comparison between LLM-URL and existing document identifier-based methods such as DSI
and SEAL are shown in Table 3. For Recall@1, zero-shot LLM-URL performs slightly worse than the best baseline by 8.8%. This performance gap is slightly smaller in the few-shot setting with LLMURL performing 3.1% worse than the best baseline.
For Recall@10, zero-shot LLM-URL performs worse than the best baseline by 18.7%. Few-shot LLM-URL performs only slightly better than the zero-shot setting, performing worse than the best baseline by 18.4%.
## 4.2 Open-Domain Question Answering
Evaluation metric: We use exact match (EM),
which is short for *exact string match with the correct answer*, because the goal of ODQA is to find an exact answer to any question using Wikipedia articles.
Results: Here we discuss the downstream QA
performance of LLM-URL. In this setting, an answer only has an exact match if the normalized generated text is within the list of acceptable answers to a question. When combined with InstructGPT as a reader, LLM-URL performs significantly better on WebQ and slightly better on TriviaQA when compared with the best performing baseline methods.
On NQ, LLM-URL+InstructGPT performs worse than baseline NDIs and only slightly worse than the best remaining baseline. In the zero-shot setting, LLM-URL+InstructGPT improves upon the best baseline method by 13.3% and 1.3% on WebQ
and TriviaQA respectively. LLM-URL +InstructGPT performs worse than the best baseline method by 39.5% on NQ. In the few-shot setting, LLMURL+InstructGPT performs better than the best baseline method by 16.9% and 2.3% on WebQ and TriviaQA respectively. LLM-URL+InstructGPT
performs worse than the best baseline method by 37.4% on NQ.
Despite not being explicitly trained for retrieval, LLM-URL+InstructGPT performs significantly better than baseline methods for WebQ, achieves on-par performance with existing methods for TriviaQA, and performs slightly worse than existing methods for NQ.
Our results indicate LLM-URL could be a
![6_image_0.png](6_image_0.png)
promising solution to retrieval for a wide range of knowledge intensive tasks with little to no training data required.
## 4.3 Discussions 4.3.1 Time Sensitive Queries
There are a number of additional qualitative benefits that LLM-URL has over existing methods.
One large advantage of LLM-URL is that the documents are retrieved in real time from the source.
So long as the source stays up to date without the URL itself changing, our proposed method is capable of answering time sensitive queries without any extra modifications.
In contrast, existing dual encoder approaches such as Contriever require a document to be reencoded each time it changes. Existing methods such as SEAL (Bevilacqua et al., 2022) and DSI
are also tricky to keep up to date for time sensitive queries as the LLM would have to be retrained to learn the new content of the updated documents.
## 4.3.2 Frequent Entities Analysis
Following (Mallen et al., 2022), we analyze the retrieval performance of LLM-URL when the goldlabeled answer entity is common versus when it is not. For each question-answer pair in a given dataset we check to see if the labeled entity exists within the top one-million common entities from Wikipedia. Using this, we split our dataset into two distinct subsets: question-answer pairs that contain a common entity and those that do not. In measuring the performance of our model on these two distinct sets across Web Questions, Natural Questions, and TriviaQA, we find LLMURL performs significantly better on common entity question-answer pairs. The results of our analysis are shown in Figure 5. Across all three datasets,
| LLM-URL | Exists | Answer | Contriever | Answer | BM25 | Answer |
|----------------------|----------|----------|-----------------------|----------|---------------------|----------|
| wiki/Jellyfish | ✓ | ✓ | Smack (ship) | ✗ | Collective noun | ✗ |
| wiki/Collective_noun | ✓ | ✗ | Collective noun | ✗ | Determiner | ✗ |
| wiki/Smack_(group) | ✗ | ✗ | Cetacean intelligence | ✗ | Glass sea creatures | ✗ |
| wiki/Cnidaria | ✓ | ✓ | Well smack | ✗ | Minotaur | ✗ |
| wiki/Medusozoa | ✓ | ✓ | Plankton | ✗ | Mass noun | ✗ |
| wiki/Scyphozoa | ✓ | ✓ | Sperm whale | ✗ | Well smack | ✗ |
| wiki/Cubozoa | ✓ | ✓ | Loaded question | ✗ | Nomenclature | ✗ |
| wiki/Hydrozoa | ✓ | ✓ | Jabberwocky | ✗ | Archomental | ✗ |
| wiki/Staurozoa | ✓ | ✓ | Merrow | ✗ | Honey Smacks | ✗ |
| wiki/Rhizostomeae | ✓ | ✓ | Loaded question | ✗ | Well smack | ✗ |
the recall of common-entity question-answer pairs is many times greater than the recall from the rest of the dataset.
Previous work has shown LLMs in the closedbook setting, where the model must rely solely on the information contained within its weights, perform much better on common-entities versus uncommon ones (Mallen et al., 2022). Our results show this problem extends beyond the closed-book setting and also applies to retrieval when using LLM-URL. This also could explain the high word count from documents we found when evaluating LLM-URL. The average Wikipedia article is 644 words, but the average word count from Wikipedia documents retrieved via LLM-URL was 10k. We believe this discrepancy is caused by common entities having much more detail in their Wikipedia articles and in turn having much higher word count.
## 4.3.3 Case Study
In Table 5, we show a case study comparing LLMURL with two baseline retrieval methods, BM25 and Contriever, on the question "A 'smack' is a collective noun for a group of which sea creatures?"
which is in the TriviaQA test set. The gold-labeled answer to this question is "jellyfish".
In the closed-book setting, InstructGPT mistakenly predicts "dolphins" as the answer. When using Contriever to retrieve 10 passages from Wikipedia given the query, none of the passages contains the gold answer. For instance, Contriever retrieves passages about "smack", a kind of fishing vessel, along with other passages about sperm whales, plankton, and other unrelated topics. Similar results are
## Found While Using Bm25 As The Retriever.
In contrast, LLM-URL performs much better in this scenario, retrieving 7 documents which contain the answer. The top retrieved document is exactly about the gold answer "jellyfish". The fourth to the tenth documents all talk about different types of jellyfish. After being chunked into passages then sorted by the ranker, the top 10 passages are concatenated. Among them, it contains "A group of jellyfish is called a smack," which contains the answer to the question and comes directly from the first retrieved document, titled "jellyfish." When InstructGPT is then prompted with these 10 passages along with the question, the gold answer "jellyfish" is correctly generated.
This case study highlights multiple advantages of LLM-URL . First, LLM-URL finds documents related to both the question and the answer. It directly locates documents that talks about "jellyfish" instead while BM25 and Contriever locate documents related to the question only–not the answer.
Second, LLM-URL is more precise than BM25 or Contriever. In this case, 7 out of 10 generated URLs from LLM-URL point to a Wikipedia document that contains the answer. However, both BM25 and Contriever fail to retrieve any documents containing the answer. Third, the set of documents retrieved by LLM-URL are complementary to each other, while in BM25 or contriever, each document in the top-10 is selected independently.
This is because the LLM is able to refer to previous generated URLs before it generates the next one, allowing each newly generated URL to be conditioned on all the previous URLs. This leads to a more informative evidence context in open-domain question answering.
## 5 Conclusion And Future Work
In this paper, we explored whether large language models can generate URLs prompted by human instructions for document retrieval. Surprisingly, we found that by providing a few (query, URL) pairs as in-context demonstrations, large language models (e.g. GPT-3) generated Web URLs where near 90% of the corresponding documents contain correct answers to open-domain questions in WebQ.
Furthermore, by breaking the retrieved documents into passages then ranking them with BM25, we showed a significant number of unnecessary passages could be filtered out while retaining high recall, which outperformed baseline methods by a significant margin.
There are numerous exciting directions for future work. While a number of broad spectrum retrieval benchmarks such as BIER (Thakur et al., 2021)
exist, it remains to be seen whether the few-shot demonstrations shown in this work can be further tuned for specific retrieval tasks. Promptagator
(Dai et al., 2022) shows significant performance improvements can be achieved by tuning prompts in a similar way.
Further, it remains to be seen whether fine tuning the prompt for each individual question can further improve the retrieval performance. As with Promptagator, prior work has shown using clustering to select diverse demonstrations for any given question further improves retrieval performance as well as downstream QA performance.
## Limitations
Despite the strong performance on the presented datasets, our approach is limited in its ability to update knowledge state and adapt to new domains.
A major feature of *retrieve-then-read* is the ability to swap in new documents when new information is learned, such as temporally more recent documents, or adding in documents from a new domain to quickly adapt to a new downstream task. Our approach relies on a large language model to contain all this knowledge and adding new knowledge would likely require some retraining. In addition, large generation models still suffer from hallucination errors, resulting in incorrect predictions. When tasked with generating 10 URLs, LLM-URL may only generate 6 or 7 which link to valid documents.
Finally, our approach involves very large language models, slow web requests, and document processing which may make it cumbersome to use in practice.
## Acknowledgements
This work was supported by NSF IIS-2119531, IIS2137396, IIS-2142827, CCF-1901059, and ONR
N00014-22-1-2507. Wenhao Yu was partly supported by the Bloomberg Data Science Fellowship.
## References
Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In *EMNLP*, pages 1533–
1544.
Michele Bevilacqua, Giuseppe Ottaviano, Patrick Lewis, Wen-tau Yih, Sebastian Riedel, and Fabio Petroni.
2022. Autoregressive search engines: Generating substrings as document identifiers. *arXiv preprint* arXiv:2204.10628.
Sergey Brin and Lawrence Page. 1998. The anatomy of a large-scale hypertextual web search engine. In Proceedings of the Seventh International Conference on World Wide Web 7, WWW7, page 107–117, NLD.
Elsevier Science Publishers B. V.
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer opendomain questions. In *Procs. of ACL*.
Zhuyun Dai, Vincent Y. Zhao, Ji Ma, Yi Luan, Jianmo Ni, Jing Lu, Anton Bakalov, Kelvin Guu, Keith B.
Hall, and Ming-Wei Chang. 2022. Promptagator:
Few-shot dense retrieval from 8 examples. *arXiv* preprint arXiv:2209.11755.
Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. 2020. Autoregressive entity retrieval.
In *International Conference on Learning Representations*.
Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2021. Unsupervised dense information retrieval with contrastive learning.
Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In ACL, pages 1601–1611.
Mingxuan Ju, Wenhao Yu, Tong Zhao, Chuxu Zhang, and Yanfang Ye. 2022. Grape: Knowledge graph enhanced passage reader for open-domain question answering. *arXiv preprint arXiv:2210.02933*.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for opendomain question answering. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP).
Omar Khattab, Christopher Potts, and Matei Zaharia.
2021. Relevance-guided supervision for openqa with colbert. *Transactions of the Association for Computational Linguistics*, 9:929–944.
Omar Khattab and Matei Zaharia. 2020. Colbert: Efficient and effective passage search via contextualized late interaction over bert. In *Proceedings of the 43rd* International ACM SIGIR conference on research and development in Information Retrieval, pages 39–
48.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: A benchmark for question answering research. *TACL*, pages 452–
466.
Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Hannaneh Hajishirzi, and Daniel Khashabi.
2022. When not to trust language models: Investigating effectiveness and limitations of parametric and non-parametric memories. *arXiv preprint* ArXiv:2212.10511.
Yuning Mao, Pengcheng He, Xiaodong Liu, Yelong Shen, Jianfeng Gao, Jiawei Han, and Weizhu Chen.
2021. Reader-guided passage reranking for opendomain question answering. In *Findings of ACLIJCNLP*.
Yasumasa Onoe, Michael J. Q. Zhang, Eunsol Choi, and Greg Durrett. 2021. CREAK: A dataset for commonsense reasoning over entity knowledge. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al.
2022. Training language models to follow instructions with human feedback. *arXiv preprint* arXiv:2203.02155.
Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vladimir Karpukhin, Jean Maillard, et al. 2021. Kilt: a benchmark for knowledge intensive language tasks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2523–2544.
Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, and Haifeng Wang. 2021. Rocketqa: An optimized training approach to dense passage retrieval for opendomain question answering. In *Procs. of NAACL*.
Stephen E. Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends® *in Information Retrieval*, 3(4):333–389.
Yi Tay, Vinh Q Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh Mehta, Zhen Qin, Kai Hui, Zhe Zhao, Jai Gupta, et al. 2022. Transformer memory as a differentiable search index. arXiv preprint arXiv:2202.06991.
Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. 2021. BEIR:
A heterogeneous benchmark for zero-shot evaluation of information retrieval models. In *Thirty-fifth Conference on Neural Information Processing Systems* Datasets and Benchmarks Track (Round 2).
Yujing Wang, Yingyan Hou, Haonan Wang, Ziming Miao, Shibin Wu, Hao Sun, Qi Chen, Yuqing Xia, Chengmin Chi, Guoshuai Zhao, Zheng Liu, Xing Xie, Hao Sun, Weiwei Deng, Qi Zhang, and Mao Yang. 2022. A neural corpus indexer for document retrieval. In *Advances in Neural Information Processing Systems*.
Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N Bennett, Junaid Ahmed, and Arnold Overwijk. 2020. Approximate nearest neighbor negative contrastive learning for dense text retrieval. In International Conference on Learning Representations.
Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019.
End-to-end open-domain question answering with bertserini. In *NAACL 2019 (demo)*.
Donghan Yu, Chenguang Zhu, Yuwei Fang, Wenhao Yu, Shuohang Wang, Yichong Xu, Xiang Ren, Yiming Yang, and Michael Zeng. 2021. Kg-fid: Infusing knowledge graph in fusion-in-decoder for open-domain question answering. arXiv preprint arXiv:2110.04330.
Wenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, and Meng Jiang. 2023. Generate rather than retrieve: Large language models are strong context generators. International Conference for Learning Representation (ICLR).
Wenhao Yu, Chenguang Zhu, Zaitang Li, Zhiting Hu, Qingyun Wang, Heng Ji, and Meng Jiang. 2022. A
survey of knowledge-enhanced text generation. ACM
Computing Surveys (CSUR).
Zhihan Zhang, Xiubo Geng, Tao Qin, Yunfang Wu, and Daxin Jiang. 2021. Knowledge-aware procedural text understanding with multi-stage training. In WWW '21: The Web Conference 2021.
Zhihan Zhang, Wenhao Yu, Zheng Ning, Mingxuan Ju, and Meng Jiang. 2023a. Exploring contrast consistency of open-domain question answering systems on minimally edited questions. Trans. Assoc. Comput.
Linguistics.
Zhihan Zhang, Wenhao Yu, Mengxia Yu, Zhichun Guo, and Meng Jiang. 2023b. A survey of multi-task learning in natural language processing: Regarding task relatedness and training methods. In *Proceedings* of the 17th Conference of the European Chapter of the Association for Computational Linguistics, EACL
2023.
Zhihan Zhang, Wenhao Yu, Chenguang Zhu, and Meng Jiang. 2022. A unified encoder-decoder framework
with entity memory. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language* Processing, EMNLP 2022.
Wayne Xin Zhao, Jing Liu, Ruiyang Ren, and JiRong Wen. 2022. Dense text retrieval based on pretrained language models: A survey. *arXiv preprint* arXiv:2211.14876.
Fengbin Zhu, Wenqiang Lei, Chao Wang, Jianming Zheng, Soujanya Poria, and Tat-Seng Chua. 2021.
Retrieving and reading: A comprehensive survey on open-domain question answering. *arXiv preprint* arXiv:2101.00774.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Our limitations section is not numbered, but it is the last section of our paper.
✗ A2. Did you discuss any potential risks of your work?
We do not believe any of the risks mentioned in the checklist apply to this paper.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Our introduction is found in section one and the abstract comes before that.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Section 4 Describes Our Experiments
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Yes these details are found in a graph in Section 1 and hyperparameters are found in our experiments section (4).
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Yes we discuss our hyperparameters in section 4
✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Our method is deterministic so there is no variation in our results.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
zhu-etal-2023-beyond | Beyond Triplet: Leveraging the Most Data for Multimodal Machine Translation | https://aclanthology.org/2023.findings-acl.168 | Multimodal machine translation (MMT) aims to improve translation quality by incorporating information from other modalities, such as vision. Previous MMT systems focus on better access and use of visual information and tend to validate their methods on image-related datasets. However, these studies face two challenges. First, they can only utilize a limited amount of data that is composed of bilingual texts and images (referred to as {``}triple data{''}), which is scarce. Second, current benchmarks for MMT are restricted and do not correspond to realistic scenarios. Therefore, this paper correspondingly establishes new methods and a new dataset for MMT. We propose a novel framework for MMT that addresses these challenges by utilizing large-scale non-triple data, such as monolingual image-text and parallel text-only data. Additionally, we construct a new e-commercial multimodal translation dataset, named EMMT, of which the test set is specifically designed to include ambiguous words that require visual context for accurate translation. Experiments show that our method is well-suited for real-world scenarios and can significantly improve translation performance with more non-triple data. In addition, our model also rivals or surpasses various SOTA models in conventional multimodal translation benchmarks. | # Beyond Triplet: Leveraging The Most Data For Multimodal Machine Translation
Yaoming Zhu, Zewei Sun, Shanbo Cheng, Luyang Huang, Liwei Wu, Mingxuan Wang ByteDance
{zhuyaoming,sunzewei.v,chengshanbo}@bytedance.com
{huangluyang,wuliwei.000,wangmingxuan.89}@bytedance.com
## Abstract
Recent work has questioned the necessity of visual information in Multimodal Machine Translation (MMT). This paper tries to answer this question and build a new benchmark in this work. As the available dataset is simple and the text input is self-sufficient, we introduce a challenging dataset called EMMT, whose testset is deliberately designed to ensure ambiguity.
More importantly, we study this problem in a real-word scenario towards making the most of multimodal training data. We propose a new framework 2/3-Triplet which can naturally make full use of large-scale image-text and parallel text-only data. Extensive experiments show that visual information is highly crucial in EMMT. The proposed 2/3-Triplet outperforms the strong text-only competitor by 3.8 BLEU score, and even bypasses a commercial translation system. 1
## 1 Introduction
Multimodal Machine Translation (MMT) is a machine translation task that utilizes data from other modalities, such as images. Previous studies propose various methods to improve translation quality by incorporating visual information and showing promising results (Lin et al., 2020; Caglayan et al.,
2021; Li et al., 2022a; Jia et al., 2021). However, manual image annotation is relatively expensive; at this stage, most MMT work is applied on a small and specific dataset, Multi30K (Elliott et al., 2016).
The current performance of the MMT system still lags behind the large-scale text-only Neural Machine Translation (NMT) system, which hinders the real-world applicability of MMT.
We summarize the limitations of the current MMT in two aspects. The first limitation is the size of the training data. Usually, the performance of MMT heavily relies on the triple training data:
1Codes and data are available at https://github.
com/Yaoming95/23Triplet
![0_image_0.png](0_image_0.png)
parallel text data with corresponding images. The triplets are much rarer for collection and much more costly for annotation than monolingual imagetext and parallel text data, as in Figure 1. Considering that current MT systems are driven by a massive amount of data (Aharoni et al., 2019), the sparsity of multimodal data hinders the large-scale application of these systems. Some researchers have proposed retrieve-based approaches (Zhang et al., 2020; Fang and Feng, 2022), aiming to construct pseudo-multimodal data through text retrieval. However, their constructed pseudo-data face problems like visual-textual mismatches and sparse retrieval. Besides, the models still cannot take advantage of monolingual image-text pairs.
The second limitation is the shortage of proper benchmarks. Although several researchers have examined the benefit of visual context upon the translation when textural information is degradated (Caglayan et al., 2019; Wang and Xiong, 2021), the improvements remain questionable. Wu et al. (2021) and Li et al. (2021) argue that vision contributes minor in previous MMT systems, and the images in the previous benchmark dataset provide limited additional information. In many cases, the translation of sentences relies on textual other than image information. The texts contain complete contexts and are unambiguous, leaving the usage of images doubtful. Therefore, a benchmark in that the sentences can not be easily translated without visual information is much needed.
To address these limitations, we propose models to make the most of training data and build a challenge and real-world benchmark to push the realworld application of MMT research. At first, we propose a new framework, named 2/3-Triplet, which can use both parallel text and image-text data. It provides two different ways of exploiting these data based on the continuous vision feature and discrete prompt tokens, respectively. The two approaches are not mutually exclusive and can be used jointly to improve performance within the same framework. It is also worth mentioning that the prompt approach is easy to deploy without modifying the model architecture.
In addition, we present a new real-world dataset named EMMT. We collect parallel text-image data from several publicly available e-commerce websites and label the translation by 20 language experts. To build a challenge test set, we carefully select ambiguous sentences that can not be easily translated without images. This high-quality dataset contains 22K triplets for training and 1000 test examples, along with extra image-text and parallel text data.
Comprehensive experiments show that 2/3-Triplet rivals or surpasses text-only and other MMT competitors on EMMT, as well as previous benchmarks. Especially, 2/3-Triplet consistently improves the strong text-only baseline by more than 3 BLEU scores in various settings, showing the importance of visual information.
## 2 Related Work
Researchers applied multimodal information to enhance machine translation systems since the statistical machine translation era (Hitschler et al., 2016; Afli et al., 2016). With the rise of neural networks in machine translation, researchers have focused on utilizing image information more effectively. Early work used image features as initialization for neural MT systems (Libovický and Helcl, 2017). More recent studies proposed multimodal attention mechanisms (Calixto et al., 2017; Yao and Wan, 2020),
enhanced text-image representations using graph neural networks (Lin et al., 2020), latent variable models or capsule networks (Yin et al., 2020), and used object-level visual grounding information to align text and image (Wang and Xiong, 2021). Li et al. (2022a) found that a stronger vision model is more important than a complex architecture for multimodal translation.
As we discussed earlier, these methods are limited to bilingual captions with image data, which is scarce. Therefore, some researchers (Zhang et al.,
2020; Fang and Feng, 2022) also design retrievalbased MMT methods that retrieve images with similar topics for image-free sentences. Alternatively, Elliott and Kádár (2017) proposed visual "imagination" by sharing visual and textual encoders.
Recently, Wu et al. (2021) and Li et al. (2021)
have questioned whether the most common benchmark Multi30K (Elliott et al., 2016) is suited for multimodal translation since they found images contribute little to translation. Song et al. (2021)
have contributed a new dataset of the e-commercial product domain. However, we find their datasets still have similar drawbacks.
Several relevant studies about translation and multimodality are noteworthy. Huang et al. (2020)
used visual content as a pivot to improve unsupervised MT. Wang et al. (2022b) proposed a pretraining model by using modality embedding as prefix for weak supervision tasks. Li et al. (2022c)
introduced the VALHALLA, which translates under guidance of hallucinated visual representation.
## 3 Approach
For the fully supervised condition in MMT, we have triplet {(*x, y, i*)}, where x is the source text, y is the target text, and i is the associated image.
Since the triplet is rare, we attempt to utilize partially parallel data like {(*y, i*)} and {(*x, y*)}, which are referred as monolingual image-text data and parallel text data in this paper.
In this section, we propose a new training framework 2/3-Triplet with two approaches to utilize triple and non-triple data at the same time. We name these two approaches as FUSION-**BASED**
and PROMPT-**BASED**, as shown in Figure 2.
For each approach, the model can conduct a mix training with three kinds of data: ((*x, i*) → y),
((x) → y), and ((y∗, i) → y), where y∗indicates the masked target text.
FUSION-BASED approach resembles the conventional models where the encoded vision information is taken as model input and the model is trained in end2end manners, and our design makes it possible to utilize bilingual corpus and image-text pairs
![2_image_0.png](2_image_0.png)
other than multilingual triplets.
PROMPT-BASED approach is inspired by the recent NLP research based on prompts (Gao et al.,
2021; Li and Liang, 2021; Wang et al., 2022a; Sun et al., 2022), where we directly use the image caption as a prompt to enhance the translation model without any modification to the model.
## 3.1 Fusion-**Based**
The common practice to utilize image information is to extract vision features and use them as inputs of the multimodal MT systems. Typically, it's common to cooperate vision and textual features to get a multimodal fused representation, where the textual features are the output state from the Transformer encoder and the vision feature is extracted via a pre-trained vision model.
We incorporate textual embedding and image features by simple concatenation:
$$H^{\mathrm{fused}}=[H^{\mathrm{text}};\mathbf{h}^{\mathrm{img}}]$$
img] (1)
where Htext is the encoded textual features of Transformer encoder, and h img is the visual representation of [CLS] token broadcated to the length of the text sequence.
Then, we employ a gate matrix Λ to regulate the blend of visual and textual information.
$$\Lambda=\operatorname{tanh}(f([H^{\mathrm{text}};H^{\mathrm{fused}}]))$$
$$\left(2\right)$$
Finally, we add the gated fused information to the origin textual feature to get the final multimodal fused representation:
$$H^{\mathrm{out}}=H^{\mathrm{text}}+1(\mathrm{img})\Lambda H^{\mathrm{fused}}\qquad(3)$$
✶(img) indicates whether the image exists. The value is set to zero when image is absent.
It is worth noting that in Eq.2, we employ the hyperbolic tangent (tanh) gate instead of the traditional sigmoid gate (Wu et al., 2021; Li et al.,
2022a) in the multimodal translation scenario. The new choice has two major advantages: (a) The output of the tanh can take on both positive and negative values, thereby enabling model to modulate the fused features Hfused in accordance with the text Htext; (b) The tanh function is centered at zero, thus, when the fused feature is close to zero, the output of the gate is also minimal, which aligns with the scenario where the image is absent naturally (*i.e.* tanh(0) = ✶(no img) = 0).
The next paragraphs illustrate how to utilize three types of data respectively.
$\eqref{eq:walpha}$
Using Triple Data ((*x, i*) → y) Figure 2a 1 :
Based on the basic architecture, we take in the source text for the text encoder and the image for the image encoder. By setting ✶(img) = 1, we naturally leverage vision context for translation.
The inference procedure also follows this flow.
Using Parallel Text ((x) → y) Figure 2a 2 :
We utilize the same architecture as the triple data setting. By setting ✶(img) = 0, we can adapt to the text-only condition. For the image-free bilingual data, the fused term is absent, and the final representation Hout is reduced to textual only, consistent with the learning on unimodal corpus.
Using Monolingual Caption ((y∗, i) → y) Figure 2a 3 : Inspired by Siddhant et al. (2020)'s strategy on leveraging monolingual data for translation, we adapt the mask de-noising task for utilizing monolingual image-text pairs. In a nutshell, we randomly mask some tokens in the text, and force the model to predict the complete caption text based on the masked text and image as input.
## 3.2 Prompt-**Based**
As prompt-based methods have made great success in NLP tasks(Gao et al., 2021; Li et al., 2022b; Wang et al., 2022a; Sun et al., 2022), we also consider whether the image information can be converted to some prompt signals for guiding sentence generation.
The general idea is quite straight: our translation system accepts a sentence of source language along with some keywords of target language, and translates the source sentence into the target language under the instruction of the target keywords. The keywords can be any description of the image that can help disambiguate the translation.
Using Triple Data ((*x, i*) → y) Figure 2b 1 :
First, we generate the prompt from the image with a pre-trained caption model (we will introduce the caption model later). The source sentence is concatenated with the The original source sentence and the prompt are concatenated together to compose the training sources, with a special token [SEP]
as a separator between the two.
Using Parallel Text ((x) → y) Figure 2b 2 :
Since PROMPT-BASED approach adopts a standard Transformer and involves no modification on architecture, it is natural to train on unimodal parallel corpus. We use the parallel data to strengthen the ability to take advantage of the prompt. Without any image, we randomly select several words from the target sentence as the pseudo vision prompt. For translation training, we append the keyword prompt to the end of the original sentence and use a special token as a separator (Li et al., 2022b).
After inference, we extract the translation result by splitting the separator token.
Using Monolingual Caption ((y∗, i) → y) Figure 2b 3 : Like FUSION-BASED approach, we use the de-noising auto-encoder task. By randomly masking some tokens and combining the caption result from the image as the prompt, we make the model learn to predict the original target text.
Training Caption Model ((i) → **keywords**(y))
Meanwhile, we train an caption model to generate the guiding prompt from images for translation, We formulate image-text pairs from both triple data and target-side monolingual caption. The input and output of the model are the image and extracted keywords of the corresponding target sentence.
## 3.3 Comparison And Combination Of Fusion-Based And Prompt-**Based**
Under the same training framework 2/3-Triplet, we propose two approaches, FUSION-BASED and PROMPT-BASED, for utilizing non-triple data. The FUSION-BASED approach preserves the complete visual context, providing more information via model fusion. In contrast, the PROMPT-BASED
approach has the advantage of not requiring any modifications to the model architecture. Instead, all visual information is introduced by the prompt model, making deployment more straightforward.
The two methods, FUSION-BASED and PROMPT-BASED, are not mutually exclusive, and we can jointly utilize them. Specifically, the model simultaneously utilizes the fused feature in Eq. 3 as an encoder representation and the promptedconcatenated source as text input. The combination enables the model to benefit from our framework in the most comprehensive way, and as a result, the performance gains significant improvements.
## 4 Dataset
As mentioned before, in previous test sets, many sentences can be easily translated without the image context, for all information is conveyed in the text and has no ambiguity. To deeply evaluate visual information usage, we propose a multimodalspecific dataset.
We collect the data based on real-world ecommercial data crawled from TikTok Shop and Shoppee. We crawled the product image and title on two websites, where the title may be in English or Chinese. We filter out redundant, duplicate samples and those with serious syntax errors. Based on this, we conduct manual annotations. We hired a team of 20 professional translators. All translators are native Chinese, majoring in English. In addition, another translator independently samples the annotated corpus for quality control. We let the annotators select some samples specifically for the test set, which they found difficult to translate or had some confusion without images. The total number of triples annotated is 22, 500 of which are carefully selected samples as testset. We also randomly selected 500 samples as devsets among the full-set while the remaining as training set.
Besides the annotated triplets, we clean the rest of the crawled data and open sourced it as the monolingual caption part of the data. Since our approach features in utilizing bilingual data to enhance multimodal translation, we sample 750K CCAlign (ElKishky et al., 2020) English-Chinese as a bilingual parallel text. The selection is motivated by the corpus's properties of its diversity in sources and domains, and it is more relevance to real-world compared to other corpus. The sampled data scale is decided based on both the model architecture and the principles of the neural scaling law (Kaplan et al., 2020; Gordon et al., 2021). We also encourage future researchers to explore the use of additional non-triple data to further enhance performance, as detailed in the appendix. We summarize the dataset statistics in Table 1. We discuss ethic and copyright issue of the data in the appendix.
## 5 Experiments 5.1 Datasets
We conduct experiments on three benchmark datasets: Multi30K (Elliott et al., 2016), FashionMMT (Clean) (Song et al., 2021), and our EMMT.
Multi30K is the most common benchmark on MMT tasks, annotated from *Flickr*, where we focus on English-German translation. To validate the effectiveness of parallel text, we add 1M EnglishGerman from CCAlign and COCO (Lin et al., 2014; Biswas et al., 2021). **Fashion-MMT** is built on fashion captions of FACAD (Yang et al., 2020).
## 5.2 Baselines
We compare our proposed 2/3-Triplet with the following SOTA MT and MMT systems:
Transformer (Vaswani et al., 2017) is the current de facto standard for text-based MT.
![4_image_0.png](4_image_0.png)
UPOC2(Song et al., 2021) introduced cross-modal pre-training tasks for multimodal translation.
Selective-Attention (SA) (Li et al., 2022a) investigated strong vision models and enhanced features can enhance multimodal translation with simple attention mechanism.
UVR-NMT (Zhang et al., 2020) retrieves related images from caption corpus as the pseudo image for sentences.
Phrase Retrieval (Fang and Feng, 2022) is an improved version of retrieval-based MMT model that retrieve images in phrase-level.
In addition, we report the results of Google Translate, which helps to check whether the translation of the test set actually requires images. All baselines reported use the same number of layers, hidden units and vocabulary as 2/3-Triplet for fair comparison.
We mainly refer to BLEU (Papineni et al., 2002)
as the major metric since it is the most commonly used evaluation standard in various previous multimodal MT studies.
## 5.3 Setups
To compare with previous SOTAs, we use different model scales on Multi30K and the other two datasets. We follow Li et al. (2022a)'s and Li et al.
(2021)'s setting on Multi30K, where the model has 4 encoder layers, 4 decoder layers, 4 attention heads, hidden size and filter size is 128 and 256, respectively. On the other two datasets, we set the model has 6 encoder layers, 6 decoder layers, 8 attention heads, hidden size and filter size is 256 and 512, respectively (*i.e.* Transformer-base setting). We apply BPE (Sennrich et al., 2016) on tokenized English and Chinese sentences jointly to get vocabularies with 11k merge operations. We use Zeng et al. (2022)'s method to get the caption model. The vocabularies, tokenized sentence and caption models will be released for reproduction.
Codes are based on Fairseq (Ott et al., 2019).
When training models on various domains (+PT
and +MC in Tab. 2), we upsample small-scale data
(*i.e.* E-commercial Triplet) because of the massive
| ID | Test set | EMMT | Multi30k-Test16 | Multi30k-Test17 | | | | |
|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------|--------|-------------------|-------------------|-------|--------------|-------|-------|
| Training Data | Triplet Only | +PT | + PT + MC | Triplet Only | +PT | Triplet Only | +PT | |
| 1 | Plain Transformer♥ | 39.07 | 40.66 | 42.71 | 39.97 | 44.13 | 31.87 | 40.46 |
| 2 | Selective Attention♠ | 41.27 | / | / | 40.63 | / | 33.80 | / |
| 3 | UPOC2♦ | 40.60 | / | 44.81 | 40.8 | / | 34.1 | / |
| 4 | UVR-NMT♣ | 37.82 | 41.13 | / | 38.19 | / | 31.85 | / |
| 5 | Phrase Retrieval♣ | / | / | / | 40.30 | / | 33.45 | / |
| 6 | FUSION-BASED | 41.74 | 44.22 | 45.93 | 40.95 | / | 34.03 | / |
| 7 | PROMPT-BASED | 41.70 | 43.35 | 46.28 | 40.17 | / | 33.87 | / |
| 8 | FUSION+PROMPT | 42.03 | 45.20 | 46.55 | 40.48 | 44.60 | 34.62 | 40.07 |
| Google Translate | 44.27 | 41.9 | 42.0 | | | | | |
| ♥ We also train plain Transformer on monolingual captions via Siddhant et al. (2020)'s method for fair comparison on textual data. ♠ We use their open source code to reproduce Multi30K's results. ♦ Multi30K's results copy from Song et al. (2021). We add all MC and PT data for its pre-training in +PT+MC column for fair comparison on data. The complete UPOC2 also utilize product attributes besides images, which is removed from our replication. | | | | | | | | |
disparity of data scale in different domains. We follow Wang and Neubig (2019)'s and Arivazhagan et al. (2019)'s temperature based data sampling strategy and set the sampling temperature at 5. We empirically find that the model gains by simply randomly dropping some images during the training, where we set the drop ratio at 0.3 . Interestingly, such the method is also observed in other multimodal research topics (Abdelaziz et al., 2020; Alfasly et al., 2022). We evaluate the performance with tokenized BLEU (Papineni et al., 2002).
## 5.4 Main Results
We list the main results in Tab. 2. We get three major findings throughout the results:
1. In traditional multimodal MT settings (*i.e.*
Triplet only and Multi30K), whose training and inference are on triple data, 2/3-Triplet rivals or even surpasses the previous SOTAs.
2. Parallel text and monolingual captions significantly boost the performance of multimodal translation models. With these additional data, even the plain Transformer model outperforms SOTA multimodal baselines. Given the scarcity of multimodal data, we argue that the use of extra data, especially the parallel text, is more crucial for multimodal translation than the use of multimodal information.
3. FUSION and PROMPT generally achieve the best performance when used together. This suggests two approaches are complementary.
We also list results on Multi30k for dataset comparison. Google Translate achieves the best results, while all other models are close in performance with no statistical significant improvement. It indicates that images in Multi30K are less essential and a strong text translation model is sufficient to handle the majority of cases. Moreover, we find that by incorporating non-imaged parallel text, the model's performance improves significantly, while narrows the gap between plain transformer models MMT ones. Hence, the parallel text rather than images may be more essential for improving performance on the Multi30k. In contrast, 2/3-Triplet surpass Google's on EMMT with visual infomation, providing evidence that ours serves as a suitable benchmark.
We also report the results of 2/3-Triplet and baselines on FashionMMT in Appendix along with BLEURT (Sellam et al., 2020) and word accuracy as supplementary metrics. The results show that 2/3-Triplet also rivals the SOTA MMT systems on various benchmarks and metrics.
## 5.5 Performance On Triplet-Unavailable Setting
In more scenarios, annotated triple data is rather scarce or even unavailable, *i.e.* only bilingual translation or monolingual image caption is available in the training data, while we wish the model can still translate sentences in multimodal manners.
Since our proposed 2/3-Triplet utilize not only triplets, we examine whether our model can conduct inference on multimodal triple testset while only trained on the non-triple data, as triplet might be unavailable in real scenarios. In this experiment, we discard all images of EMMT's triples during the training stage, while the trained model is still evaluated on the multimodal test set. We compare the triplet-unavailable results to triplet only and full data training set settings in Figure 3 We can see that 2/3-Triplet still preserves a relatively high performance and even sharply beats the triplet-only setting. This fully illustrates that involving parallel text and monolingual caption is extremely important for MMT.
![6_image_1.png](6_image_1.png)
## 6 Discussion
As plenty of previous studies have discussed, the current multimodal MT benchmarks are biased, hence the quality gains of previous work might not actually derive from image information, but from a better training schema or regularization effect (Dodge et al., 2019; Hessel and Lee, 2020).
This section gives a comprehensive analysis and sanity check on our proposed 2/3-Triplet and EMMT: we carefully examine whether and how our model utilize images, and whether the testset of EMMT has sufficient reliability.
## 6.1 Visual Ablation Study: Images Matter
We first conduct ablation studies on images to determine how multimodal information contributes to the performance. Most studies used **adversarial**
input (*e.g.* shuffled images) to inspect importance of visual information. However, effects of adversarial input might be opaque (Li et al., 2021). Hence, we also introduce **absent** input to examine whether 2/3-Triplet can handle source sentences without image by simply zeroing the image feature for FUSION or striping the prompt for PROMPT.
We list the results of a vision ablation study of both adversarial and absent respectively in Figure 4,
![6_image_0.png](6_image_0.png)
where we select FUSION-BASED and PROMPT-BASED approaches trained with full data(last columns in Table 2) for comparison.
![6_image_2.png](6_image_2.png)
In the absent setting, both the FUSION and PROMPT degrade to the baseline, confirming the reliance of 2/3-Triplet on image information.
In the adversarial setting, the PROMPT performs worse than the baseline, which is in line with the expectation that incorrect visual contexts lead to poor results. However, while the FUSION also exhibits a decline in performance, it still surpasses the baseline. This aligns with the observations made by Elliott (2018); Wu et al. (2021) that the visual signal not only provides multimodal information, but also acts as a regularization term. We will further discuss this issue in Section 7.
## 6.2 How Visual Modality Works
We further investigate how the visual signal influence the model.
FUSION-**BASED** We verify how much influence the visual signal imposes upon the model. Inspired by Wu et al. (2021), we quantify the modality contribution via the L2-norm ratio (ΛHfused for vision over Htext for text, in Eq. 3). We visualize the whole training process along with BLEU as a reference in Figure 5. Wu et al. (2021) criticize that previous studies do not utilize visual signal, for the
| source: | ready stock , cheese grains , pets only |
|----------------------|----------------------------------------------------------------------------------------|
| human: | 现货 商品 奶酪 粒 (宠物 专用) |
| Plain: | 现货 起司 谷物 宠物 专用 ready stock cheese cereal grains pet only |
| Ours (Triplet-only): | 现货 奶酪 颗粒 宠物 仅限 ready stock cheese granular pets only for |
| Ours (All-data): | 现货 奶酪 粒 宠物 专用 ready stock cheese grains for pets only |
| source: | ready stock kids medical surgical face mask 3-ply 20pcs |
| human: | 现货 儿童 医疗 手术 口罩 3 层 20 个 |
| Plain: | 现货 儿童 医用 面具 3-ply 20pcs ready stock kids medical (opera) mask 3-ply 20pcs |
| Ours (Triplet-only): | 现货 儿童 医用 口罩 3ply 20pcs ready stock kids medical mask 3-ply 20pcs |
| Ours (All-data): | 现货 儿童 医用 外科 口罩 3 层 20 片 ready stock kids medical surgical mask 3-ply 20pcs |
![7_image_0.png](7_image_0.png)
![7_image_1.png](7_image_1.png)
final ratio converge to zero. Our method shows a different characteristic: as the BLEU becomes stable, the ratio of visual signal and textual signal still remains at around 0.15, showing the effectiveness of the visual modality.
PROMPT-**BASED** We also look into the influence caused by the prompts. We sample an ambiguous sentence: "*chengwei kf94 fish mouth medical mask,*
10 pieces of one box". The keyword "mask" can be translated into "口罩" ("*face mask*" in English) or
"面膜" ("*facial mask*" in English) without any context. We visualize the attention distribution when our PROMPT-BASED model is translating "mask" in Figure 6. We can see that the a high attention is allocated to the caption prompt. Therefore, our method correctly translates the word. We also visualize the detailed attention heatmaps for source, prompts and generated sentences in Appendix.
## 6.3 Qualitative Case Study
We also compare several cases from EMMT testsets to discuss how multimodal information and external bilingual data help the translation performance. Meanwhile, we regard the case study as a spot check for the multimodal translation testset itself. We here choose plain Transformer, our methods trained on triplet only and all data, as well as human reference for comparison.
Table 3 presents the qualitative cases and major conclusions are as follows: 1) Visual information plays a vital role in disambiguating polysemous words or vague descriptions. 2) Non-triple data improves translation accuracy, particularly in translating jargons and enhancing fluency in the general lack of multimodal data. 3) Our test set is representative in real-world seniors as it includes product titles that are confusing and require image, in contrast to previous case studies on Multi30k where researchers artificially mask key words (Caglayan et al., 2019; Wu et al., 2021; Wang and Xiong, 2021; Li et al., 2022a).
## 7 Conclusion
This paper devises a new framework 2/3-Triplet for multimodal machine translation and introduces two approaches to utilize image information. The new methods are effective and highly interpretable. Considering the fact that current multimodal benchmarks are limited and biased, we introduce a new dataset EMMT of the e-commercial domain. To better validate the multimodal translation systems, the testset is carefully selected as the image are crucial for translation accuracy. Experimental results and comprehensive analysis show that 2/3-Triplet makes a strong baseline and EMMT can be a promising benchmark for further research.
## Limitation
First, there are studies (Wu et al., 2021) claiming visual information only serves as regularization. In our ablation study, we find the adversarial setting of FUSION-BASED approach outperforms the plain Transformer. Combined with observations from previous studies, we suggest that fusion-based architectures may apply some images information as regularization terms, yet the further quantitative analysis is needed to confirm this phenomenon.
Second, though our testset is carefully selected to ensure the textual ambiguity without image data, we encounter difficulties in designing a suitable metric for quantifying the degree to which the models are able to resolve the ambiguity. Specifically, we find that conventional metrics, such as wordlevel entity translation accuracy, exhibit significant fluctuations and do not effectively quantify the extent to which the model effectively resolves ambiguity. We discuss this metric in more details in the Appendix, and offer a glossary of ambiguous words used in the test set. We acknowledge that the evaluation of multimodal ambiguity remains an open problem and an area for future research.
In addition, there are some details regarding the dataset that we need to clarify: the dataset is collected after COVID-19, so some commodities will be associated with the pandemic. We collect data by category in order to cover various products to reduce the impact of the epidemic on product types.
## References
Ahmed Hussen Abdelaziz, Barry-John Theobald, Paul Dixon, Reinhard Knothe, Nicholas Apostoloff, and Sachin Kajareker. 2020. Modality dropout for improved performance-driven talking faces. In ICMI
'20: International Conference on Multimodal Interaction, Virtual Event, The Netherlands, October 25-29, 2020, pages 378–386. ACM.
Haithem Afli, Loïc Barrault, and Holger Schwenk. 2016.
Building and using multimodal comparable corpora
for machine translation. *Nat. Lang. Eng.*, 22(4):603– 625.
Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019.
Massively multilingual neural machine translation.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 3874–
3884. Association for Computational Linguistics.
Saghir Alfasly, Jian Lu, Chen Xu, and Yuru Zou. 2022.
Learnable irrelevant modality dropout for multimodal action recognition on modality-specific annotated videos. In *IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022*, pages 20176–
20185. IEEE.
Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George F. Foster, Colin Cherry, Wolfgang Macherey, Zhifeng Chen, and Yonghui Wu. 2019. Massively multilingual neural machine translation in the wild: Findings and challenges. *CoRR*, abs/1907.05019.
Rajarshi Biswas, Michael Barz, Mareike Hartmann, and Daniel Sonntag. 2021. Improving german image captions using machine translation and transfer learning.
In *Statistical Language and Speech Processing - 9th* International Conference, SLSP 2021, Cardiff, UK,
November 23-25, 2021, Proceedings, volume 13062 of *Lecture Notes in Computer Science*, pages 3–14.
Springer.
Ozan Caglayan, Menekse Kuyu, Mustafa Sercan Amac, Pranava Madhyastha, Erkut Erdem, Aykut Erdem, and Lucia Specia. 2021. Cross-lingual visual pretraining for multimodal machine translation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19
- 23, 2021, pages 1317–1324. Association for Computational Linguistics.
Ozan Caglayan, Pranava Madhyastha, Lucia Specia, and Loïc Barrault. 2019. Probing the need for visual context in multimodal machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4159–
4170. Association for Computational Linguistics.
Iacer Calixto, Qun Liu, and Nick Campbell. 2017.
Doubly-attentive decoder for multi-modal neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 -
August 4, Volume 1: Long Papers, pages 1913–1924.
Association for Computational Linguistics.
Jesse Dodge, Suchin Gururangan, Dallas Card, Roy Schwartz, and Noah A. Smith. 2019. Show your work: Improved reporting of experimental results.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 2185–2194.
Association for Computational Linguistics.
Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker's guide to testing statistical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL
2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 1383–1392. Association for Computational Linguistics.
Ahmed El-Kishky, Vishrav Chaudhary, Francisco Guzmán, and Philipp Koehn. 2020. Ccaligned: A
massive collection of cross-lingual web-document pairs. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 5960–5969. Association for Computational Linguistics.
Desmond Elliott. 2018. Adversarial evaluation of multimodal machine translation. In *Proceedings of the* 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31
- November 4, 2018, pages 2974–2978. Association for Computational Linguistics.
Desmond Elliott, Stella Frank, Khalil Sima'an, and Lucia Specia. 2016. Multi30k: Multilingual englishgerman image descriptions. In *Proceedings of the* 5th Workshop on Vision and Language, hosted by the 54th Annual Meeting of the Association for Computational Linguistics, VL@ACL 2016, August 12, Berlin, Germany. The Association for Computer Linguistics.
Desmond Elliott and Ákos Kádár. 2017. Imagination improves multimodal translation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing, IJCNLP 2017, Taipei, Taiwan, November 27 - December 1, 2017 - Volume 1: Long Papers, pages 130–141. Asian Federation of Natural Language Processing.
Qingkai Fang and Yang Feng. 2022. Neural machine translation with phrase-level universal visual representations. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics
(Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 5687–5698. Association for Computational Linguistics.
Tianyu Gao, Adam Fisch, and Danqi Chen. 2021.
Making pre-trained language models better few-shot learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1:
Long Papers), Virtual Event, August 1-6, 2021, pages 3816–3830. Association for Computational Linguistics.
Mitchell A. Gordon, Kevin Duh, and Jared Kaplan.
2021. Data and parameter scaling laws for neural machine translation. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana,*
Dominican Republic, 7-11 November, 2021, pages 5915–5922. Association for Computational Linguistics.
Jack Hessel and Lillian Lee. 2020. Does my multimodal model learn cross-modal interactions? it's harder to tell than you might think! In *Proceedings of the 2020* Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 861–877. Association for Computational Linguistics.
Julian Hitschler, Shigehiko Schamoni, and Stefan Riezler. 2016. Multimodal pivots for image caption translation. In *Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics,*
ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. The Association for Computer Linguistics.
Po-Yao Huang, Junjie Hu, Xiaojun Chang, and Alexander G. Hauptmann. 2020. Unsupervised multimodal neural machine translation with pseudo visual pivoting. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL
2020, Online, July 5-10, 2020, pages 8226–8237.
Association for Computational Linguistics.
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. In *Proceedings of the 38th International Conference on Machine Learning, ICML*
2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 4904–4916. PMLR.
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B.
Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020.
Scaling laws for neural language models. *CoRR*,
abs/2001.08361.
Bei Li, Chuanhao Lv, Zefan Zhou, Tao Zhou, Tong Xiao, Anxiang Ma, and Jingbo Zhu. 2022a. On vision features in multimodal machine translation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 6327–6337. Association for Computational Linguistics.
Jiaoda Li, Duygu Ataman, and Rico Sennrich. 2021.
Vision matters when it should: Sanity checking multimodal machine translation models. In *Proceedings* of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 8556–8562. Association for Computational Linguistics.
Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning:
Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 4582–
4597. Association for Computational Linguistics.
Yafu Li, Yongjing Yin, Jing Li, and Yue Zhang. 2022b.
Prompt-driven neural machine translation. In Findings of the Association for Computational Linguistics:
ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 2579–2590. Association for Computational Linguistics.
Yi Li, Rameswar Panda, Yoon Kim, Chun-Fu Richard Chen, Rogério Feris, David D. Cox, and Nuno Vasconcelos. 2022c. VALHALLA: visual hallucination for machine translation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR
2022, New Orleans, LA, USA, June 18-24, 2022, pages 5206–5216. IEEE.
Jindrich Libovický and Jindrich Helcl. 2017. Attention strategies for multi-source sequence-to-sequence learning. In *Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics,*
ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 2: Short Papers, pages 196–202. Association for Computational Linguistics.
Huan Lin, Fandong Meng, Jinsong Su, Yongjing Yin, Zhengyuan Yang, Yubin Ge, Jie Zhou, and Jiebo Luo.
2020. Dynamic context-guided capsule network for multimodal machine translation. In *MM '20: The* 28th ACM International Conference on Multimedia, Virtual Event / Seattle, WA, USA, October 12-16, 2020, pages 1320–1329. ACM.
Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. 2014. Microsoft COCO:
common objects in context. In Computer Vision -
ECCV 2014 - 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V, volume 8693 of *Lecture Notes in Computer* Science, pages 740–755. Springer.
Benjamin Marie, Atsushi Fujita, and Raphael Rubino.
2021. Scientific credibility of machine translation research: A meta-evaluation of 769 papers. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th* International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 7297– 7306. Association for Computational Linguistics.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Demonstrations*, pages 48–53. Association for Computational Linguistics.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, July 6-12, 2002, Philadelphia, PA, USA, pages 311–318. ACL.
Thibault Sellam, Dipanjan Das, and Ankur P. Parikh.
2020. BLEURT: learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7881–7892.
Association for Computational Linguistics.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016. Neural machine translation of rare words with subword units. In *Proceedings of the 54th Annual* Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. The Association for Computer Linguistics.
Aditya Siddhant, Ankur Bapna, Yuan Cao, Orhan Firat, Mia Xu Chen, Sneha Reddy Kudugunta, Naveen Arivazhagan, and Yonghui Wu. 2020. Leveraging monolingual data with self-supervision for multilingual neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 2827–2835. Association for Computational Linguistics.
Yuqing Song, Shizhe Chen, Qin Jin, Wei Luo, Jun Xie, and Fei Huang. 2021. Product-oriented machine translation with cross-modal cross-lingual pretraining. In *MM '21: ACM Multimedia Conference,*
Virtual Event, China, October 20 - 24, 2021, pages 2843–2852. ACM.
Zewei Sun, Qingnan Jiang, Shujian Huang, Jun Cao, Shanbo Cheng, and Mingxuan Wang. 2022. Zeroshot domain adaptation for neural machine translation with retrieved phrase-level prompts. *CoRR*,
abs/2209.11409.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008.
Dexin Wang and Deyi Xiong. 2021. Efficient objectlevel visual context modeling for multimodal machine translation: Masking irrelevant objects helps
grounding. In *Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on* Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 2720–
2728. AAAI Press.
Xinyi Wang and Graham Neubig. 2019. Target conditioned sampling: Optimizing data selection for multilingual neural machine translation. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 5823–5828. Association for Computational Linguistics.
Yifan Wang, Zewei Sun, Shanbo Cheng, Weiguo Zheng, and Mingxuan Wang. 2022a. Controlling styles in neural machine translation with activation prompt.
CoRR, abs/2212.08909.
Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, and Yuan Cao. 2022b. Simvlm: Simple visual language model pretraining with weak supervision. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net.
Zhiyong Wu, Lingpeng Kong, Wei Bi, Xiang Li, and Ben Kao. 2021. Good for misconceived reasons: An empirical revisiting on the need for visual context in multimodal machine translation. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 6153–6166. Association for Computational Linguistics.
Xuewen Yang, Heming Zhang, Di Jin, Yingru Liu, ChiHao Wu, Jianchao Tan, Dongliang Xie, Jue Wang, and Xin Wang. 2020. Fashion captioning: Towards generating accurate descriptions with semantic rewards. In *Computer Vision - ECCV 2020 - 16th European Conference, Glasgow, UK, August 23-28, 2020,*
Proceedings, Part XIII, volume 12358 of *Lecture* Notes in Computer Science, pages 1–17. Springer.
Shaowei Yao and Xiaojun Wan. 2020. Multimodal transformer for multimodal machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 4346–4350. Association for Computational Linguistics.
Yongjing Yin, Fandong Meng, Jinsong Su, Chulun Zhou, Zhengyuan Yang, Jie Zhou, and Jiebo Luo.
2020. A novel graph-based multi-modal fusion encoder for neural machine translation. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 3025–3035. Association for Computational Linguistics.
Yan Zeng, Xinsong Zhang, and Hang Li. 2022. Multigrained vision language pre-training: Aligning texts with visual concepts. In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 25994–26009. PMLR.
Zhuosheng Zhang, Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, Zuchao Li, and Hai Zhao.
2020. Neural machine translation with universal visual representation. In *8th International Conference on Learning Representations, ICLR 2020, Addis* Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
## Appendix 7.1 Ethic Consideration About Data Annotators
We hire 20 professional translators in a crowdsource platform and pay them according to the market wage and work within 8 hours a day. All translators are native Chinese and have graduated with an English major. The ethics review is done while in data acceptance stage.
## 7.2 Data Copyright
In our study, we present a new dataset of public e-commercial products from Shoppee and TikTok Shop. To address copyright concerns, we provide a detailed description of how we collect the data and ensure that our usage complies with all relevant policies and guidelines.
For the Shoppee dataset, we obtain the data from their Open Platform API2. We carefully review their Data Protection Policy 3and Privacy Policy guidelines 4, which provide clear instructions for using data through the Shopee Open Platform. We strictly follow their requirements and limitations, ensuring that we did not access any personal data and that we only use open information provided by the API. We also adhere to their robot guidelines 5, avoiding full-site scraping.
For the TikTok Shop dataset, we access the data using robots, as scraping is allowed according to their robots.txt file 6. We also review TikTok Shop Privacy Policy and TikTok for Business Privacy Policy 7to ensure that we only collect data from merchants under their policy.
It is important to note that all data we publish is publicly available on the Internet and only pertains to public e-commercial products. We do not access or publish any user information, and we take all necessary steps to respect the intellectual property and privacy rights of the original authors and corresponding websites. If any authors or publishers express a desire for their documents not to be included in our dataset, we will promptly remove that portion from the dataset. Additionally, we certify that our use of any part of the datasets is limited to non-infringing or fair use under copyright law. Finally, we affirm that we will never violate anyone's rights of privacy, act in any way that might give rise to civil or criminal liability, collect or store personal data about any author, infringe any copyright, trademark, patent, or other proprietary rights of any person.
## 7.3 Results On Fashion-Mmt
We list the testset performance on Fashion-MMT
in Table 4.
| FashionMMT(C) | Triplet Only | + Parallel Text | |
|-------------------|----------------|-------------------|-------|
| Transformer | 40.12 | / | |
| UPOC2 | MTLM+ISM | 41.38 | / |
| MTLM+ISM+ATTP | 41.93 | / | |
| Ours | FUSION | 41.19 | 42.38 |
| PROMPT | 40.97 | 42.02 | |
| FUSION+PROMPT | 41.38 | 42.33 | |
Table 4: Results on Fashion-MMT(C) testset.
Fashion-MMT is divided into two subset according to the source of the Chinese translation: "Large" subset for the machine-translated part and "Clean" subset for the manually annotated part. As its authors also found the Large subset is noisier and different from the human annotated data, our experiments focused on the Clean subset with FashionMMT(*i.e.* Fashion-MMT(c)).
We compare the model performance on training on Triplet Only and adding Parallel Text settings.
As the original dataset does not provide a parallel corpus without pictures, we used Parallel Text from EMMT for our experiments.
Note that the UPOC2 model relies on three submethods, namely MTLM, ISM, and ATTP. The ATTP requires the use of commodity attributes, whereas our model does not use such information.
Hence, we also list results of UPOC2 without ATTP
in the table.
The results show that our model rivals UPOC2 on triplet only settings. And by using parallel text, ours gain further improvement, even if the parallel text does not match the domain of the original data.
The results demonstrate the potential of our training strategy over multiple domains.
## 7.4 Evaluation With Various Metrics
Recent studies have indicated that the sole reliance on BLEU as an evaluation metric may be biased (Marie et al., 2021). We hence evaluate models with machine learning-based metric BLEURT (Sellam et al., 2020) and list the results
| ID | Metric | BLEURT | Accuracy | | | | |
|------------------|---------------------|----------|------------|--------------|-------|-----------|-------|
| Training Data | Triplet Only | + PT | + PT + MC | Triplet Only | + PT | + PT + MC | |
| 1 | Plain Transformer | 0.5424 | 0.5559 | 0.5662 | 0.765 | 0.754 | 0.761 |
| 2 | Selective Attention | 0.5619 | / | / | 0.782 | / | / |
| 3 | UPOC2 | 0.4855 | / | 0.5788 | 0.792 | / | 0.798 |
| 4 | UVR-NMT | 0.5299 | 0.5866 | / | 0.795 | 0.791 | / |
| 5 | Phrase Retrieval | / | / | / | / | / | / |
| 6 | FUSION-BASED | 0.5760 | 0.5782 | 0.5923 | 0.771 | 0.778 | 0.812 |
| 7 | PROMPT-BASED | 0.5600 | 0.5772 | 0.5980 | 0.792 | 0.775 | 0.792 |
| 8 | FUSION+PROMPT | 0.5647 | 0.5917 | 0.6018 | 0.809 | 0.791 | 0.799 |
| Google Translate | 0.6108 | 0.741 | | | | | |
## In Table 5
8.
Previous multimodal works often set entity nouns in the original sentence into [mask] to quantify model's ability for translating masked items with images (Wang and Xiong, 2021; Li et al.,
2022a; Fang and Feng, 2022). While the experiment can measure the effectiveness of multimodal information, text with [mask] is not natural and the setting makes less sense in the real world. Inspired by their settings, we have developed a set of commonly used English-Chinese translation ambiguities by mining frequently used product entity and manual annotating. We have defined an word-level accuracy metric based on those potential ambiguous words in Table 7: if a certain English word appears in the original sentence, we require that the model's translation result in the target language must be consistent with the human reference's corresponding entity translation in order to be considered a correct translation, and thus calculate the word-level accuracy.
The results of BLEURT generally align with BLEU, indicating the effectiveness of 2/3-Triplet. However, an exception occurs in the Google Translate system, whose score are highest among all systems. We attribute this deviation to the use of back-translated pseudo corpus in the pre-training of the BLEURT model.
Multimodal models consistently perform better than plain transformer models in word-level accuarcy. Additionally, Google Translate obtains the lowest scores in word-level accuracy, indicating that BLEURT may not distinguish ambiguous words in multimodal scenarios. However, the difference between multimodal ones is not significant.
We attribute it to the difficulty in quantifying the semantic differences between synonyms, as we will demonstrate in our case study details. Furthermore, given the significant human effort required for mining and annotating ambiguous word list while it is highly domain-specifc to the test set, we suggest that the development of new metrics for evaluating multimodal translation ambiguity shall be a valuable topic of future research.
## 7.5 Translation Details Of Case Study
Here we give some detailed explanations about the translation of case study translations:
In the first case, the Plain Transformer fail to recognize whether the word "grains" means cereal crop (谷物) or the cheese of grain sizes(奶酪 粒). Triplet-Only 2/3-Triplet translate "grains" into 颗粒, which is acceptable, but the word not commonly used to describe food in Chinese, yet the model does not translate "only" grammatically properly.
In the second case, Plain Transformer translates
"mask" to 面具, which is more commonly used to refer opera mask in Chinese. Both Plain Transformer and Triplet-Only 2/3-Triplet fail to understand "pcs"(件、个、片) and "ply"(层), and directly copy them to targets. The two methods also fail to translate "surgical"(手术、外科) correctly as it is a rare word in Triplet only settings.
In comparison, the translation of 2/3-Triplet is more consistent with the images, and more appropriate in terms of grammar and wording.
![14_image_0.png](14_image_0.png)
## 7.6 Attention Visualization
We visualize one the attention heatmap case of PROMPT-BASED in Figure 8 and Figure 8.
Figure 8 shows the attention alignment of original source (y-axis) and the prompted source (xaxis) in text encoder. Figure 8 shows the generated sentence (y-axis) and the prompted source (x-axis)
in text decoder. From the heat map we know that the prompt attends to the most relevant ambiguous words and supports the model translation, both when encoding the source sentence and decoding the infernece. Specifically in our case, "口 罩"(face mask) in prompts has high attention with all "masks" occurrence on the source side, and has high attention with all "口罩" generation in decoder side. In contrast, the word "防护"(protective)
less prominent in the attention heatmap as it is less ambiguous.
## 7.7 Details On Data Selection And Mixing
As discussed in Section 5.3, we resort to upsampling the e-commercial triplet data due to the significant disparity in the quantity of data across various domains. As previously proposed by Wang and Neubig (2019) and Arivazhagan et al. (2019),
we utilize a temperature-based sampling method, where the i-th data split is assigned a sampling weight proportional to D
1 T
i
, where Di denotes the number of sentences in the i-th data split, and T is the temperature hyper-parameter. In our implementation, to guarantee the completeness and homogeneity of data across each training iteration, we directly upsample the triplet data or monolingual captions, and subsequently, shuffle them randomly with parallel text to construct the training dataset.
The upsampling rate for the triplet data is rounded to 15 and the upsampling rate for the parallel text is rounded to 4, resulting in an actual sampling temperature of 5.11 .
## 8 Model Performance With Excessive Data
Based on data distribution and scaling laws, we sample 750k parallel text and 103k monolingual captions as non-triple data to validate our methods.
To further explore the potential of models with excessive non-triple data, we attempt to increase the data scale of the parallel text corpus to 5M, which are also sampled from CCAlign corpus. We list the results in Table 6. However, we find that excessive parallel text does not further promote model performance on current test sets. We suggest that the lack of improvement in performance may be due to the difference in text domain between the general domain and the e-commerce domain. As we will release the parallel text corpus we used in our experiments, in addition to conducting fair comparisons based on our data, we also encourage future researchers to use more unconstrained external data and techniques to continue to improve performance.
English Word Chinese Potential Translations English Word Chinese Potential Translations
mask 面膜,口罩,面罩,面具,遮垫 tape 胶带,胶布,带子,磁带,薄胶带
bow 琴弓,弓子,弯弓 bar 吧台,酒吧,棒杆 top 上衣,上装,女上装,机顶 basin 盆子,盆器,盆,地盆,盆池
set 套装,把套,撮子,套盒,组套 sheet 被单,棚布,薄板,薄片,片材
clip 卡子,提盘夹,提盘夹子,夹片,取夹 film 贴膜,薄膜,胶片,胶卷,软片
nail 钉子,铁钉,扒钉,指甲,钉钉子 eyeliner 眼线笔,眼线液,眼线,眼线膏
iron 铁,铁艺,电熨斗,熨斗,烫斗 shell 车壳,被壳,贝壳,外壳,壳壳
rubber 胶皮,橡皮 chip 芯片,筹码
brush 刷子,毛笔,毛刷,板刷,锅刷 plug 插头,塞子,胶塞,堵头,地塞 oil 机油,油,油脂,油液 napkin 餐巾纸,餐巾
canvas 餐布,油画布,画布,帆布 grease 润滑脂,打油器
ring 戒指,指环,圆环,圈环,响铃 pipe 管子,烟斗,管材,皮管,排管
pad 护垫,盘垫,踏垫,垫块,贴垫 charcoal 木炭,炭笔,炭,引火炭,炉炭 wipes 湿巾,抹手布,擦地湿巾,擦碗巾,擦奶巾 blade 铲刀,刀片,叶片,刀锋,遮板
face mask 焕颜面膜,护脸面罩,遮脸面罩,脸罩,脸部面膜,口罩 bucket 水桶,面桶,扒斗,漂桶,簸箩
powder 粉饼,散粉,粉掌,修容粉饼,粉剂 lift 升降机,升降梯,举升机,举升器,起重器
tie 扎带,领带 crane 吊车,起重机,吊机,起重吊机,仙鹤
desktop 桌面,台式机 football 足球,橄榄球
jack 千斤顶,插孔 frame 画框,车架,框架,包架,裱画框 collar 项圈,颈圈,套环,领夹 plum 话梅,李子
cement 胶泥,水泥 slide 滑轨,滑梯,滑滑梯,滑道,幻灯片 tank 坦克,料槽,坦克车 keyboard 键盘,钥匙板,小键盘
hood 头罩,遮光罩,机罩,风帽,引擎盖 bass 鲈鱼,贝斯
gum 牙胶,树胶,口香糖 makeup remover 卸妆水,卸妆膏,卸妆液,卸妆乳,卸妆棉棒 screen 屏风,纱窗,滤网,丝网,筛网 counter 计数器,柜台
bell 铃铛,车铃,吊钟,吊铃 separator 隔板,分离器,分液器,隔片,分离机
![15_image_1.png](15_image_1.png)
![15_image_0.png](15_image_0.png)
![16_image_0.png](16_image_0.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
In Limitation section
✗ A2. Did you discuss any potential risks of your work?
The data set is only on e-commercial domain, which has bit potential risks
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
✗ B1. Did you cite the creators of artifacts you used?
Left blank.
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** 5
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
We illustrate the model parameters. The model is small (6-layer Transformer-base) and is friendly to researchers with low resources The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
5
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
5.4. In table
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
5.4
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** 4 And In Appendix
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
in Appendix
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
in Appendix
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
in Appendix
✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
in Appendix
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? in Appendix |
coil-shwartz-2023-chocolate | From chocolate bunny to chocolate crocodile: Do Language Models Understand Noun Compounds? | https://aclanthology.org/2023.findings-acl.169 | Noun compound interpretation is the task of expressing a noun compound (e.g. chocolate bunny) in a free-text paraphrase that makes the relationship between the constituent nouns explicit (e.g. bunny-shaped chocolate). We propose modifications to the data and evaluation setup of the standard task (Hendrickx et al., 2013), and show that GPT-3 solves it almost perfectly. We then investigate the task of noun compound conceptualization, i.e. paraphrasing a novel or rare noun compound. E.g., chocolate crocodile is a crocodile-shaped chocolate. This task requires creativity, commonsense, and the ability to generalize knowledge about similar concepts. While GPT-3{'}s performance is not perfect, it is better than that of humans{---}likely thanks to its access to vast amounts of knowledge, and because conceptual processing is effortful for people (Connell and Lynott, 2012). Finally, we estimate the extent to which GPT-3 is reasoning about the world vs. parroting its training data. We find that the outputs from GPT-3 often have significant overlap with a large web corpus, but that the parroting strategy is less beneficial for novel noun compounds. | # From Chocolate Bunny To **Chocolate Crocodile**: Do Language Models Understand Noun Compounds?
Jordan Coil1 **and Vered Shwartz**1,2 1 University of British Columbia 2 Vector Institute for AI
[email protected], [email protected]
## Abstract
Noun compound interpretation is the task of expressing a noun compound (e.g. *chocolate* bunny) in a free-text paraphrase that makes the relationship between the constituent nouns explicit (e.g. *bunny-shaped chocolate*). We propose modifications to the data and evaluation setup of the standard task (Hendrickx et al.,
2013), and show that GPT-3 solves it almost perfectly. We then investigate the task of noun compound conceptualization, i.e. paraphrasing a novel or rare noun compound. E.g., *chocolate* crocodile is a crocodile-shaped chocolate. This task requires creativity, commonsense, and the ability to generalize knowledge about similar concepts. While GPT-3's performance is not perfect, it is better than that of humans—likely thanks to its access to vast amounts of knowledge, and because conceptual processing is effortful for people (Connell and Lynott, 2012).
Finally, we estimate the extent to which GPT-3 is reasoning about the world vs. parroting its training data. We find that the outputs from GPT-3 often have significant overlap with a large web corpus, but that the parroting strategy is less beneficial for novel noun compounds.
## 1 Introduction
Noun compounds (NCs) are prevalent in English, but most individual NCs are infrequent (Kim and Baldwin, 2007). Yet, it is possible to derive the meaning of most NCs from the meanings of their constituent nouns. The task of noun compound interpretation (NCI) addresses this by explicitly uncovering the implicit semantic relation between the constituent nouns. We focus on the paraphrasing variant (Nakov and Hearst, 2006), where the goal is to generate multiple paraphrases that explicitly express the semantic relation between the constituents. For example (Figure 1), a *chocolate* bunny is a "chocolate shaped like a bunny".
Earlier methods for NCI represented NCs as a function their constituents' representations (e.g.
![0_image_0.png](0_image_0.png)
Van de Cruys et al., 2013; Shwartz and Dagan, 2018). In recent years, pre-trained language models (PLMs) caused a paradigm shift in NLP. Such models are based on the transformer architecture
(Vaswani et al., 2017), which by design computes a word representation as a function of the representation of its context. Further, PLMs are pre-trained on vast amounts of text, which equips them with broad semantic knowledge (Rogers et al., 2020).
Such knowledge may facilitate interpreting unseen NCs based on observed NCs that are semantically similar. Indeed, Ponkiya et al. (2020) showed that a masked language model is useful for this task, and Shwartz (2021) demonstrated the utility of generative language models on this task.
We formalize the experiments presented in Shwartz (2021) and evaluate generative models on NCI. We manually analyze and correct many problems with the standard SemEval 2013 task 4 dataset (Hendrickx et al., 2013), and release a cleaned version of the dataset. Following the criticism in Shwartz and Dagan (2018) on the task's dedicated evaluation metrics, we propose a more complete set of evaluation metrics including both automatic metrics and human evaluation.
Our experiments show that a few-shot model based on GPT-3 (Brown et al., 2020) achieves near2698 perfect performance on the NCI test set. The impressive performance may be due to a combination of factors. First, it tends to memorize texts seen during pre-training (Carlini et al., 2022), likely including partial or complete definitions of common NCs. Second, it has learned vast commonsense and world knowledge from its pre-training corpus, which—together with its ability to generalizemay be useful for interpreting less frequent NCs.
To test the extent that GPT-3 reasons about its knowledge as opposed to memorizes definitions, we propose a second task: noun compound conceptualization (NCC). The setup is identical to NCI, but the NCs are rare or novel (e.g., *chocolate crocodile* in Fig. 1), requiring a model to come up with a plausible interpretation based on its existing knowledge. We construct a test set for this task based on data from Dhar and van der Plas (2019).
The results show that GPT-3 outperforms humans on NCC, presumably thanks to its fast access to a huge "knowledge base", and compared to the relative human slowness on this task (Connell and Lynott, 2012).
Yet, compared to its performance on NCI, GPT3's performance on NCC shows a significant drop.
We thus quantify the extent that GPT-3 copies from its pre-training corpus when generating paraphrases for either NCI or NCC. We find that the generated paraphrases have significant overlap with a large web-based corpus, but that as expected, the copying strategy is less beneficial for NCC than for NCI.
We anticipate that the cleaned dataset and proposed evaluation setup will be adopted by the research community for NCI, and hope to see further research on NCC.1
## 2 Background 2.1 Noun Compound Interpretation
Traditionally, NCI has been framed as a classification task into predefined relation labels. Datasets differed by the number of relations and their specificity level; from 8 prepositional relations (e.g. of, from, etc.; Lauer, 1995), to finer-grained inventories with dozens of relations (e.g. contains, purpose, time of; Kim and Baldwin, 2005; Tratz and Hovy, 2010). The classification approach is limited because even the larger relation inventories don't cover all possible relationships between nouns. In addition, each NC is classified to a single relation, although several relations may be appropriate. E.g., *business zone* is both a zone that contains businesses and a zone whose purpose is business (Shwartz and Dagan, 2018).
For these reasons, in this paper we focused on the task of interpreting noun compounds by producing multiple free-text paraphrases (Nakov and Hearst, 2006). The reference paraphrases could be any text, but in practice they typically follow a "[n2] ... [n1]" pattern, where n1 and n2 are the constituent nouns.
The main dataset for this task comes from SemEval 2013 task 4 (Hendrickx et al., 2013), following a similar earlier task (Butnariu et al., 2009).
Earlier methods for this task reduced the paraphrasing task into a classification task to one of multiple paraphrase templates extracted from a corpus
(Kim and Nakov, 2011; Pa¸sca, 2015; Shwartz and Dagan, 2018). Shwartz and Dagan (2018) jointly learned to complete any item in the ([n1], [n2], paraphrase template) tuple, which allowed the model to generalize, predicting paraphrases for rare NCs based on similarity to other NCs.
More recently, Ponkiya et al. (2020) showed that PLMs already capture this type of knowledge from their pre-training. They used an offthe-shelf T5 model to predict the mask substitutes in templates such as "[n2] [MASK] [n1]", achieving a small improvement over Shwartz and Dagan
(2018). Shwartz (2021) further showed that supervised seq2seq models based on PLMs and a few-shot model based on GPT-3 yielded correct paraphrases for both common and rare NCs.
## 2.2 Forming And Interpreting New Concepts
Research in cognitive science studied how people interpret new noun-noun combinations such as cactus fish (e.g. Wisniewski, 1997; Costello and Keane, 2000; Connell and Lynott, 2012). While such combinations invite various interpretations, there is usually a single preferred interpretation which is more intuitively understood. For example, a *cactus fish* would more likely mean "a fish that is spiky like a cactus" than "a fish that is green like a cactus", because "spiky" is more characteristic of cacti than "green" (Costello and Keane, 2000).
Connell and Lynott (2012) constructed a set of 27 novel NCs and asked people to (1) judge the sensibility of an NC; and (2) come up with a plausible interpretation. The short response times for the sensibility judgment task indicated that participants relied on shallow linguistic cues as shortcuts, such as the topical relatedness between the constituent nouns. Response times in the interpretation generation task were longer, indicating that participants employed a slower process of mental simulation.
Interpreting a new concept required building a detailed representation by re-experiencing or imagining the perceptual properties of the constituent nouns.
Computational work on plausibility judgement for NCs involves rare NCs (Lapata and Lascarides, 2003) and novel NCs (Dhar and van der Plas, 2019). The latter built a large-scale dataset of novel NCs by extracting positive examples from different decades in the Google Ngram corpus for training and testing. Negative examples were constructed by randomly replacing one of the constituents in the NC with another noun from the data. They proposed an LSTM-based model that estimates the plausibility of a target NC based on the pairwise similarity between the constituents of the target NC
and other, existing NCs. For example, the candidate NC *glass canoe* was predicted as plausible thanks to its similarity to *glass boat*.
In this paper, we go beyond plausibility judgement to the more complicated task of interpretation.
In concurrent work, Li et al. (2022) conducted similar experiments evaluating GPT-3's ability to define common and new noun compounds, as well as combinations of nonce words. They found no evidence that GPT-3 employs human-like linguistic principles when interpreting new noun compounds, and suggested it might be memorizing lexical knowledge instead. We further try to quantify the latter possibility in this work.
Similarly to novel NCs, Pinter et al. (2020b) look at novel blends from the NYTWIT corpus, collected automatically from a Twitter bot that tweets words published for the first time in the NYT (Pinter et al., 2020a). For example, *thrupple* is a blend of three and couple, used to describe "A group of three people acting as a couple". They found that PLMs struggled to separate blends into their counterparts.
In a related line of work on creativity, researchers proposed models that coin new words from existing ones. Deri and Knight (2015) generated new blends such as *frenemy* (friend + enemy). Mizrahi et al. (2020) generated new Hebrew words with an algorithm that is inspired by the human process of combining roots and patterns.
## 3 Noun Compound Interpretation
We first evaluate PLMs' ability to interpret existing noun compounds. We focus on the free-text paraphrasing version of NCI, as exemplified in Table 2. We use the standard dataset from SemEval 2013 Task 4 (Hendrickx et al., 2013). We identified several problems in the dataset that we address in Sec 3.1. We then trained PLM-based models on the revised dataset (Sec 3.2), and evaluated them both automatically and manually (Sec 3.3).
## 3.1 Data
We manually reviewed the SemEval-2013 dataset and identified several major issues with the data quality. We propose a revised version of the dataset, with the following modifications.
Train-Test Overlap. We discovered 32 NCs that appeared in both the training and test sets, and removed them from the test set.
Incorrect Paraphrases. We manually corrected paraphrases with superficial problems such as spelling or grammatical errors, redundant spaces, and superfluous punctuation. We also identified and removed NCs that were semantically incorrect. For example, *rubber glove* was paraphrased to "gloves has been made to get away from rubber",
perhaps due to the annotator mistaking the word rubber for *robber*. Finally, we found and removed a few paraphrases that contained superfluous or subjective additions, deviating from the instructions by Hendrickx et al. (2013). For example, *tax reduction* was paraphrased as "reduction of tax *hurts the* economy", and *engineering work* as "work done by men in the field of engineering". Further, we discarded a total of 14 NCs from the training set and 11 NCs from the test set that had no correct paraphrases. In total, we removed 1,960 paraphrases from the training set and 5,066 paraphrases from the test set.
"Catch-All" Paraphrases. The paraphrases in Hendrickx et al. (2013) were collected from crowdsourcing workers. An issue with the crowdsourcing incentive structure, is that it indirectly encourages annotators to submit any response, even when they are uncertain about the interpretation of a given NC.
In the context of this dataset, this incentive leads to what we call "catch-all" paraphrases. Such paraphrases include generic prepositional paraphrases such as "[n2] of [n1]" (e.g. "drawing of chalk").
| Original | Revised | | | | | |
|--------------|-----------|------|-------|-------|-------|-------|
| train | dev | test | train | dev | test | |
| #NCs | 174 | 0 | 181 | 160 | 28 | 110 |
| #paraphrases | 4,256 | 0 | 8,190 | 5,441 | 1,469 | 4,820 |
For verbal paraphrases, the include generic verbs, such as "[n2] based on [n1]", "[n2] involving [n1]",
"[n2] associated with [n1]", "[n2] concerned with
[n1]", and "[n2] coming from [n1]". While these paraphrases are not always incorrect, they are also not very informative of the relationship between the constituent nouns. We therefor removed such paraphrases.2 Data Augmentation. To increase the size of the dataset in terms of paraphrases and facilitate easier training of models, we performed semi-automatic data augmentation. Using WordNet (Fellbaum, 2010), we extended the set of paraphrases for each NC by replacing verbs with their synonyms and manually judging the correctness of the resultant paraphrase. We also identified cases were two paraphrases could be merged into additional paraphrases. For example, *steam train* contained the paraphrases "train powered by steam" and "train that operates using steam", for which we added
"train operated by steam" and "train that is powered using steam". Overall, we added 3,145 paraphrases to the training set and 3,115 to the test set.
We followed the same train-test split as the original dataset, but dedicated 20% of the test set to validation. Table 1 displays the statistics of the NCI
datasets.
## 3.2 Methods
We evaluate the performance of two representative PLM-based models on our revised version of the SemEval-2013 dataset (henceforth: the NCI
dataset): a supervised seq2seq T5 model (Raffel et al., 2020) and a few-shot prompting GPT-3 model (Brown et al., 2020).
Supervised Model. We trained the seq2seq model from the Transformers package (Wolf et al.,
2019), using T5-large. We split each instance in 2Another factor for the quality of paraphrases is the workers' English proficiency level. Writing non-trivial paraphrases requires high proficiency, and in 2013, it wasn't possible to filter workers based on native language on Mechanical Turk.
the dataset into multiple training examples, with the NC as input and a single paraphrase as output.
We used the default learning rate (5 × 10−5), batch size (16), and optimizaer (Adafactor). We stopped the training after 4 epochs when the validation loss stopped improving. During inference, we used topp decoding (Holtzman et al., 2020) with p = 0.9 and a temperature of 0.7, and generated as many paraphrases as the number of references for a given NC.
Few-shot Model. We used the text-davinci-002 GPT-3 model available through the OpenAI API. We randomly sampled 10 NCs, each with one of its paraphrases, from the training set, to build the following prompt:
Q: what is the meaning of <NC>?
A:<paraphrase>
This prompt was followed by the same question for the target NC, leaving the paraphrases to be completed by GPT-3. We used the default setup of top-p decoding with p = 1 and a temperature of 1.
## 3.3 Evaluation
We decided to deviate from the original evaluation setup of the SemEval 2013 dataset, which was criticized in Shwartz and Dagan (2018). We describe the original evaluation setup, and our proposed setup including automatic and manual evaluation.
Original Evaluation Setup. The original SemEval task was formulated as a ranking task. The paraphrases of each NC were ranked according to the number of annotators who proposed them. Hendrickx et al. (2013) introduced two dedicated evaluation metrics, an 'isomorphic' score that measured the recall, precision, and order of paraphrases predicted by the systems, and a 'non-isomorphic' score that disregarded the order. Both metrics rewarded systems for predicting shorter prepositional paraphrases (e.g. "[n2] of [n1]"), that were in the set of paraphrases for many NCs, and were often ranked high because many annotators proposed them. For example, for the NC *access road*, the catch-all paraphrase "road for access" was ranked higher than the more informative "road that provides access". Indeed, as noted in Shwartz and Dagan (2018), a baseline predicting a fixed set of common, generic paraphrases already achieves moderately good non-isomorphic score. In general, we do not see the benefit of the ranking system,
| NC | GPT-3 | T5 |
|-----------------|---------------------------------------------------|--------------------------|
| access road | road that provides access | road for access |
| sport page | a page in a publication that is devoted to sports | page dedicated to sports |
| computer format | the way in which a computer organizes data | format used in computers |
access road road that provides access road for access reflex action a sudden, involuntary response to a stimulus action performed to perform reflexes
sport page a page in a publication that is devoted to sports page dedicated to sports
computer format the way in which a computer organizes data format used in computers grief process process of grieving or mourning process that a grief sufferer experiences
Table 2: Example paraphrases generated using GPT-3 and T5 for NCs in the revised SemEval 2013 test set.
Table 3: Performance of the T5 and GPT-3 models on the revised SemEval 2013 test set.
| Method | METEOR | ROUGE-L | BERTScore | Human |
|----------|----------|-----------|-------------|---------|
| T5 | 69.81 | 65.96 | 95.31 | 65.35 |
| GPT-3 | 56.27 | 47.31 | 91.94 | 95.64 |
since some of the most informative paraphrases are unique and are less likely to have been proposed by many annotators. Instead, we propose to use standard evaluation metrics for generative tasks, as we describe below.
Automatic Evaluation. Table 3 (columns 2-4)
displays the performance of T5 and GPT-3 on the test set using the following standard evaluation metrics for text generation tasks: the lexical overlap metrics ROUGE-L (Lin, 2004) and METEOR
(Lavie and Agarwal, 2007), and the semanticsimilarity metric BERT-Score (Zhang et al., 2020).
These metrics compare the system generated paraphrases with the reference paraphrases, further motivating our data augmentation in Sec 3.1 (e.g., Lin
(2004) found that considering multiple references improves ROUGE's correlation with human judgements). For each metric m, we compute the following score over the test set T:
$$s=\mathrm{mean}_{\mathrm{nc}\in T}{\big[}\,\mathrm{mean}_{\mathrm{p}\in\mathrm{system(nc)}}$$ $$\mathrm{max}_{\mathrm{r}\in\mathrm{references(nc)}}\,\mathrm{m}(p,r){\big]}$$
In other words, we generate a number of paraphrases equal to the number of reference paraphrases, then find the most similar reference for each of the generated paraphrases, and average across all paraphrases for each NC in the test set.
The automatic metrics show a clear preference to T5. However, upon a closer look at the outputs of each model, it seems that T5 generated paraphrases that more closely resembled the style and syntax of the references, as expected from a supervised model, but the paraphrases were not "more correct" than those outputted by GPT-3. For example, in Table 2, the paraphrase generated by GPT-3 for reflex action is correct but doesn't follow the syntax of the references in the training data ([n2] ...
[n1]). The T5-generated paraphrase follows that syntax but generates the generic and inaccurate paraphrase "action performed to perform reflexes".
More broadly, lexical overlap based metrics such as ROUGE and METEOR penalize models for lexical variability.
Human Evaluation. To assess the quality of predictions in a more reliable manner, we turn to human evaluation. We used Amazon Mechanical Turk (MTurk) and designed a human intelligence task (HIT) which involved displaying an NC along with 10 generated paraphrases, 5 from GPT-3 and 5 from T5, randomly shuffled. We asked workers to indicate for each paraphrase whether they deemed it acceptable or not. Each HIT was to be performed by 3 workers, and acceptability was measured using majority voting. To ensure the quality of workers, we required that workers reside in the US, Canada, or the UK, and that they had an acceptance rate of at least 99% for all prior HITs. We also required them to pass a qualification task that resembled the HIT itself. We paid each worker $0.10 per task, which yielded an approximate hourly wage $15.
The last column in Table 3 presents the results of the human evaluation in terms of percentage of paraphrases deemed acceptable by a majority of human evaluators. GPT-3 performed remarkably well with over 95% of generated paraphrases deemed acceptable by a majority of human evaluators. In contrast to the automatic metrics, T5 fared much worse on human evaluation, and human annotators judged a third of T5 outputs as incorrect.
## 4 Noun Compound Conceptualization
GPT-3's impressive success at interpreting existing noun compounds is related to PLMs' ability to associate nouns with their hypernyms (Ettinger, 2020) and to generate accurate definitions for terms
(Shwartz et al., 2020). Such models are trained on vast amounts of texts, including said definitions, and the target NC itself occurring alongside contexts that indicate its meaning. Humans are different in their ability to interpret NCs. We can often rely on a single context, or no context at all, to have at least an educated guess at the meaning of a new NC. We are capable of representing new concepts by "mentally manipulating old ones"
(Connell and Lynott, 2012), e.g. coming up with a plausible interpretation for *chocolate crocodile* based on similar concepts such as *chocolate bunny*.
Prior work on NCI simulated this by training a model to jointly predict a paraphrase as well as answer questions such as "what can chocolate be shaped like?" (Shwartz and Dagan, 2018). We are interested in learning whether PLMs already do this implicitly, or more broadly, to what extent can PLMs interpret new noun compounds?
Inspired by studies in cognitive science about
"conceptual combination" (Wisniewski, 1997; Costello and Keane, 2000), we define the task of Noun Compound Conceptualization (NCC). NCC
has the same setup as NCI (§3), but the inputs are rare or novel noun compounds. The task thus requires some level of creativity and the ability to make sense of the world. We first describe the creation of the NCC test set (Sec 4.1). We evaluate the best model from Sec 3.2 on the new test set, and present the results in Sec 4.2.
## 4.1 Data
We construct a new test set consisting of novel or rare NCs. The guidelines for adding an NC for the test set are that: (a) humans could easily make sense of it; but (b) it is infrequent in or completely absent from the web.
Noun Compounds. The main source for the test set is a dataset from Dhar and van der Plas (2019).
They proposed the task of classifying an unseen sequence of two nouns to whether it can form a plausible NC or not. The data was created by extracting noun-noun bigrams from the Google Ngram corpus (Brants, 2006). To simulate novel NCs, the models were trained on bigrams that only appeared in the corpus until the year 2000 and evaluated on bigrams that only appeared after 2000. Since GPT-3 was trained on recent data, we had to make sure that we only include the most infrequent NCs.
| Test Set | NCI | NCC |
|-------------------|-------|-------|
| Human Performance | - | 73.33 |
| GPT-3 | 95.64 | 83.81 |
![5_image_0.png](5_image_0.png)
We thus further refined the data from Dhar and van der Plas (2019) by including only the 500 most infrequent NCs based on their frequency in a largescale text corpus, C4 (Raffel et al., 2020). We then semi-automatically filtered out named entities, compounds that were part of larger expressions, and NCs with spelling errors. Finally, we manually chose only the NCs for which we could come up with a plausible interpretation, leaving us with 83 NCs in total.
We added 22 more NCs that we extracted in a similar manner from the Twitter sentiment 140 dataset (Go et al., 2009). We expected to find more
"ad-hoc" NCs in tweets than in more formal texts such as news. Due to the age and size of this dataset, we filtered the NCs based on frequency in C4, setting the threshold to 250 occurances. Overall, our NCC test set contains a total of 105 NCs.
Paraphrases. We collected reference paraphrases for the NCC test set using MTurk. We showed workers the target NC and asked them to paraphrase the NC or give their best estimate if they are unfamiliar with the NC. We used the same qualifications as in Sec 3.3, and paid $0.12 per HIT.
## 4.2 Evaluation
We focus on GPT-3 due to its almost perfect performance on NCI. We evaluated GPT-3 on the NCC test set using the few-shot setup described in Sec 3.2. We selected the few-shot examples from the NCI training set.
We focus on human evaluation (as described in Sec 3.3), which is more reliable than automatic metrics. We asked workers to judge the validity of both human-written and GPT-3 generated paraphrases.
Table 4 shows that GPT-3 performs significantly better than humans at this task. GPT-3 benefits from access to huge amounts of data. We conjecture that even though the target NCs are rare in its training data, it likely observed similar NCs, and is able to generalize and make sense of new concepts.
At the same time, while humans are in general ca-
![6_image_0.png](6_image_0.png)
pable of coming up with a plausible interpretation for an unfamiliar concept, it is an effortful and cognitively taxing task. We hypothesize that in a setup other than crowdsourcing, i.e. given more time or incentive, human performance may increase.
Compared to its performance on NCI, GPT-3's performance on NCC shows a significant drop.
This may suggest that GPT-3 struggles to reason about certain rare NCs, which we investigate in the next section.
## 5 Does Gpt-3 Parrot Its Training Data?
While GPT-3 performs fairly well on NCC, looking at failure cases brings up interesting observations. For example, one of its responses for *chocolate crocodile* was "A large, aggressive freshwater reptile native to Africa". This response seems to have ignored the *chocolate* part of the NC entirely, and opted to provide an answer to "What is a crocodile?". Much like a student who doesn't know the answer to a question so instead regurgitates everything they memorized about the topic in hopes that it will include the correct answer.3 To quantify the extent to which GPT-3 may be parroting its training corpus, we look at n-gram overlap between GPT-3's generated paraphrases and the large-scale web-based corpus C4 (Raffel et al., 2020).4 1 Figure 2 displays the percents of n-grams among the generated paraphrases (for n = {3, 4, 5}) that occur in the C4 corpus 0, 1-5, or 5+ times, for each of the NCI and NCC test sets. The results are presented separately for paraphrases deemed correct and incorrect by human evaluators.
We learn several things from Figure 2. First, the generated paraphrases often had significant overlap with the corpus (34-94%). As expected, trigrams are copied more than 4-grams, which are copied more than 5-grams, as those tend to be rarer.
Second, for the NCI test set, for each n, we see that n-grams from the correct paraphrases are copied from the web more often than n-grams from the incorrect paraphrases. The trend is reversed for NCC, where incorrect paraphrases are copied from the web more often than correct ones. Naturally, the copying strategy is less useful for NCC, which requires reasoning about new concepts. When GPT3 generates correct paraphrases for NCC, their ngrams tend to not appear in the web at all.
We reach a similar conclusion by looking at the percent of n-grams in correct vs. incorrect paraphrases that are copied from the web. The vast majority of n-grams copied from the web (97%)
for the NCI test set were correct, as opposed to only 80% for NCC.
## 6 Conclusion
We evaluated PLMs on their ability to paraphrase existing and novel noun compounds. For interpre-
C4 (Raffel et al., 2020) is a colossal, cleaned version of Common Crawl, thus it is the closest to GPT-3's training corpus.
tation of existing NCs (NCI), we released a cleaned version of the SemEval 2013 dataset, with manual correction and automatic augmentation of paraphrases, and proposed additional evaluation metrics to overcome limitations described in prior work.
GPT-3 achieved near perfect performance on this new test set. We then investigated the task of noun compound conceptualization (NCC). NCC evaluates the capacity of PLMs to interpret the meaning of new NCs. We showed that GPT-3 still performs reasonably well, but its success can largely be attributed to copying definitions or parts of definitions from its training corpus.
## 7 Limitations
Human performance on NCC. The human accuracy on NCC was 73%, compared to 83% for GPT-3. We know from cognitive science research that humans are capable of forming new concepts based on existing ones (Connell and Lynott, 2012).
Moreover, we manually selected NCs in the NCC
test set that we could come up with a plausible interpretation for. The fact that 27% of the paraphrases proposed by MTurk workers were judged as incorrect could be explained by one of the following.
The first explanation has to do with the limitations of crowdsourcing. To earn enough money, workers need to perform tasks quickly, and conceptualization is a slow cognitive process. On top of that, a worker that has already spent considerable amount of time trying to come up with a plausible interpretation for a new NC, is incentivized to submit any answer they managed to come up with, regardless of its quality. Skipping a HIT means lost wages. In a different setup, we hypothesize that human performance may increase for this task.
The second explanation has to do with the evaluation setup. We asked people to judge paraphrases as correct or incorrect. Upon manual examination of a sample of the human-written paraphrases, we observed a non-negligible number of reasonable
(but not optimal) paraphrases that were annotated as incorrect. For future work, we recommend doing a more nuanced human evaluation that will facilitate comparing the outputs of humans and models along various criteria.
The work focuses only on English. Our setup and data construction methods are fairly generic and we expect it to be straightforward to adapt them to other languages that use noun compounds.
With that said, languages such as German, Norwegian, Swedish, Danish, and Dutch write noun compounds as a single word. Our methods will not work on these languages without an additional step of separating the NC into its constituent nouns, similar to unblending blends (Pinter et al., 2020b).
In the future, we would like to investigate how well PLMs for other languages perform on NCI and NCC, especially for low-resource languages.
Limitations of automatic metrics for generative tasks. Automatic metrics based on n-gram overlap are known to have low correlation with human judgements on various NLP tasks (Novikova et al., 2017). In particular, they penalize models for lexical variability. To mitigate this issue, we semi-automatically expanded the set of reference paraphrases using WordNet synonyms. Yet, we still saw inconsistencies with respect to the automatic metrics and human evaluation on NCI. The automatic metrics showed a clear preference to T5, which thanks to the supervision, learned to generate paraphrases that more closely resembled the style and syntax of the references. GPT-3's paraphrases, which were almost all judged as correct by human annotators, were penalized by the automatic metrics for their free form (e.g., they didn't always include the constituent nouns). For this reason, we focused only on human evaluation for NCC.
## 8 Ethical Considerations
Data Sources. All the datasets and corpora used in this work are publicly available. The cleaned version of the NCI dataset is based on the existing SemEval 2013 dataset (Hendrickx et al., 2013). The NCs for the new NCC test set were taken from another publicly-available dataset (Dhar and van der Plas, 2019), based on frequencies in the Google Ngram corpus (Brants, 2006). To quantify Ngram overlap, we used the Allen AI version of the C4 corpus (Raffel et al., 2020; Dodge et al., 2021) made available by the HuggingFace Datasets package.5 Data Collection. We performed human evaluation using Amazon Mechanical Turk. We made sure annotators were fairly compensated by computing an average hourly wage of $15, which is well above the US minimum wage. We did not collect any personal information from annotators.
Models. The models presented in this paper are for a low-level NLP task rather than for an appli-5https://huggingface.co/datasets/c4 cation with which users are expected to interact directly. The generative models are based on PLMs, which may generate offensive content if prompted with certain inputs.
## Acknowledgements
This work was funded, in part, by an NSERC
USRA award, the Vector Institute for AI, Canada CIFAR AI Chairs program, an NSERC discovery grant, and a research gift from AI2.
## References
Thorsten Brants. 2006. Web 1t 5-gram version 1.
http://www. ldc. upenn. edu/Catalog/CatalogEntry.
jsp? catalogId= LDC2006T13.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. In *Advances in Neural Information Processing Systems*,
volume 33, pages 1877–1901. Curran Associates, Inc.
Cristina Butnariu, Su Nam Kim, Preslav Nakov, Diarmuid Ó Séaghdha, Stan Szpakowicz, and Tony Veale.
2009. SemEval-2010 task 9: The interpretation of noun compounds using paraphrasing verbs and prepositions. In *Proceedings of the Workshop on Semantic* Evaluations: Recent Achievements and Future Directions (SEW-2009), pages 100–105, Boulder, Colorado. Association for Computational Linguistics.
Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan Zhang.
2022. Quantifying memorization across neural language models. *arXiv preprint arXiv:2202.07646*.
Louise Connell and Dermot Lynott. 2012. Flexible shortcuts: Linguistic distributional information affects both shallow and deep conceptual processing. In *Proceedings of the Annual Meeting of the Cognitive Science Society*, volume 34.
Fintan J. Costello and Mark T. Keane. 2000. Efficient creativity: Constraint-guided conceptual combination. *Cognitive Science*, 24(2):299–349.
Aliya Deri and Kevin Knight. 2015. How to make a frenemy: Multitape FSTs for portmanteau generation.
In *Proceedings of the 2015 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies,
pages 206–210, Denver, Colorado. Association for Computational Linguistics.
Prajit Dhar and Lonneke van der Plas. 2019. Learning to predict novel noun-noun compounds. In *Proceedings* of the Joint Workshop on Multiword Expressions and WordNet (MWE-WN 2019), pages 30–39, Florence, Italy. Association for Computational Linguistics.
Jesse Dodge, Maarten Sap, Ana Marasovic, William ´
Agnew, Gabriel Ilharco, Dirk Groeneveld, Margaret Mitchell, and Matt Gardner. 2021. Documenting large webtext corpora: A case study on the colossal clean crawled corpus. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1286–1305, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Allyson Ettinger. 2020. What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. Transactions of the Association for Computational Linguistics, 8:34–48.
Christiane Fellbaum. 2010. Wordnet. In *Theory and applications of ontology: computer applications*, pages 231–243. Springer.
Alec Go, Richa Bhayani, and Lei Huang. 2009. Twitter sentiment classification using distant supervision.
CS224N project report, Stanford 1.12.
Iris Hendrickx, Zornitsa Kozareva, Preslav Nakov, Diarmuid Ó Séaghdha, Stan Szpakowicz, and Tony Veale.
2013. SemEval-2013 task 4: Free paraphrases of noun compounds. In *Second Joint Conference on* Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013),
pages 138–143, Atlanta, Georgia, USA. Association for Computational Linguistics.
Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In *International Conference on Learning* Representations.
Su Nam Kim and Timothy Baldwin. 2005. Automatic interpretation of noun compounds using WordNet similarity. In *Second International Joint Conference* on Natural Language Processing: Full Papers.
Su Nam Kim and Timothy Baldwin. 2007. Interpreting noun compounds using bootstrapping and sense collocation. *Proc. of the Pacific Association for Computational Linguistics (PACLING)*.
Su Nam Kim and Preslav Nakov. 2011. Large-scale noun compound interpretation using bootstrapping and the web as a corpus. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 648–658, Edinburgh, Scotland, UK. Association for Computational Linguistics.
Mirella Lapata and Alex Lascarides. 2003. Detecting novel compounds: The role of distributional evidence.
In 10th Conference of the European Chapter of the Association for Computational Linguistics, Budapest, Hungary. Association for Computational Linguistics.
Mark Lauer. 1995. Designing statistical language learners: Experiments on noun compounds. *Ph. D. Thesis,*
Department of Computing Macquarie University.
Alon Lavie and Abhaya Agarwal. 2007. METEOR: An automatic metric for MT evaluation with high levels of correlation with human judgments. In *Proceedings of the Second Workshop on Statistical Machine* Translation, pages 228–231, Prague, Czech Republic.
Association for Computational Linguistics.
Siyan Li, Riley Carlson, and Christopher Potts. 2022.
Systematicity in GPT-3's interpretation of novel English noun compounds. In *Findings of the Association for Computational Linguistics: EMNLP 2022*,
pages 717–728, Abu Dhabi, United Arab Emirates.
Association for Computational Linguistics.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain.
Association for Computational Linguistics.
Moran Mizrahi, Stav Yardeni Seelig, and Dafna Shahaf.
2020. Coming to Terms: Automatic Formation of Neologisms in Hebrew. In *Findings of the Association for Computational Linguistics: EMNLP 2020*,
pages 4918–4929, Online. Association for Computational Linguistics.
Preslav Nakov and Marti Hearst. 2006. Using verbs to characterize noun-noun relations. In *Proceedings* of the 12th international conference on Artificial Intelligence: methodology, Systems, and Applications, pages 233–244.
Jekaterina Novikova, Ondˇrej Dušek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for NLG. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2241–2252, Copenhagen, Denmark. Association for Computational Linguistics.
Marius Pa¸sca. 2015. Interpreting compound noun phrases using web search queries. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 335–344, Denver, Colorado. Association for Computational Linguistics.
Yuval Pinter, Cassandra L. Jacobs, and Max Bittker.
2020a. NYTWIT: A dataset of novel words in the New York Times. In *Proceedings of the 28th International Conference on Computational Linguistics*,
pages 6509–6515, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Yuval Pinter, Cassandra L. Jacobs, and Jacob Eisenstein.
2020b. Will it unblend? In *Findings of the Association for Computational Linguistics: EMNLP 2020*,
pages 1525–1535, Online. Association for Computational Linguistics.
Girishkumar Ponkiya, Rudra Murthy, Pushpak Bhattacharyya, and Girish Palshikar. 2020. Looking inside noun compounds: Unsupervised prepositional and free paraphrasing. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4313–4323, Online. Association for Computational Linguistics.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67.
Anna Rogers, Olga Kovaleva, and Anna Rumshisky.
2020. A primer in BERTology: What we know about how BERT works. *Transactions of the Association* for Computational Linguistics, 8:842–866.
Vered Shwartz. 2021. A long hard look at MWEs in the age of language models. In Proceedings of the 17th Workshop on Multiword Expressions (MWE 2021),
page 1, Online. Association for Computational Linguistics.
Vered Shwartz and Ido Dagan. 2018. Paraphrase to explicate: Revealing implicit noun-compound relations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1200–1211, Melbourne, Australia. Association for Computational Linguistics.
Vered Shwartz, Peter West, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020. Unsupervised commonsense question answering with self-talk. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 4615–4629, Online. Association for Computational Linguistics.
Stephen Tratz and Eduard Hovy. 2010. A taxonomy, dataset, and classifier for automatic noun compound interpretation. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 678–687, Uppsala, Sweden. Association for Computational Linguistics.
Tim Van de Cruys, Stergos Afantenos, and Philippe Muller. 2013. MELODI: A supervised distributional approach for free paraphrasing of noun compounds.
In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 144–147, Atlanta, Georgia, USA. Association for Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30.
Edward J Wisniewski. 1997. When concepts combine.
Psychon Bull Rev., page 167–183.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-ofthe-art natural language processing. arXiv preprint arXiv:1910.03771.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q.
Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
7
✓ A2. Did you discuss any potential risks of your work?
7
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 2, 3
✓ B1. Did you cite the creators of artifacts you used?
2, 3
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
8
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 8
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Irrelevant for this type of data (as discussed in Section 8)
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
2
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
2
## C ✓ **Did You Run Computational Experiments?** 2, 3, 4
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Most of this is irrelevant to our experiments. We mentioned the exact models we used (sections 2, 3).
We will add the GPU hours for the camera-ready version.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
2, 3 C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
No response.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
2, 3, 8 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
2, 3
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
2, 3 (in the text - not a full HIT template with all the examples etc. We can include this as an appendix to the camera-ready version if needed)
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
2, 3, 8
✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Irrelevant (we didn't collect private information as we discuss in Section 8)
✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Irrelevant (we didn't collect private information as we discuss in Section 8)
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
2, 3 |
borenstein-etal-2023-measuring | Measuring Intersectional Biases in Historical Documents | https://aclanthology.org/2023.findings-acl.170 | Data-driven analyses of biases in historical texts can help illuminate the origin and development of biases prevailing in modern society. However, digitised historical documents pose a challenge for NLP practitioners as these corpora suffer from errors introduced by optical character recognition (OCR) and are written in an archaic language. In this paper, we investigate the continuities and transformations of bias in historical newspapers published in the Caribbean during the colonial era (18th to 19th centuries). Our analyses are performed along the axes of gender, race, and their intersection. We examine these biases by conducting a temporal study in which we measure the development of lexical associations using distributional semantics models and word embeddings. Further, we evaluate the effectiveness of techniques designed to process OCR-generated data and assess their stability when trained on and applied to the noisy historical newspapers. We find that there is a trade-off between the stability of the word embeddings and their compatibility with the historical dataset. We provide evidence that gender and racial biases are interdependent, and their intersection triggers distinct effects. These findings align with the theory of intersectionality, which stresses that biases affecting people with multiple marginalised identities compound to more than the sum of their constituents. |
## Measuring Intersectional Biases In Historical Documents Warning: This Paper Shows Dataset Samples That Are Racist In Nature Nadav Borenstein∗1 Karolina Stanczak ´∗1 Thea Rolskov2 **Natália Da Silva Perez**3 Natacha Klein Käfer1**Isabelle Augenstein**1
1University of Copenhagen 2Aarhus University 3Erasmus University Rotterdam [email protected] [email protected] [email protected] [email protected] [email protected] [email protected]
## Abstract
Data-driven analyses of biases in historical texts can help illuminate the origin and development of biases prevailing in modern society.
However, digitised historical documents pose a challenge for NLP practitioners as these corpora suffer from errors introduced by optical character recognition (OCR) and are written in an archaic language. In this paper, we investigate the continuities and transformations of bias in historical newspapers published in the Caribbean during the colonial era (18th to 19th centuries). Our analyses are performed along the axes of gender, race, and their intersection. We examine these biases by conducting a temporal study in which we measure the development of lexical associations using distributional semantics models and word embeddings. Further, we evaluate the effectiveness of techniques designed to process OCRgenerated data and assess their stability when trained on and applied to the noisy historical newspapers. We find that there is a trade-off between the stability of the word embeddings and their compatibility with the historical dataset.
We provide evidence that gender and racial biases are interdependent, and their intersection triggers distinct effects. These findings align with the theory of intersectionality, which stresses that biases affecting people with multiple marginalised identities compound to more than the sum of their constituents.
## Https://Github.Com/Copenlu/I Ntersectional-Bias-Pbw 1 Introduction
The availability of large-scale digitised archives and modern NLP tools has enabled a number of sociological studies of historical trends and cultures (Garg et al., 2018; Kozlowski et al., 2019; Michel et al., 2011). Analyses of historical biases and stereotypes, in particular, can shed light on past
* Equal contribution.
![0_image_0.png](0_image_0.png)
societal dynamics and circumstances (Levis Sullam et al., 2022) and link them to contemporary challenges and biases prevalent in modern societies (Payne et al., 2019). For instance, Payne et al. (2019) consider implicit bias as the cognitive residue of past and present structural inequalities and highlight the critical role of history in shaping modern forms of prejudice.
Thus far, previous research on bias in historical documents focused either on gender (Rios et al.,
2020; Wevers, 2019) or ethnic biases (Levis Sullam et al., 2022). While Garg et al. (2018) separately analyse both, their work does not engage with their intersection. Yet, in the words of Crenshaw (1995), intersectional perspective is important because "the intersection of racism and sexism factors into black women's lives in ways that cannot be captured wholly by looking separately at the race or gender dimensions of those experiences."
Analysing historical documents poses particular challenges for modern NLP tools (Borenstein et al.,
2023; Ehrmann et al., 2020). Misspelt words due to wrongly recognised characters in the digitisation process, and archaic language unknown to modern NLP models, i.e. historical variant spellings and words that became obsolete in the current language, increase the task's complexity (Bollmann, 2019; Linhares Pontes et al., 2019; Piotrowski, 2012).
However, while most previous work on historical NLP acknowledges the unique nature of the task, only a few address them within their experimental setup.
In this paper, we address the shortcomings of previous work and make the following contributions: (1) To the best of our knowledge, this paper presents the first study of historical language associated with entities at the intersections of two axes of oppression: race and gender. We study biases associated with identified entities on a word level, and to this end, employ distributional models and analyse semantics extracted from word embeddings trained on our historical corpora. (2) We conduct a temporal case study on historical newspapers from the Caribbean in the colonial period between 1770–
1870. During this time, the region suffered both the consequences of European wars and political turmoil, as well as several uprisings of the local enslaved populations, which had a significant impact on the Caribbean social relationships and cultures
(Migge and Muehleisen, 2010). (3) To address the challenges of analysing historical documents, we probe the applied methods for their stability and ability to comprehend the noisy, archaic corpora.
We find that there is a trade-off between the stability of word embeddings and their compatibility with the historical dataset. Further, our temporal analysis connects changes in biased word associations to historical shifts taking place in the period. For instance, we couple the high association between *Caribbean countries* and "manual labour" prevalent mostly in the earlier time periods to waves of white labour migrants coming to the Caribbean from 1750 onward. Finally, we provide evidence supporting the intersectionality theory by observing conventional manifestations of gender bias solely for white people. While unsurprising, this finding necessitates intersectional bias analysis for historical documents.
## 2 Related Work
Intersectional Biases. Most prior work has analysed bias along one axis, e.g. race or gender, but not both simultaneously (Field et al., 2021; Stanczak and Augenstein ´ , 2021). There, research on racial biases is generally centred around the gender majority group, such as Black men, while research on gender bias emphasises the experience of individuals who hold racial privilege, such as white women. Therefore, discrimination towards people with multiple minority identities, such as Black women, remains understudied. Addressing this, the intersectionality framework (Crenshaw, 1989) investigates how different forms of inequality, e.g. gender and race, intersect with and reinforce each other. Drawing on this framework, Tan and Celis (2019a); May et al. (2019); Lepori (2020); Maronikolakis et al. (2022); Guo and Caliskan (2021) analyse the compounding effects of race and gender encoded in contextualised word representations and downstream tasks. Recently, Lalor et al. (2022); Jiang and Fellbaum (2020) show the harmful implications of intersectionality effects in pre-trained language models. Less interest has been dedicated to unveiling intersectional biases prevalent in natural language, with a notable exception of Kim et al. (2020) which provide evidence on intersectional bias in datasets of hate speech and abusive language on social media. As far as we know, this is the first paper on intersectional biases in historical documents.
Bias in Historical Documents. Historical corpora have been employed to study societal phenomena such as language change (Kutuzov et al., 2018; Hamilton et al., 2016) and societal biases. Gender bias has been analysed in biomedical research over a span of 60 years (Rios et al., 2020), in Englishlanguage books published between 1520 and 2008
(Hoyle et al., 2019), and in Dutch newspapers from the second half of the 20th century (Wevers, 2019).
Levis Sullam et al. (2022) investigate the evolution of the discourse on Jews in France during the 19th century. Garg et al. (2018) study the temporal change in stereotypes and attitudes toward women and ethnic minorities in the 20th and 21st centuries in the US. However, they neglect the emergent intersectionality bias.
When analysing the transformations of biases in historical texts, researchers rely on conventional tools developed for modern language. However, historical texts can be viewed as a separate domain due to their unique challenges of small and idiosyncratic corpora and noisy, archaic text (Piotrowski, 2012). Prior work has attempted to overcome the challenges such documents pose for mod-
| Source | #Files | #Sentences |
|----------------------|----------|--------------|
| Caribbean Project | 7 487 | 5 224 591 |
| Danish Royal Library | 5 661 | 657 618 |
| Total | 13 148 | 5 882 209 |
| Period | Decade | #Issues | Total |
|---------------|-----------|-----------|---------|
| International | 1710–1770 | 15 | 1 886 |
| conflicts | 1770s | 747 | |
| and slave | 1780s | 283 | |
| rebellions | 1790s | 841 | |
| Revolutions | 1800s | 604 | |
| and nation | 1810s | 1 347 | 3 790 |
| building | 1820s | 1 839 | |
| 1830s | 1 838 | | |
| Abolishment | 1840s | 1 197 | |
| of slavery | 1850s | 1 111 | 7 453 |
| 1860s | 1 521 | | |
| 1870s | 1 786 | | |
Table 1: Statistics of the newspapers dataset.
Table 2: Total number of articles in each period and decade.
ern tools, including recognition of spelling variations (Bollmann, 2019) and misspelt words (Boros et al., 2020), and ensuring the stability of the applied methods (Antoniak and Mimno, 2018).
We study the dynamics of intersectional biases and their manifestations in language while addressing the challenges of historical data.
## 3 Datasets
Newspapers are considered an excellent source for the study of societal phenomena since they function as transceivers - both producing and demonstrating public discourse (Wevers, 2019). As part of this study, we collect newspapers written in English from the "Caribbean Newspapers, 1718–1876" database,1the largest collection of Caribbean newspapers from the 18th–19th century available online.
We extend this dataset with English-Danish newspapers published between 1770–1850 in the Danish colony of Santa Cruz (Saint Croix) downloaded from Danish Royal Library's website.2 See Tab 1 and Fig 8 (in App A.1) for details.
As mentioned in §1, the Caribbean islands experienced significant changes and turmoils during the 18th–19th century. Although chronologies
![2_image_0.png](2_image_0.png)
can change from island to island, key moments in Caribbean history can be divided into roughly four periods (Higman, 2021; Heuman, 2018): 1) colonial trade and plantation system (1718 to 1750); 2)
international conflicts and slave rebellions (1751 to 1790); 3) revolutions and nation building (1791 to 1825); 4) end of slavery and decline of European dominance (1826 to 1876). In our experimental setup, we conduct a temporal study on data split into these periods (see Tab 2 for the number of articles in each period). As the resulting number of newspapers for the first period is very small (< 10),
we focus on the three latter periods.
Data Preprocessing. Starting with scans of entire newspaper issues (Fig 2.a), we first OCR them using the popular software Tesseract3 with default parameters and settings. We then clean the dataset by applying the DataMunging package,4 which uses a simple rule-based approach to fix basic OCR
errors (e.g. long s' being OCRed as f', (Fig 2.b)).
As some of the newspapers downloaded from the Danish royal library contain Danish text, we use spaCy5to tokenise the OCRed newspapers into sentences and the python package langdetect6 to filter out non-English sentences.
## 4 Bias And Its Measures
Biases can manifest themselves in natural language in many ways (see the surveys by Stanczak and ´
Augenstein (2021); Field et al. (2021); Lalor et al.
(2022)). In the following, we state the definition of bias we follow and describe the measures we use to quantify it.
## 4.1 Definition
Language is known to reflect common perceptions of the world (Hitti et al., 2019) and differences in its usage have been shown to reflect societal biases
(Hoyle et al., 2019; Marjanovic et al., 2022). In this paper, we define bias in a text as the use of words or syntactic constructs that connote or imply an inclination or prejudice against a certain sensitive group, following the bias definition as in Hitti et al. (2019).
To quantify bias under this definition, we analyse word embeddings trained on our historical corpora. These representations are assumed to carry lexical semantic meaning signals from the data and encode information about language usage in the proximity of entities. However, even words that are not used as direct descriptors of an entity influence its embedding, and thus its learnt meaning. Therefore, we further conduct an analysis focusing exclusively on words that describe identified entities.
## 4.2 Measures
WEAT The Word Embedding Association Test
(Caliskan et al., 2017) is arguably the most popular benchmark to assess bias in word embeddings and has been adapted in numerous research (May et al.,
2019; Rios et al., 2020). WEAT employs cosine similarity to measure the association between two sets of attribute words and two sets of target concepts. Here, the attribute words relate to a sensitive attribute (e.g. male and female), whereas the target concepts are composed of words in a category of a specific domain of bias (e.g. career- and familyrelated words). For instance, the WEAT statistic informs us whether the learned embeddings representing the concept of *f amily* are more associated with females compared to males. According to Caliskan et al. (2017), the differential association between two sets of target concept embeddings, denoted X and Y , with two sets of attribute embeddings, denoted as A and B, can be calculated as:
$$s(X,Y,A,B)=\sum_{x\in X}\mathrm{s}(x,A,B)-\sum_{y\in Y}\mathrm{s}(y,A,B)$$
where s(*w, A, B*) measures the embedding association between one target word w and each of the sensitive attributes:
$$s(w,A,B)=\operatorname*{mean}_{a\in A}[\cos(w,a)]-\operatorname*{mean}_{b\in B}[\cos(w,b)]$$
The resulting effect size is then a normalised measure of association:
$$d={\frac{\operatorname*{mean}[\mathrm{s}(x,A,B)]-\operatorname*{mean}[\mathrm{s}(y,A,B)]}{\operatorname*{std}_{w\in X\cup Y}[\mathrm{s}(w,A,B)]}}$$
As a result, larger effect sizes imply a more biased word embedding. Furthermore, conceptrelated words should be equally associated with either sensitive attribute group assuming an unbiased word embedding.
PMI We use point-wise mutual information (PMI;
Church and Hanks 1990) as a measure of association between a descriptive word and a sensitive attribute (gender or race). In particular, PMI measures the difference between the probability of the co-occurrence of a word and an attribute, and their joint probability if they were independent as:
$$\mathrm{PMI}(a,w)=\log{\frac{p(a,w)}{p(a)p(w)}}\qquad\qquad(1)$$
A strong association with a specific gender or race leads to a high PMI. For example, a high value for PMI(*female, wife*) is expected due to their co-occurrence probability being higher than the independent probabilities of *female* and *wife*. Accordingly, in an ideal unbiased world, words such as *honourable* would have a PMI of approximately zero for all gender and racial identities.
## 5 Experimental Setup
We perform two sets of experiments on our historical newspaper corpus. First, before we employ word embeddings to measure bias, we investigate the stability of the word embeddings trained on our dataset and evaluate their understanding of the noisy nature of the corpora. Second, we assess gender and racial biases using tools defined in §4.2.
## 5.1 Embedding Stability Evaluation
We use word embeddings as a tool to quantify historical trends and word associations in our data.
However, prior work has called attention to the lack of stability of word embeddings trained on small and potentially idiosyncratic corpora (Antoniak and Mimno, 2018; Gonen et al., 2020). We compare these different embeddings setups by testing them with regard to their stability and capturing meaning while controlling for the tokenisation algorithm, embedding size and the minimum number of occurrences.
We construct the word embeddings employing the continuous skip-gram negative sampling model from Word2vec (Mikolov et al., 2013b) using gensim.
7 Following prior work (Antoniak and Mimno, 2018; Gonen et al., 2020), we test two common vector dimension sizes of 100 and 300, and two minimum numbers of occurrences of 20 and 100. The rest of the hyperparameters are set to their default value. We use two different methods for tokenising documents, the spaCy tokeniser and a subword-based tokeniser, Byte-Pair Encoding (BPE, Gage (1994)). We train the BPE
tokeniser on our dataset using the Hugging Face tokeniser implementation.8 For each word in the vocabulary, we identify its 20 nearest neighbours and calculate the Jaccard similarity across five algorithm runs. Next, we test how well the word embeddings deal with the noisy nature of our documents. We create a list of 110 frequently misspelt words (See App A.2). We construct the list by first tokenising our dataset using spaCy and filtering out proper nouns and tokens that appear in the English dictionary. We then order the remaining tokens by frequency and manually scan the top 1 000 tokens for misspelt words. We calculate the percentage of words (averaged across 5 runs) for which the misspelt word is in immediate proximity to the correct word (top 5 nearest neighbours in terms of cosine similarity).
Based on the results of the stability and compatibility study, we select the most suitable model with which we conduct the following bias evaluation.
## 5.2 Bias Estimation 5.2.1 Weat Evaluation
As discussed in §4.2, WEAT is used to evaluate how two attributes are associated with two target concepts in an embedding space, here of the model that was selected by the method described in §5.1.
In this work, we focus on the attribute pairs (female, *male*)
9and (white, *non-white*). Usually, comparing the sensitive attributes (white, *non-white*)
is done by collecting the embedding of popular white names and popular non-white names (Tan and Celis, 2019b). However, this approach can introduce noise when applied to our dataset (Handler and Jacoby, 1996). First, non-whites are less likely to be mentioned by name in historical newspapers compared to whites. Second, popular non-white names of the 18th and 19th centuries differ substantially from popular non-white names of modern times, and, to the best of our knowledge, there is no list of common historical non-white names. For these reasons, instead of comparing the pair (*white*,
non-white), we compare the pairs (African countries, *European countries*) and (Caribbean countries, *European countries*).
Following Rios et al. (2020), we analyse the association of the above-mentioned attributes to the target concepts (career, *family*), (strong, *weak*),
(intelligence, *appearance*), and (*physical illness*,
mental illness). Following a consultation with a historian, we add further target concepts relevant to this period (manual labour, *non-manual labour*) and (crime, *lawfulness*). Tab 6 (in App A.3) lists the target and attribute words we use for our analysis.
We also train a separate word embedding model on each of the dataset splits defined in §3 and run WEAT on the resulting three models. Comparing the obtained WEAT scores allows us to visualise temporal changes in the bias associated with the attributes and understand its dynamics.
## 5.2.2 Pmi Evaluation
Different from WEAT, calculating PMI requires first identifying entities in the OCRed historical newspapers and then classifying them into predefined attribute groups. The next step is collecting descriptors, i.e. words that are used to describe the entities. Finally, we use PMI to measure the association strength of the collected descriptors with each attribute group.
Entity Extraction. We apply F-coref (Otmazgin et al., 2022), a model for English coreference resolution that simultaneously performs entity extraction and coreference resolution on the extracted entities. The model's output is a set of entities, each represented as a list of all the references to that entity in the text. We filter out non-human entities by using nltk's WordNet package,10 retaining only entities for which the synset "person.n1" is a hypernym of one of their references.
Entity Classification. We use a keyword-based approach (Lepori, 2020) to classify the entities into groups corresponding to the gender and race axes 10https://www.nltk.org/howto/wordnet.h tml
| #Entities | #Males | #Females | #Non-whites | #Non-white males | #Non-white females |
|-------------|----------|------------|---------------|--------------------|----------------------|
| 601 468 | 387 292 | 78 821 | 8 525 | 4 543 | 1 548 |
and their intersection. Specifically, we classify each entity as being a member of male vs *female*,
and white vs *non-white*. Additionally, entities are classified into intersectional groups (e.g. we classify an entity into the group *non-white females* if it belongs to both *female* and *non-white*).
Formally, we classify an entity e with references
{r 1 e, ..., rm e } to attribute group G with keyword-set KG = {k1*, ..., k*n} if ∃i such that r ie ∈ KG. See App A.3 for listing the keyword sets of the different groups. In Tab 3, we present the number of entities classified into each group. We note here the unbalanced representation of the groups in the dataset.
Further, it is important to state, that because it is highly unlikely that an entity in our dataset would be explicitly described as white, we classify an entity into the *whites* group if it was not classified as non-white. See the Limitations section for a discussion of the limitations of using a keyword-based classification approach.
To evaluate our classification scheme, an author of this paper manually labelled a random sample of 56 entities. The keyword-based approach assigned the correct gender and race label for ∼ 80% of the entities. See additional details in Tab 7 in App B.
From a preliminary inspection, it appears that many of the entities that were wrongly classified as *female* were actually ships or other vessels (traditionally "ship" has been referred to using female gender). As F-coref was developed and trained using modern corpora, we evaluate its accuracy on the same set of 56 entities. Two authors of this paper validated its performance on the historical data to be satisfactory, with especially impressive results on shorter texts with fewer amount of OCR
errors.
Descriptors Collection. Finally, we use spaCy to collect descriptors for each classified entity. Here, we define the descriptors as the lemmatised form of tokens that share a dependency arc labelled "amod" (i.e. adjectives that describe the tokens) to one of the entity's references. Every target group Gj is then assigned with descriptors list Dj = [d1*, ..., d*k].
To calculate PMI according to Eq (1), we estimate the joint distribution of a target group and a descriptor using a simple plug-in estimator:
$${\widehat{p}}(G_{j},d_{i})\propto\operatorname{count}(G_{j},d_{i})\qquad\qquad(2)$$
Now, we can assign every word ditwo continuous values representing its bias in the gender and race dimensions by calculating PMI(female, di) − PMI(males, di) and PMI(non-white, di) − PMI(white, di). These two continuous values can be seen as di's coordinates on the intersectional gender/race plane.
![5_image_0.png](5_image_0.png)
![5_image_1.png](5_image_1.png)
SpaCy 100 20 0.59 **63.89** 74.07 100 100 0.65 48.15 56.48 300 20 0.55 **63.89** 74.07 300 100 0.61 50.00 56.48
## 5.2.3 Lexicon Evaluation
Another popular approach for quantifying different aspects of bias is the application of specialised lexica (Stanczak and Augenstein ´ , 2021). These lexica assign words a continuous value that represents how well the word aligns with a specific dimension of bias. We use NRC-VAD lexicon (Mohammad, 2018) to compare word usage associated with the sensitive attributes *race* and *gender* in three dimensions: *dominance* (strength/weakness),
valence (goodness/badness), and *arousal* (activeness/passiveness of an identity). Specifically, given a bias dimension B with lexicon LB =
2716
![6_image_0.png](6_image_0.png)
![6_image_1.png](6_image_1.png)
{(w1, a1), ...,(wn, an)}, where (wi, ai) are wordvalue pairs, we calculate the association of B with a sensitive attribute Gj using:
$$A({\mathcal{B}},G_{j})={\frac{\sum_{i}^{n}a_{i}\cdot\operatorname{count}(w_{i},D_{j})}{\sum_{i}^{n}\operatorname{count}(w_{i},D_{j})}}\qquad{\mathrm{(3)}}$$
where count(wi, Dj ) is the number of times the word wi appears in the descriptors list Dj .
## 6 Results
First, we investigate which training strategies of word embeddings optimise their stability and compatibility on historical corpora (§6.1). Next, we analyse how bias is manifested along the gender and racial axes and whether there are any noticeable differences in bias across different periods of the Caribbean history (§6.2).
## 6.1 Embedding Stability Evaluation
In Tab 4, we present the results of the study on the influence of training strategies of word embeddings. We find that there is a trade-off between the stability of word embeddings and their compatibility with the dataset. While BPE achieves a higher Jaccard similarity across the top 20 nearest neighbours for each word across all runs, it loses the meaning of misspelt words. Interestingly, this phenomenon arises, despite the misspelt words occurring frequently enough to be included in the BPE model's vocabulary.
For the remainder of the experiments, we aim to select a model which effectively manages this
![7_image_0.png](7_image_0.png)
trade-off achieving both high stability and captures meaning despite the noisy nature of the underlying data. Thus, we opt to use a spaCy-based embedding with a minimum number of occurrences of 20 and an embedding size of 100 which achieves competitive results in both of these aspects. Finally, we note that our results remain stable across different algorithm runs and do not suffer from substantial variations which corroborates the reliability of the findings we make henceforth.
## 6.2 Bias Estimation 6.2.1 Weat Analysis
Fig 3 displays the results of performing a WEAT
analysis for measuring the association of the six targets described in §5.2 with the attributes
(females, *males*) and (*Caribbean countries*,
European countries), respectively.11 We calculate the WEAT score using the embedding model from §6.1 and compare it with an embedding model trained on modern news corpora
(word2vec-google-news-300, Mikolov et al. (2013a)). We notice interesting differences between the historical and modern embeddings. For example, while in our dataset *females* are associated with the target concept of manual labour, this notion is more aligned with *males* in the modern corpora. A likely cause is that during this period, womens' intellectual and 11See Fig 9 in App B for analysis of the attributes (African countries, *European countries*).
![7_image_1.png](7_image_1.png)
administrative work was not commonly recognised
(Wayne, 2020). It is also interesting to note that the attribute *Caribbean countries* has a much stronger association in the historical embedding with the target *career* (as opposed to *family*) compared to the modern embeddings. A possible explanation is that Caribbean newspapers referred to locals by profession or similar titles, while Europeans were referred to as relatives of the Caribbean population.
In Fig 4 and Fig 10 (in App B), we present a dynamic WEAT analysis that unveils trends on a temporal axis. In particular, we see an increase in the magnitude of association between the target of family vs *career* and the attributes (females, *males*) and
(Caribbean countries, *European countries*) over time. It is especially interesting to compare Fig 3 with Fig 4. One intriguing result is that the high association between *Caribbean countries* and *manual labour* can be attributed to the earlier periods.
This finding is potentially related to several historical shifts taking place in the period. For instance, while in the earlier years, it was normal for plantation owners to be absentees and continue to live in Europe, from 1750 onward, waves of white migrants with varied professional backgrounds came to the Caribbean.
## 6.2.2 Pmi Analysis
We report the results of the intersectional PMI analysis in Fig 1. As can be seen, an intersectional analysis can shed a unique light on the biased nature of some words in a way that single-dimensional analysis cannot. *White males* are "brave" and "ingenious", and *non-white males* are described as
"active" and "tall". Interestingly, while words such as "pretty" and "beautiful" (and peculiarly, "murdered") are biased towards *white* as opposed to *nonwhite females*, the word "lovely" is not, whereas
"elderly" is strongly aligned with *non-white females*. Another intriguing dichotomy is the word pair "sick" and "blind" which are both independent
![8_image_0.png](8_image_0.png)
along the gender axis but manifest a polar racial bias. In Tab 8 in App B, we list some examples from our dataset featuring those words.
Similarly to §6.2.1, we perform a temporal PMI
analysis by comparing results obtained from separately analysing the three dataset splits. In Fig 5, we follow the trajectory over time of the biased words
"free", "celebrated", "deceased" and "poor". Each word displays different temporal dynamics. For example, while the word "free" moved towards the male attribute, "poor" transitioned to become more associated with the attributes *female* and *non-white* over time (potentially due to its meaning change from an association with poverty to a pity).
These results provide evidence for the claims of the intersectionality theory. We observe conventional manifestations of gender bias, i.e. "beautiful" and "pretty" for *white females*, and "ingenious" and
"brave" for *white males*. While unsurprising due to the societal status of non-white people in that period, this finding necessitates intersectional bias analysis for historical documents in particular.
## 6.2.3 Lexicon Evaluation
Finally, we report the lexicon-based evaluation results in Fig 6 and Fig 7. Unsurprisingly, we observe lower dominance levels for the *non-white* and *female* attributes compared to *white* and *male*,
a finding previously uncovered in modern texts
(Field and Tsvetkov, 2019; Rabinovich et al., 2020).
While Fig 7 indicates that the level of dominance associated with these attributes raised over time, a noticeable disparity to white males remains. Perhaps more surprising is the valence dimension. We see the highest and lowest levels of associations with the intersectional attributes *non-white female* and *non-white male*, respectively. We hypothesise that this connects to the nature of advertisements for lending the services of or selling non-white women where being agreeable is a valuable asset.
## 7 Conclusions
In this paper, we examine biases present in historical newspapers published in the Caribbean during the colonial era by conducting a temporal analysis of biases along the axes of gender, race, and their intersection. We evaluate the effectiveness of different embedding strategies and find a tradeoff between the stability and compatibility of word representations on historical data. We link changes in biased word usage to historical shifts, coupling the development of the association between *manual labour* and *Caribbean countries* to waves of white labour migrants coming to the Caribbean from 1750 onward. Finally, we provide evidence to corroborate the intersectionality theory by observing conventional manifestations of gender bias solely for white people.
## Limitations
We see several limitations regarding our work.
First, we focus on documents in the English language only, neglecting many Caribbean newspapers and islands with other official languages.
While some of our methods can be easily extended to non-English material (e.g. WEAT analysis),
methods that rely on the pre-trained English model F-coref (i.e. PMI, lexicon-based analysis) can not.
On the same note, F-coref and spaCy were developed and trained using modern corpora, and their capabilities when applied to the noisy historical newspapers dataset, are noticeably lower compared to modern texts. Contributing to this issue is the unique, sometimes archaic language in which the newspapers were written. While we validate F-coref performance on a random sample
(§5.2), this is a significant limitation of our work.
Similarly, increased attention is required to adapt the keyword sets used by our methods to historical settings.
Moreover, our historical newspaper dataset is inherently imbalanced and skewed. As can be seen in Tab 2 and Fig 8, there is an over-representation of a handful of specific islands and time periods. While it is likely that in different regions and periods, less source material survived to modern times, part of the imbalance (e.g. the prevalence of the US Virgin Islands) can also be attributed to current research funding and policies.12 Compounding this further, minority groups are traditionally under-represented in news sources. This introduces noise and imbalance into our results, which rely on a large amount of textual material referring to each attribute on the gender/race plane that we analyse.
Relating to that, our keyword-based method of classifying entities into groups corresponding to the gender and race axes is limited. While we devise a specialised keyword set targeting the attributes female, *male* and *non-white*, we classify an entity into the *white* group if it was not classified as non-white. This discrepancy is likely to introduce noise into our evaluation, as can also be observed in Tab 7. This tendency may be intensified by the NLP systems that we use, as many tend to perform worse on gender- and race-minority groups (Field et al., 2021).
Finally, in this work, we explore intersectional bias only along the race and gender axes. Thus, we neglect the effects of other confounding factors (e.g. societal position, occupation) that affect asymmetries in language.
## Ethical Considerations
Studying historical texts from the era of colonisation and slavery poses ethical issues to historians and computer scientists alike since vulnerable groups still suffer the consequences of this history in the present. Indeed, racist and sexist language is not only a historical artefact of bygone days but has a real impact on people's lives (Alim et al., 2020).
We note that the newspapers we consider for this analysis were written foremost by the European 12The Danish government has recently funded a campaign for the digitisation of historical newspapers published in the Danish colonies; https://stcroixsource.com/20 17/03/01/.
oppressors. Moreover, only a limited number of affluent people (white males) could afford to place advertisements in those newspapers (which constitute a large portion of the raw material). This skews our study toward language used by privileged individuals and their perceptions.
This work aims to investigate racial and gender biases, as well as their intersection. Both race and gender are considered social constructs and can encompass a range of perspectives, including one's reflected, observed, or self-perceived identity. In this paper, we classify entities as observed by the author of an article and infer their gender and race based on the pronouns and descriptors used in relation to this entity. We follow this approach in an absence of explicit demographic information.
However, we warn that this method poses a risk of misclassification. Although the people referred to in the newspapers are no longer among the living, we should be considerate when conducting studies addressing vulnerable groups.
Finally, we use the mutually exclusive *white* and non-white race categories as well as *male* and *female* gender categories. We acknowledge that these groupings do not fully capture the nuanced nature of bias. This decision was made due to limited data discussing minorities in our corpus. While gender identities beyond the binary are unlikely to be found in the historical newspapers from the 18th-19th century, future work will aim to explore a wider range of racial identities.
## Acknowledgements
This work is funded by Independent Research Fund Denmark under grant agreement number 913000092B, as well as the Danish National Research Foundation (DNRF 138). Isabelle Augenstein is further supported by the Pioneer Centre for AI,
DNRF grant number P1.
## References
H. Samy Alim, Angela Reyes, and Paul V. Kroskrity, editors. 2020. The Oxford Handbook of Language and Race. Oxford University Press.
Maria Antoniak and David Mimno. 2018. Evaluating the stability of embedding-based word similarities.
Transactions of the Association for Computational Linguistics, 6:107–119.
Marcel Bollmann. 2019. A large-scale comparison of historical text normalization systems. In *Proceedings*
of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long and Short Papers), pages 3885–3898, Minneapolis, Minnesota. Association for Computational Linguistics.
Nadav Borenstein, Natalia da Silva Perez, and Isabelle Augenstein. 2023. Multilingual event extraction from historical newspaper adverts. In *Proceedings of the* 61st Annual Meeting of the Association for Computational Linguistics, Toronto, Canada. Association for Computational Linguistics.
Emanuela Boros, Ahmed Hamdi, Elvys Linhares Pontes, Luis Adrián Cabrera-Diego, Jose G. Moreno, Nicolas Sidere, and Antoine Doucet. 2020. Alleviating digitization errors in named entity recognition for historical documents. In *Proceedings of the 24th Conference on Computational Natural Language Learning*,
pages 431–441, Online. Association for Computational Linguistics.
Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases.
Science, 356(6334):183–186.
Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. *Computational Linguistics*, 16(1):22–29.
Kimberle Crenshaw. 1989. Demarginalizing the intersection of race and sex: A black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics. *The University of Chicago Legal* Forum, 140:139–167.
Kimberlé Crenshaw. 1995. Mapping the Margins: Intersectionality, Identity Politics, and Violence Against Women of Color. In Critical race theory: the key writings that formed the movement. New Press, New York.
Maud Ehrmann, Matteo Romanello, Simon Clematide, Phillip Benjamin Ströbel, and Raphaël Barman. 2020.
Language resources for historical newspapers: the impresso collection. In *Proceedings of the Twelfth Language Resources and Evaluation Conference*, pages 958–968, Marseille, France. European Language Resources Association.
Anjalie Field, Su Lin Blodgett, Zeerak Waseem, and Yulia Tsvetkov. 2021. A survey of race, racism, and anti-racism in NLP. In *Proceedings of the 59th Annual Meeting of the Association for Computational* Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1:
Long Papers), pages 1905–1925, Online. Association for Computational Linguistics.
Anjalie Field and Yulia Tsvetkov. 2019. Entity-centric contextual affective analysis. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2550–2560, Florence, Italy.
Association for Computational Linguistics.
Philip Gage. 1994. A new algorithm for data compression. *C Users Journal*, 12(2):23–38.
Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. *Proceedings* of the National Academy of Sciences, 115(16):E3635–
E3644.
Hila Gonen, Ganesh Jawahar, Djamé Seddah, and Yoav Goldberg. 2020. Simple, interpretable and stable method for detecting words with usage change across corpora. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 538–555, Online. Association for Computational Linguistics.
Wei Guo and Aylin Caliskan. 2021. Detecting emergent intersectional biases: Contextualized word embeddings contain a distribution of human-like biases. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, AIES '21, page 122–133, New York, NY, USA. Association for Computing Machinery.
William L. Hamilton, Jure Leskovec, and Dan Jurafsky.
2016. Diachronic word embeddings reveal statistical laws of semantic change. In *Proceedings of the* 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1489–1501, Berlin, Germany. Association for Computational Linguistics.
Jerome S. Handler and JoAnn Jacoby. 1996. Slave names and naming in barbados, 1650-1830. The William and Mary Quarterly, 53(4):685–728.
Gad Heuman. 2018. *The Caribbean: A Brief History*, 3 edition. Bloomsbury Academic, London, England.
B. W. Higman. 2021. A Concise History of the Caribbean, 2 edition. Cambridge Concise Histories.
Cambridge University Press.
Yasmeen Hitti, Eunbee Jang, Ines Moreno, and Carolyne Pelletier. 2019. Proposed taxonomy for gender bias in text; a filtering methodology for the gender generalization subtype. In *Proceedings of the First* Workshop on Gender Bias in Natural Language Processing, pages 8–17, Florence, Italy. Association for Computational Linguistics.
Alexander Miserlis Hoyle, Lawrence Wolf-Sonkin, Hanna Wallach, Isabelle Augenstein, and Ryan Cotterell. 2019. Unsupervised discovery of gendered language through latent-variable modeling. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 1706–
1716, Florence, Italy. Association for Computational Linguistics.
May Jiang and Christiane Fellbaum. 2020. Interdependencies of gender and race in contextualized word embeddings. In Proceedings of the Second Workshop on Gender Bias in Natural Language Processing, pages 17–25, Barcelona, Spain (Online). Association for Computational Linguistics.
Jae Yeon Kim, Carlos Ortiz, Sarah Nam, Sarah Santiago, and Vivek Datta. 2020. Intersectional bias in hate speech and abusive language datasets.
arXiv:2005.05921 [cs].
Austin C. Kozlowski, Matt Taddy, and James A. Evans.
2019. The geometry of culture: Analyzing the meanings of class through word embeddings. American Sociological Review, 84(5):905–949.
Andrey Kutuzov, Lilja Øvrelid, Terrence Szymanski, and Erik Velldal. 2018. Diachronic word embeddings and semantic shifts: a survey. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1384–1397, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
John Lalor, Yi Yang, Kendall Smith, Nicole Forsgren, and Ahmed Abbasi. 2022. Benchmarking intersectional biases in NLP. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3598–3609, Seattle, United States. Association for Computational Linguistics.
Michael Lepori. 2020. Unequal representations: Analyzing intersectional biases in word embeddings using representational similarity analysis. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 1720–1728, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Simon Levis Sullam, Giorgia Minello, Rocco Tripodi, and Massimo Warglien. 2022. Representation of jews and anti-jewish bias in 19th century french public discourse: Distant and close reading. *Frontiers in* Big Data, 4.
Elvys Linhares Pontes, Ahmed Hamdi, Nicolas Sidère, and Antoine Doucet. 2019. Impact of OCR Quality on Named Entity Linking. In *International Conference on Asia-Pacific Digital Libraries 2019*, Kuala Lumpur, Malaysia.
Sara Marjanovic, Karolina Stanczak, and Isabelle Au- ´
genstein. 2022. Quantifying gender biases towards politicians on Reddit. *PLOS ONE*, 17(10):1–36.
Antonis Maronikolakis, Philip Baader, and Hinrich Schütze. 2022. Analyzing hate speech data along racial, gender and intersectional axes. In Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP), pages 1–7, Seattle, Washington. Association for Computational Linguistics.
Chandler May, Alex Wang, Shikha Bordia, Samuel R.
Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long and Short Papers), pages 622–628, Minneapolis, Minnesota. Association for Computational Linguistics.
Jean-Baptiste Michel, Yuan Kui Shen, Aviva Presser Aiden, Adrian Veres, Matthew K. Gray, Joseph P.
Pickett, Dale Hoiberg, Dan Clancy, Peter Norvig, Jon Orwant, Steven Pinker, Martin A. Nowak, and Erez Lieberman Aiden. 2011. Quantitative analysis of culture using millions of digitized books. *Science*,
331(6014):176–182.
Bettina M Migge and Susanne Muehleisen. 2010. Earlier Caribbean English and Creole in Writing. In Raymond Hickey, editor, *Varieties in writing: The* written word as linguistic evidence, pages 223–244.
John Benjamins.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. *arXiv:1301.3781 [cs]*.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In *Proceedings of the 26th International* Conference on Neural Information Processing Systems - Volume 2, NIPS'13, page 3111–3119, Red Hook, NY, USA. Curran Associates Inc.
Saif Mohammad. 2018. Obtaining reliable human ratings of valence, arousal, and dominance for 20,000 English words. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 174–184, Melbourne, Australia. Association for Computational Linguistics.
Shon Otmazgin, Arie Cattan, and Yoav Goldberg. 2022.
F-coref: Fast, accurate and easy to use coreference resolution. In *Proceedings of the 2nd Conference* of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing:
System Demonstrations, pages 48–56, Taipei, Taiwan.
Association for Computational Linguistics.
B. Keith Payne, Heidi A. Vuletich, and Jazmin L.
Brown-Iannuzzi. 2019. Historical roots of implicit bias in slavery. Proceedings of the National Academy of Sciences, 116(24):11693–11698.
Michael Piotrowski. 2012. *Natural Language Processing for Historical Texts*. Synthesis Lectures on Human Language Technologies. Morgan & Claypool Publishers.
Ella Rabinovich, Hila Gonen, and Suzanne Stevenson.
2020. Pick a fight or bite your tongue: Investigation of gender differences in idiomatic language usage. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 5181–
5192, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Anthony Rios, Reenam Joshi, and Hejin Shin. 2020.
Quantifying 60 years of gender bias in biomedical research with word embeddings. In *Proceedings of*
the 19th SIGBioMed Workshop on Biomedical Language Processing, pages 1–13, Online. Association for Computational Linguistics.
Karolina Stanczak and Isabelle Augenstein. 2021. ´ A
survey on gender bias in natural language processing.
arXiv:2112.14168 [cs].
Yi Chern Tan and L. Elisa Celis. 2019a. *Assessing* Social and Intersectional Biases in Contextualized Word Representations, chapter 1. Curran Associates Inc., Red Hook, NY, USA.
Yi Chern Tan and L Elisa Celis. 2019b. Assessing social and intersectional biases in contextualized word representations. Advances in Neural Information Processing Systems, 32.
Valerie Wayne. 2020. *Women's labour and the history* of the book in early modern England. Bloomsbury Publishing.
Melvin Wevers. 2019. Using word embeddings to examine gender bias in Dutch newspapers, 1950-1990.
In Proceedings of the 1st International Workshop on Computational Approaches to Historical Language Change, pages 92–97, Florence, Italy. Association for Computational Linguistics.
## A Additional Material A.1 Dataset Statistics
In Fig 8, we present the geographical distribution of the newspapers in the curated dataset.
## A.2 Misspelt Words
Here we list 110 frequently misspelt words and their correct spelling, which was used for the embedding evaluation described in Sec 5.1.
hon'ble - honorable, honble - honorable, majetty
- majesty, mujesty - majesty, mojesty - majesty, houfe - house, calied - called, upen - upon, cailed
- called, reeeived - received, betore - before, kaow
- know, reecived - received, bope - hope, fonnd
- found, dificult - difficult, qnite - quite, convineed - convinced, satistied - satisfied, intinate -
intimate, demandcd - demanded, snecessful - successful, abie - able, impossibie - impossible, althouch - although, foreed - forced, giad - glad, preper - proper, understocd - understood, fuund
- found, almest - almost, nore - more, atter - after, oceupied - occupied, understuod - understood, satis'y - satisfy, impofible - impossible, impoilible - impossible, inseusible - insensible, accessary
- accesory, contident - confident, koown - known, receiv - receive, calied - calles, appellunt - appellant, Eniperor - emperor, auxious - anxious, ofien -
often, lawiul - lawful, posstble - possible, Svanish
- Spanish, fuffictent - sufficient, furcher - further, yery - very, uader - under, ayreeable - agreeable, ylad - glad, egreed - agreed, unabie - unable, giyen
- given, uecessary - necessary, alrendy - already, entitied - entitled, cffered - offered, pesitive - positive, creater - creator, prefound - profound, examived - examined, successiul - successful, pablic -
public, propor - proper, cousiderable - considerable, lcvely - lovely, fold - sold, seeond - second, huuse - house, excellen - excellent, auetion - auction, Engiand - England, peopie - people, goveroment - government, yeurs - years, exceliency - excellency, generel - general, foliowing - following, goneral - general, preperty - property, wondertul
- wonderful, o'ciock - o'clock, exeellency - excellency, tollowing - following, Eugland - England, gentieman - gentleman, colontal - colonial, gevernment - government, excelleney - excellency, goverament - government, Lendon - London, Bermupa
- Bermuda, goverument - government, himeelf -
himself, entlemen - gentlemen, sublcriber - subscriber, majeliy - majesty, Weduesday - Wednesday, o'cleck - o'clock, o'cluck - o'clock, colonics -
## A.3 Keyword Sets
Tab 5 and Tab 6 describe the various keyword sets that we used for entity classification (Section 5.2.2) and for performing the WEAT tests (Section 5.2.1.
## B Supplementary Results
In Tab 7, we report the accuracy of the classified entities using the keyword-based approach. In Tab 8, we list examples of sentences from our newspaper dataset. Fig 9 presents the WEAT results of the attributes African countries vs *European countries*. Fig 10 presents temporal WEAT analysis conducted for the attributes *African countries* vs European countries.
![14_image_0.png](14_image_0.png)
Figure 8: The geographical distribution of the curated Caribbean newspapers dataset.
![14_image_2.png](14_image_2.png)
![14_image_1.png](14_image_1.png)
| Subgroup | Wordlist | | | | | | | |
|--------------------------------------------------------------------|---------------------------------------------------|---------|----------|----------|-------|-----------|--------|------|
| Males | husband, | suitor, | brother, | boyhood, | beau, | salesman, | daddy, | man, |
| spokesman, chairman, lad, mister, men, sperm, dad, gelding, gentleman, boy, sir, horsemen, paternity, statesman, prince, sons, countryman, pa, suitors, stallion, fella, monks, fiance, chap, uncles, godfather, bulls, males, grandfather, penis, lions, nephew, monk, countrymen, grandsons, beards, schoolboy, councilmen, dads, fellow, colts, mr, king, father, fraternal,baritone, gentlemen, fathers, husbands, guy, semen, brotherhood, nephews, lion, lads, grandson, widower, bachelor, kings, male, son, brothers, uncle, brethren, boys, councilman, czar, beard, bull, salesmen, fraternity, dude, colt, john, he, himself, his | | | | | | | | |
| Females | sisters, queen, ladies, princess, witch, mother, nun, aunt, princes, housewife, women, convent, gals, witches, stepmother, wife, granddaughter, mis, widows, nieces, studs, niece, actresses, wives, sister, dowry, hens, daughters, womb, monastery, ms, misses, mama, mrs, fillies, woman, aunts, girl, actress, wench, brides, grandmother, stud, lady, female, maid, gal, queens, hostess, daughter, grandmothers, girls, heiress, moms, maids, mistress, mothers, mom, mare, filly, maternal, bride, widow, goddess, diva, maiden, hen, housewives, heroine, nuns, females', she, herself, hers, her | | | | | | | |
| Non-whites | negro, negros, creole, indian, negroes, colored, mulatto, mulattos, negresse, mundingo, brown, browns, african, congo, black, blacks, dark, creoles | | | | | | | |
| Whites | (any entity that was not classified as Non-white) | | | | | | | |
| Table 5: Keywords used for classification entities into subgroups. | | | | | | | | |
| Attribute | Wordlist |
|----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Males | husband, man, mister, gentleman, boy, sir, prince, countryman, fiance, godfather, grandfather, nephew, fellow, mr, king, father, guy, grandson, widower, bachelor, male, son, brother, uncle, brethren |
| Females | sister, queen, lady, witch, mother, aunt, princes, housewife, stepmother, wife, granddaughter, mis, niece, ms, misses, mrs, woman, girl, wench, bride, grandmother, female, maid, daughter, mistress, bride, widow, maiden |
| European | ireland, georgia, france, monaco, poland, cyprus, greece, hungary, norway, portugal, belgium, |
| countries | luxembourg, finland, albania, germany, netherlands, montenegro, scotland, spain, europe, russia, vatican, switzerland, lithuania, bulgaria, wales, ukraine, romania, denmark, england, italy, bosnia, turkey, malta, iceland, austria, croatia, sweden, macedonia |
| African countries | liberia, mozambique, gambia, ghana, morocco, chad, senegal, togo, algeria, egypt, benin, ethiopia, niger, madagascar, guinea, mauritius, africa, mali, congo, angola |
| Caribbean | barbuda, bahamas, jamaica, dominica, haiti, antigua, grenada, caribbean, barbados, cuba, trinidad, |
| countries | dominican, nevis, kitts, lucia, croix, tobago, grenadines, puerto, rico |
| Target | Wordlist |
| Appearance | apt, discerning, judicious, imaginative, inquiring, intelligent, inquisitive, wise, shrewd, logical, astute, intuitive, precocious, analytical, smart, ingenious, reflective, inventive, venerable, genius, brilliant, clever, thoughtful |
| Intelligence | bald, strong, muscular, thin, voluptuous, blushing, athletic, gorgeous, handsome, homely, feeble, fashionable, attractive, weak, plump, ugly, slim, stout, pretty, fat, sensual, beautiful, healthy, alluring, slender |
| Weak | failure, loser, weak, timid, withdraw, follow, fragile, afraid, weakness, shy, lose, surrender, vulnerable, yield |
| Strong | strong, potent, succeed, loud, assert, leader, winner, dominant, command, confident, power, triumph, shout, bold |
| Family | loved, sisters, mother, reunited, estranged, aunt, relatives, grandchildren, godmother, kin, grandsons, sons, son, parents, stepmother, childless, paramour, nieces, children, niece, father, twins, sister, fiance, daughters, youngest, uncle, uncles, aunts, eldest, cousins, grandmother, children, loving, daughter, paternal, girls, nephews, friends, mothers, grandfather, cousin, maternal, married, nephew, wedding, grandson |
| Career | branch, managers, usurping, subsidiary, engineering, performs, fiscal, personnel, duties, offices, clerical, engineer, executive, functions, revenues, entity, competitive, competitor, employing, chairman, director, commissions, audit, promotion, professional, assistant, company, auditors, oversight, departments, comptroller, president, manager, operations, marketing, directors, shareholder, engineers, corporate, salaries, internal, management, salaried, corporation, revenue, salary, usurpation, managing, delegated, operating |
| Manual labour | sailor, bricklayer, server, butcher, gardener, cook, repairer, maid, guard, farmer, fisher, carpenter, paver, cleaner, cabinetmaker, barber, breeder, washer, miner, builder, baker, fisherman, plumber, labourer, servant |
| Non-manual | teacher, judge, manager, lawyer, director, mathematician, physician, medic, designer, bookkeeper, |
| labour | nurse, librarian, doctor, educator, auditor, clerk, midwife, translator, inspector, surgeon |
| Mental illness | sleep, pica, disorders, nightmare, personality, histrionic, stress, dependence, anxiety, terror, emotional, delusion, depression, panic, abuse, disorder, mania, hysteria |
| Physical | scurvy, sciatica, asthma, gangrene, gerd, cowpox, lice, rickets, malaria, epilepsy, sars, diphtheria, |
| illness | smallpox, bronchitis, thrush, leprosy, typhus, sids, watkins, measles, jaundice, shingles, cholera, boil, pneumonia, mumps, rheumatism, rabies, abscess, warts, plague, dysentery, syphilis, cancer, influenza, ulcers, tetanus |
| Crime | arrested, unreliable, detained, arrest, detain, murder, murdered, criminal, criminally, thug, theft, thief, mugger, mugging, suspicious, executed, illegal, unjust, jailed, jail, prison, arson, arsonist, kidnap, kidnapped, assaulted, assault, released, custody, police, sheriff, bailed, bail |
| lawfulness | loyal, charming, friendly, respectful, dutiful, grateful, amiable, honourable, honourably, good, faithfully, faithful, pleasant, praised, just, dignified, approving, approve, compliment, generous, faithful, intelligent, appreciative, delighted, appreciate Table 6: Keywords used for performing WEAT evaluation. |
| Attribute | Ratio of correctly classified | Ratio of incorrectly classified entities | Ratio of unable to classify |
|-------------|---------------------------------|-------|-------------------------------|
| entities | | | |
| Non-whites | 0.89 | 0.036 | 0.07 |
| Whites | 0.75 | 0.18 | 0.07 |
| Males | 0.89 | 0.036 | 0.07 |
| Females | 0.79 | 0.21 | 0 |
Table 7: Performance of the keyword-based classification approach.
| Word | Sentence |
|-----------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| ingenious | This comprehensive piece of clockwork cost the ingenious and indefatigable artist (one Jacob Lovelace, of Exeter,) 34 years' labour. |
| elderly | y un away for upwards of 16 Months past;; elderly NEGRO WOMAN hamed LOUISA, belongifg to the Estate of the late Ancup. |
| active | FOR SALE, STRONG active NEGRO GIRL, about 24 Years of Age, she is a good Cook, can W asu, [rron, and is well acquainted with Housework in general. |
| beautiful | and the young husband was hurried away, being scarcely permitted to take a parting kiss from his blooming and beautiful bride. |
| blind | Dick, of the Mundingo Counrry, blind mark, about 18 years of ane, says he belongs te the estate Of ee Nichole, dec. of Mantego bay. |
| sick | The young wife had snatched upa,; few of her own and her baby's clothes; the husband, | Openiug Chorus, though sick, had attended to his duty to the last, and es | Song caped penniless with the clothes on his back. |
| free | A free black girl JOSEPHINE, detained by the Police as being diseased; Proprietors and Managers an the Country are kindly requested to have the said Josephine apprehended 'and lodged in the Towa Prison, the usual reward will be paid |
| brave | From that moment the brave Lopez Lara was only occupied in devising means for delivering this notorious criminal into the hvids of justice. |
Table 8: Examples from our dataset that contain biased words. Notice the high levels of noise and OCR errors.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations
✓ A2. Did you discuss any potential risks of your work?
Ethical considerations
✓ A3. Do the abstract and introduction summarize the paper's main claims?
abstract, 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 3, 5
✓ B1. Did you cite the creators of artifacts you used?
3, 5 B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Limitations, ethical considerations
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Ethical considerations
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
3, limitations, appendix
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
3, appendix
## C ✓ **Did You Run Computational Experiments?** 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
5 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
5
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
6
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
3, 5 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
li-etal-2023-incomplete | Incomplete Utterance Rewriting by A Two-Phase Locate-and-Fill Regime | https://aclanthology.org/2023.findings-acl.171 | Rewriting incomplete and ambiguous utterances can improve dialogue models{'} understanding of the context and help them generate better results. However, the existing end-to-end models will have the problem of too large search space, resulting in poor quality of rewriting results. We propose a 2-phase rewriting framework which first predicts the empty slots in the utterance that need to be completed, and then generate the part to be filled into each positions. Our framework is simple to implement, fast to run, and achieves the state-of-the-art results on several public rewriting datasets. | # Incomplete Utterance Rewriting By A Two-Phase Locate-And-Fill Regime
Zitong Li1, Jiawei Li2, Haifeng Tang3**, Kenny Q. Zhu**4∗
, Ruolan Yang5 1,2,4Shanghai Jiao Tong University, Shanghai, China 3China Merchants Bank Credit Card Center, Shanghai, China 5University of California, San Diego, US
[email protected], [email protected] [email protected], [email protected], [email protected]
## Abstract 1 Introduction
In multi-turn dialogues, speakers naturally tend to make heavy use of references or omit complex discourses to save the efforts. Thus natural language understanding models usually need the dialogue history to understand the true meaning of the current utterance. The existence of such incomplete utterances increases the difficulty of modeling dialogues.
![0_image_0.png](0_image_0.png)
Figure 1: An example of utterance rewriting. The phrase in the first red box is coreference, and the second is ellipsis.
The sources of incompleteness of an utterance can be divided into two categories: *coreference* and *ellipsis*. The task for solving these two kinds of incompleteness is called Incomplete Utterance
∗ The corresponding author.
Rewriting incomplete and ambiguous utterances can improve dialogue models' understanding of the context and help them generate better results. However, the existing end-toend models will have the problem of too large search space, resulting in poor quality of rewriting results. We propose a 2-phase rewriting framework which first predicts the empty slots in the utterance that need to be completed, and then generate the part to be filled into each positions. Our framework is simple to implement, fast to run, and achieves the state-of-the-art results on several public rewriting datasets.
Rewriting (IUR). As shown in Figure 1, the third utterance of this multi-turn dialogue is incomplete.
If this utterance is taken out alone without a context, we will not be able to understand what "one" means and where to buy it. The fourth utterance is a rewriting of the third one. We can see that "one" in the third utterance is replaced by "J.K. Rowling's new book". In addition, the place adverbial "from the book store in town" is inserted after "for me".
In today's industry strength dialogue systems and applications, due to stringent requirements on running time and maintenance cost, single-turn models are much more preferred than multi-turn models.
If an incomplete single-turn utterance can be completed, it will be more understandable without the context, and the cost of downstream NLP tasks, such as intention extraction and response generation, will be reduced.
Figure 1 shows that that all the words added in the rewritten utterance except "from" come from the context. Inspired by this, many early rewriting works used pointer networks (Vinyals et al., 2015)
or sequence to sequence models with copy mechanism (Gu et al., 2016; See et al., 2017) to directly copy parts from the context into the target utterance.
More recently, pre-trained language models such as T5 (Raffel et al., 2020) succeeds in many NLP tasks, and it appears that T5 is a plausible choice for utterance rewriting as well. However, IUR task is different from other generation tasks in that new parts typically only need to be added in one or two specific locations in the original utterance. That is, the changes to the utterance are localized. For example, a typical operation is adding modifiers before or after a noun. On the contrary, end-toend text generation models such as T5 may not preserve the syntactic structure of the input, which may cause the loss of important information and the introduction of wrong information into the output, which is illustrated as below (Two examples are generated by T5.).
- Can you buy **J.K. Rowling's new book**? (Losing original structure)
- Can you **publish** new book for me ? (Introducing wrong information)
Another problem of the end-to-end pre-trained models, which generate the rewritten utterances from scratch, is that they generally incur a large search space and are therefore not only imprecise but also inefficient. In order to solve the large search space issue, Hao et al. (2021a) treated utterance rewriting as a sequence tagging task. For each input word, they predict whether it should be deleted and the span that needs to be replaced with. Liu et al. (2020) formulated IUR as a syntactic segmentation task. They predict segmentation operations required on the utterance to be rewritten. However, they still did not take the important step of predicting the site of rewrite, particularly the position within the syntactic structure of the input utterance. If the model can learn the syntactic structure information in the target sentence, it can predict which part of the sentence needs to be modified, i.e., which words need to be replaced and where new words need to be inserted. After that, the model only needs to fill in these predicted positions. These two tasks are relatively simple to perform, and they collectively avoid the above problems. Our approach is based on the above intuition.
In order to effectively utilize the syntactic structure of the sentence to be rewritten, we divide the IUR task into two phases. The first phase is to predict which positions in the utterance need to be rewritten (including coreference and ellipsis). The second phase is to fill in the predicted positions.
In the first phase, we use the sequence annotation method to predict the locations of coreference and ellipsis in the utterance. In the second phase, we take the utterances with blanks as input and directly predict the words required for the blank position. By seperating the original rewriting task into two relatively simple phases, our results show that our model performs the best among recent state-of-theart rewriting models 1.
Our main contributions are as follows.
- A two-phase framework for solving incomplete utterance rewriting task is proposed.
1Complete code is available at https://github.com/
AutSky-JadeK/Locate-and-Fill.
It can complete the Incomplete Utterance Rewriting (IUR) task. (Section 2)
- An algorithm for aligning the two sentences before and after rewriting based on the longest common subsequences (LCS) algorithm. We succinctly and efficiently generated two kinds of data which can be used for predicting the positions to be rewritten (the first phase) and filling the blanks (the second phase) respectively. (Section 2.1.2)
- We have carried out experiments on 5 datasets, and the experimental results show that our two-phase framework achieves state-of-theart results. (Section 3)
## 2 Approach
![1_Image_0.Png](1_Image_0.Png)
Our framework is divided into two phases: **Locating positions to rewrite** and **Filling the blanks**.
Figure 2 is a brief schematic of the framework.
Phase 1 can be done either by heuristic rules or by supervision. Phase 2 can be done with a seq2seq text generation model. We give the details of these phases next.
## 2.1 Locating Positions To Rewrite
We designed an unsupervised and a supervised method to locate positions to rewrite. The two methods are described below.
2.1.1 Unsupervised Rule-based Method We first implement a rule-based method for the first phase of our problem, aiming at predicting the blanks automatically. We looked through thousands of complete utterance examples in Elgohary et al. (2019). Based on our observations and experience, we define six rules for generating two kinds of blanks which are used for resolving coreference and ellpisis in the second phase. The rules for generating blanks are summarized and explained below:
Personal Pronouns: We replace all the personal pronouns (except the first- and second-person pronouns) and their corresponding possessive pronouns with [MASK_r]. This indicates that we will replace these pronouns with some specific noun phrases at second phase.
Interrogatives: We insert [MASK_i] after the interrogative if the whole utterance only contains interrogatives such as what, how, why, when and so forth. [MASK_i] indicates that some additional text span shall be inserted at this location.
That, This: The use of word like "this", "that",
"these" and "those" are commonly used in colloquial language, which becomes a source of ambiguity. Therefore, we deal with the use of these pronouns in following ways:
- Not followed by a noun phrase: In this case, we simply replace the word by [MASK_r].
- Otherwise: We will insert [MASK_i] after the noun phrase.
The+Noun Phrase: We will insert [MASK_i] after the noun phrase.
Other, Another, Else: If the utterance contains these words, it usually indicates that there are people/things additional to what have been mentioned before. Hence, we add a [MASK_i] at the head of the sentence.
Before, After: We insert [MASK_i] after the sentence ended with "before" or "after", which is considered as an incompletion.
## 2.1.2 Supervised Lcs-Based Method
We also design an algorithm based on the Longest Common Subsequence (LCS) algorithm . The sentence to be rewritten X and after rewriting Y are aligned via a sequence labeling model. To obtain the common subsequence, LCS algorithm returns a matrix M which stores the LCS sequence for each step of the calculation. The value of Mi,j indicates the actual LCS length of sequences X[0, i] and Y [0, j]
2. When we trace back from the max value at the corner, the decreases of length show that the sentences have a common token.
2https://en.wikipedia.org/wiki/Longest_common_
subsequence_problem Coreference and ellipsis towards original sentence are extracted through LCS trace back algorithm, which is further labeled as COR and ELL
respectively. Given the tokenized original sentence X and ground truth Y as shown in Figure 3, the rules for labeling are specified as follows:
- The labeling is proceeded from the bottom right to the top left corner of a LCS matrix.
If the current tokens in Xi and Yj are equal, Xi matches part of the LCS and is labeled as O, then we go both up and left (shown in black). If not, we go up or left, depending on which cell has a higher number or lower index j, until we find next matched Xi′ that satisfies Xi′ = Yj′.
- If traversed path from previously matched token pair to newly match pair is a straight up arrow, it indicates that token(s) in interval
(Yj′, Yj )
3is (are) inserted at corresponding position i′in X to complete the original sentence. In this case, token Xi′ is labeled as ELL(shown in orange).
- If two matched pairs in the LCS matrix are joined by paths with corners, interval
(Xi′, Xi) is replaced by (Yj′, Yj ) during rewriting. As a result, coreferenced words are labeled as COR(shown in blue).
![2_image_0.png](2_image_0.png)
Then, we input the pre-processed training data into the BERT-CRF (Souza et al., 2019) model, which is considered as a sequence annotation task.
Using the method described in Section 2.1.2, we obtained the locations of coreferences and ellipses 3(a, b) means an open interval excl. the endpoints a and b.
of each utterance waiting to be rewritten. As shown in Figure 3, we use the BIO format to annotate the sequence. The starting position of the coreference is marked as B-COR (Begin-Coreference), while other positions of the coreference are marked as ICORs (Inside-Coreference). Ellipsis only appears in the middle of two tokens, so we mark the position of the latter token as B-ELL (Begin-Ellipse),
which means that there should be missing words between this token and the previous token, and the subsequent model is required to fill in it.
## 2.2 Blanks Filling
| Can you buy that novel for me ? | Original Sentence | |------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------| | Can you buy [MASK_r] for me [MASK_i] ? Sentence with Blanks Can you buy [MASK_r] for me ? Sub-sentence 1 Can you buy that novel for me [MASK_i] ? Sub-sentence 2 | |
Figure 4: Split the sentence according to the number of blanks in the utterance.
Can you buy **[MASK_r] (that novel)** for me **[MASK_i] ( )** ? Can you buy **[MASK_r]** for me **[MASK_i]** ?
Can you buy that novel for me ? Original Sentence
Sentence with Blanks
with $\Huge\sf\color{red}{Blanks}$ 0 ?
Sentence with Hints
Figure 5: Add hints to blanks.
$$\mathrm{Figure}$$
We use T5-small (Raffel et al., 2020) and bartbase (Lewis et al., 2020) as pre-trained language model (PLM) in phase 2. In this section, we will take T5 as an example to illustrate the process of blanks filling.
We use the T5 model to fill in blanks with two optimizations: adding hints and splitting the current utterance into sub-sentences. The latter can ensure that there is only one blank in the sentence to be filled in the T5 model. The two optimizations are shown in Figure 4 and Figure 5. We transfer the data of each multi-turn dialogue into the format shown in Figure 6, and fine-tune the T5 model.
Input: I heard that J.K. Rowling's new book has been published.
[SEP] Great. I'm going to the bookstore in town. *[SEP]* Can you buy **<extra_id_0> (that book)** for me?
Output: -.5RZOLQJ¶VQHZERRN
Input: I heard that J.K. Rowling's new book has been published.
[SEP] Great. I'm going to the bookstore in town. *[SEP]* Can you buy that book for me **<extra_id_0> ( )** ?
Output: from the bookstore in town Figure 6: Format of fine-tune data of T5.
After fine-tuning, we take the predicted results of BERT-CRF model in Section 2.1.2 as input to get the final blank filled results of T5 model. Finally, the outputs of T5 model are filled back into the blanks of the original sentence to get the rewritten utterance. The same is for the rule-based method.
The blank prediction obtained from it is directly input into the same T5 model (the two optimization methods described in Figure 4 and Figure 5 will also be used) to obtain the output of T5.
## 3 Experiment
In this section, we will introduce our experiment setup and results.
| MuDoCo | CQR | REWRITE | RES | CANARD | |
|----------|-------|-----------|--------|----------|--------|
| Train | 2.39k | 0.52k | 16.00k | 193.77k | 16.88k |
| Dev | 0.29k | 0.06k | 2.00k | 5.10k | 1.79k |
| Test | 0.30k | 0.06k | 2.00k | 5.10k | 2.96k |
| Ave Len | 73.43 | 143.70 | 36.85 | 68.38 | 429.77 |
| % RW | 26.68 | 98.38 | 99.98 | 60.00 | 92.91 |
Table 1: Descriptions of the datasets. "Ave Len" means the average length of context. "% RW" denotes the percentage of samples whose current utterance is actually rewritten.
$${\mathrm{s~to~blanks.}}$$
## 3.1 Datasets
We tested the baseline and our framework on 3 public datasets in English and 2 in Chinese. The statistics are shown in Table 1. The examples are shown in Appendix.
MuDoCo (Martin et al., 2020) has a lower rewriting rate, which makes the rule-based method less accurate in predicting the locations to be rewritten. CQR (Regan et al., 2019) contains imperative dialogues in life (between people or between people and intelligent agents). The sentence patterns are relatively simple, fixed and easy to understand.
REWRITE (Su et al., 2019a) is a Chinese dataset, each dialogue of which contains 3 turns. It is collected from several popular Chinese social media platforms. The task is to complete the last turn.
RES (Restoration-200k) (Pan et al., 2019a) is a large-scale Chinese dataset in which 200K multiturn conversations are collected and manually labeled with the explicit relations between an utterance and its context. Each dialogue is longer than REWRITE. **CANARD** (Elgohary et al., 2019) contains a series of English dialogues about a certain topic or person organized in the form of QA. It has the largest size and the longest context length.
The sentence pattern in CANARD is complex, the understanding is difficult, and the rewriting degree is high.
## 3.2 Baselines
We choose the following strong baselines to compete with our framework.
T5-small model and **T5-base model** 4(Raffel et al.,
2020). We directly take the context and the current utterance as inputs, use the training set to fine-tune the T5 model, and test its end-to-end output on the test set as the result of rewriting the utterance.
BART-base model (Lewis et al., 2020). This is another pre-trained model we used. Its size is close to T5-small. Our model is tested based on these 2 PLMs.
Rewritten U-shaped Network (RUN) (Liu et al.,
2020). In this work, the authors regard the incomplete utterance rewriting task as a dialogue editing task, and propose a new model using syntactic segmentation to solve this task.
Hierarchical Context Tagging (HCT)(Lisa et al.,
2022). A method based on sequence tagging is proposed to solve the robustness problem in rewriting task.
Rewriting as Sequence Tagging (RAST)(Hao et al., 2021b). The authors proposed a novel tagging-based approach that results in a significantly smaller search space than the existing methods on the incomplete utterance rewriting task.
## 3.3 Evaluation Metrics
We use the **BLEU**n score (Papineni et al., 2002)
to measure the similarity between the generated rewritten utterance and the ground truth. Low order n-gram **BLEU**n score can measure precision, while high-order n-gram can measure the fluency of the sentence. We also use the **ROUGE**n score
(Lin, 2004) to measure recall of rewritten utterance.
Rewriting F-scoren (Pan et al., 2019b) is used to examine the words newly added to the current sentence. We calculte Rewriting F-score by comparing words added by the rewriting model with added words in ground truth. It is a widely accepted metric that can better measure the quality of rewriting. In addition to the automatic evaluation method, we also asked human annotators to conduct comparative tests on the rewriting results.
## 3.4 Implementation Details
All of the models are running and evaluated on 2 Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz with 4 NVIDIA GeForce RTX 2080 GPU and a 128GB RAM. Due to the memory constraints of our experimental environment, we adopt T5-small model in the second phase of our framework, and fine-tune it for 20 epochs. All experiments are repeated for 3 times and averaged.
## 3.5 Main Results
In the following section, "Ours-T5" represents our model based on T5-small in phase 2. "Ours-BART"
is our framework based on BART-base in phase 2.
"Ours-rule" is a variant of our method which uses the rule-based method in Section 2.1.1 to generate blanks in phase 1 and T5-small in phase 2. "GoldT5" is the result of directly inputting the sentence with the correct blanks into the T5-small model in phase 2. "Gold-BART" is directly inputting the sentence with the correct blanks into the BARTbase model in phase 2.
Table 2 shows the results of our framework and baselines on CQR and MuDoCo. Compared with CANARD, the two datasets are smaller in size and simpler in sentence structure. Our approach is significantly better than all baselines on all metrics.
For **Rewriting F-score**, our method is 6.37 and 6.63 percentage points higher than the sub-optimal end-to-end T5-small model, respectively. This metric strongly shows that our method can introduce more new words provided in the ground truth (compared with the original sentence). Relatively larger advantages of our model compared with T5-small in **BLEU** and **ROUGE** show that our method based on blank prediction and filling can retain the structure of the original sentence to the greatest extent, so as to retain more correct same information when calculating these two metrics and comparing the two sequences. However, end-to-end T5 model generates the whole rewriting utterance directly, which may lose some information from the original sentence.
The last part of Table 2 shows the results of our framework and baselines on CANARD. Among the three datasets we used, samples in CANARD are the most difficult and the most complex. Our model is superior to other baseline methods in all the experimental metrics. Especially in BLEU score, our method is significantly better than all baselines. As for Rewriting F-score and ROUGE, we found that
Datasets CQR MuDoCo CANARD
Methods F1/2 BLEU1/2 ROUGE1/2/L F1/2 BLEU1/2 ROUGE1/2/L F1/2 BLEU1/2 **ROUGE**1/2/L
HCT 58.6/32.3 64.2/52.9 67.8/47.3/65.6 56.1/49.2 93.0/90.7 94.9/87.8/94.9 33.9/28.4 67.9/61.7 80.1/66.5/79.5
RUN 54.0/29.5 63.1/51.9 67.3/45.1/64.3 44.8/32.0 93.0/90.2 94.4/85.4/94.3 43.8/30.5 70.1/62.2 80.5/62.9/79.0
RAST 60.9/33.7 65.4/53.8 69.0/50.6/67.7 58.9/50.7 92.4/89.9 94.0/84.7/93.8 44.8/30.8 70.5/62.9 80.6/63.8/79.4
T5-small 80.8/72.3 62.3/59.9 84.3/76.8/83.0 62.4/56.7 87.5/79.4 95.0/88.4/94.9 51.5/40.4 70.4/64.1 80.2/66.6/78.1
BART-base 79.4/71.7 61.5/57.4 82.0/74.4/81.9 60.8/54.9 85.6/78.4 93.9/87.3/93.7 52.3/41.2 68.8/62.6 78.9/65.5/77.0
Ours-rule 65.0/57.7 69.5/65.2 72.3/60.3/69.7 60.4/47.4 83.0/78.3 92.3/80.6/92.0 51.8/40.5 70.9/64.6 80.8/67.0/79.0
Ours-T5 87.5/80.3 88.6/85.8 91.2/83.9/89.9 68.8/62.5 95.6/94.1 96.1/89.5/96.1 53.4/41.4 77.5/70.1 82.8/68.3/**81.1**
Ours-BART 86.1/78.3 86.9/83.9 90.1/81.8/88.4 66.6/61.5 94.3/92.8 94.3/87.8/95.0 53.1/40.9 76.5/69.6 82.0/67.4/80.0
Gold-T5 89.3/82.9 91.3/89.0 93.6/88.0/93.2 75.9/69.7 97.4/96.3 97.8/92.2/97.8 58.2/47.9 80.1/71.3 86.2/70.0/83.1 Gold-BART 89.0/82.4 90.7/88.2 92.3/87.1/92.4 72.6/67.0 95.6/94.5 95.5/90.1/94.8 57.7/47.5 79.9/71.0 85.6/69.4/82.2
the performance of end-to-end T5 model is close to our method. This is because the generative T5 model is very powerful and can generate fluent sentences. However, our 2-phase framework can better predict which positions in the current sentence should be rewritten, which can not be achieved by the end-to-end model. In the following analysis, we will further analyze this point.
An important reason why our framework is better than baselines on CQR and MuDoCo is that CQR mainly contains dialogues that users are asking agents for help. The positions and forms of words that can be added are relatively fixed, such as adding place adverbials. Samples in MuDoCo are basically imperative dialogues in daily life. It also has the same feature, which makes our model easier to learn. The results in Section 3.7 can also illustrate this point. The accuracy of the first phase of our framework is higher on CQR and MuDoCo.
Table 3 shows the results of our framework and baselines on Chinese datasets REWRITE and RES.
Due to the better performance of BART in Chinese texts, our model is mainly tested based on BARTbase rather than T5-small in these two datasets.
These two PLMs have similar sizes. HCT, RUN
and RAST perform well on these two datasets. Because these two datasets have few turns and simple contents, they have been studied a lot in previous work. However, their performance is not as good as that of BART-base. This shows the great potential
| Datasets | REWRITE | RES | | | | |
|------------|-----------|-----------|----------------|-----------|-----------|----------------|
| Methods | F1/2 | BLEU1/2 | ROUGE1/2/L | F1/2 | BLEU1/2 | ROUGE1/2/L |
| HCT | 79.3/74.2 | 92.7/90.2 | 94.4/89.3/93.5 | 73.2/67.1 | 92.1/91.7 | 93.4/88.8/92.8 |
| RUN | 80.5/75.0 | 93.5/90.9 | 95.8/90.3/91.3 | 72.9/66.9 | 92.0/89.1 | 92.1/85.4/89.5 |
| RAST | 77.8/72.5 | 90.5/88.3 | 94.7/88.9/92.9 | 71.8/65.3 | 89.7/88.8 | 91.1/84.2/87.8 |
| BART-base | 81.2/76.0 | 93.9/90.8 | 95.2/91.8/92.4 | 75.0/69.7 | 92.8/88.7 | 92.6/88.2/90.3 |
| Ours-rule | 79.1/73.8 | 90.2/87.8 | 93.3/90.6/91.4 | 72.3/65.8 | 90.5/86.3 | 90.4/86.1/88.5 |
| Ours-BART | 83.4/79.1 | 94.7/92.8 | 96.0/92.2/93.7 | 76.4/70.5 | 94.3/91.9 | 95.3/89.8/91.4 |
| Gold-BART | 85.6/80.9 | 95.6/93.3 | 94.7/92.6/92.8 | 80.8/73.2 | 95.0/91.2 | 95.8/91.4/92.1 |
of using PLMs directly in rewriting tasks. Compared with BART-base, our model has improved in BLEU score and ROUGE score. This shows that our method is also effective in Chinese. And when different PLMs are used as frameworks, the results can be improved.
In the Table 4, our model is compared with T5 model for end-to-end prediction. It can be observed that the word "there" is not considered to be replaced by the end-to-end model, which is due to the fact that the position to be rewritten is not obviously predicted. Our two-phase framework can make up for this. The sequence annotation model indicates that "there" is a part that needs to be replaced, so the T5 in the second phase can be predicted correctly. This is our advantage over the end-to-end model. More case studies are shown in appendix.
| Context |
|-----------|
| A: yogi berra B:major leagues A:what team signed him ? B:berra was called up to the yankees and played his first game on september 22 , 1946 ; | |
|--------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------|
| Current | A: how long was he there ? |
| Gold | A: how long was yogi berra with the yankees ? |
| Ours-sup | A: how long was yogi berra at the yankees ? |
| T5-small | A: how long was yogi berra there ? |
## 3.6 Human Evaluation
| Win | Tie | Loss | |
|--------------------|-------|--------|------|
| Ours v.s. HCT | 0.66 | 0.10 | 0.24 |
| Ours v.s. RUN | 0.50 | 0.16 | 0.34 |
| Ours v.s. Rule | 0.70 | 0.10 | 0.20 |
| Ours v.s. T5-small | 0.46 | 0.16 | 0.38 |
Table 5 shows the results of human evaluation on CANARD. For each pair of competing models, 50 pairs of rewriting results were randomly sampled from the testset for comparative testing. A total of 200 questions were randomly assigned to 5 human volunteers on average. Each person needs to choose the better one from the prediction results of the two models. As can be seen from the table, our method is significantly stronger than RUN, HCT,
and the rule-based method in Section 2.1.1. When compared with the end-to-end T5-small model, our advantage is relatively small. After observing the feedback of human annotators, we find that the end-to-end model has the advantage of direct generation and can generate more complete and fluent sentences. Our method only generates the words needed in blank, which lacks a certain degree of sentence fluency. However, our 2-phase framework can accurately predict the positions that need to be rewritten in the current sentence, which is beyond the reach of the end-to-end model (see appendix for specific analysis). Taken together, our method should be even better.
## 3.7 Ablation Tests
| Variant | F1 | F2 | B1 | B2 | R1 | R2 | RL |
|--------------|------|------|------|----------------|----------------|------|------|
| Ours-T5 | 53.4 | 41.4 | 77.5 | 70.1 | 82.8 68.3 81.1 | | |
| w/o LCS 52.7 | 40.7 | 76.2 | 68.6 | 82.5 67.7 81.1 | | | |
| w/o split | 50.4 | 39.2 | 76.6 | 67.5 | 82.5 65.2 78.8 | | |
| w/o hint | 52.1 | 40.2 | 76.7 | 69.4 | 82.4 67.7 80.8 | | |
Table 6 shows the results of end-to-end ablation test on CANARD. We can see that by replacing LCS algorithm with greedy algorithm, the experimental results have decreased to a certain extent, which shows the effectiveness of LCS algorithm.
On the other hand, due to the diversity of experimental data, the matching algorithm can only approach the correct results, and can not guarantee the complete correctness. Greedy algorithm is also a substitute. Our greedy algorithm is described as follows.
We use 2 pointers to traverse the current utterance and ground truth utterance. The pointers point to the current word in each of the two utterances. If they cannot be matched, the pointer of the ground truth will advance to the next matching position and stop, and the scanned span will be marked as an "ellipsis". If no match can be made until the end, the pointer of the current occurrence moves forward one bit and adds the previous position to the span of "coreference".
If we remove the two optimizations of splitting sentences according to the number of blanks and adding hint from our framework, there will be more obvious decline. The reason is that splitting sentences can keep more syntactic information in sentences, and multiple blanks will make sentences look "full of loopholes". Adding hint will prompt the original words in the language model in phase 2, so as to provide more information. For example, if our hint is "he", the model will not tend to fill in a female name or other things here.
![6_image_0.png](6_image_0.png)
Table 7 shows the F1-score of our LCS based algorithm and greedy based algorithm in predicting the locations that need to be rewritten (that is, the first phase in the 2-phase framework). They are trained and tested on the sequence annotation data generated by their own methods. We can see that the algorithm based on LCS has better effect.
## 3.8 Time Cost Evaluation
Table 7 shows the results of training and predicting time on CANARD. In Section 3.5, we found that
| Model | Phase 1 | Phase 2 | Total Time | |
|---------------------|-----------|-----------|--------------|----------|
| Ours-T5 | 9m17s | 4h31m20s | 4h40m37s | |
| T5-small | - | 4h2m54s | 4h2m54s | |
| (a) Training time. | | | | |
| Model | Phase 1 | Phase 2 | Total Time | Ave Time |
| Ours-T5 | 24s | 18m10s | 18m34s | 0.20s |
| T5-small | - | 24m32s | 24m32s | 0.26s |
| (b) Inference Time. | | | | |
our model has the least advantage over the endto-end T5-small model. Therefore, in this section, we compare their time consumption. In Table 7a, under the same configuration, we found that our method would take more time to fine-tune. This is understandable because although there are only 5571 samples in the testset of canard dataset, we will segment sentences according to the number of blanks. Even if there are sentences without any blanks, this optimization also leads to an increase in the number of samples to 6569. Interestingly, in the inference time, Table 7b shows that our model takes less time. This may be because our model does not need to generate a whole sentence, but only needs to fill in the blank, which is much shorter than a complete utterance. Due to the short time of BERTCRF, our method only takes 11.9% more time than the end-to-end T5 model, and the overall size of the model is almost the same as other training requirements. Therefore, we believe that even a small increase in results can illustrate the effectiveness of our method.
## 3.9 Comparison With Chatgpt
In this section, we will present the results of comparison with ChatGPT 5. Dialogue systems are useful in many tasks and scenarios. Rewriting utterances is particularly useful when a light-weight dialogue model which only takes the last utterance as input is desirable. This is exactly where very large models such as ChatGPT cannot help, not to mention the various woes of current ChatGPT such as the cost of deployment, slow inference speed, and privacy issues. Therefore, we believe that it is not fair to compare ChatGPT with the kind of rewriting technology that we are advocating in this 5https://chat.openai.com/
paper, and the latter still has its merits.
![7_image_0.png](7_image_0.png)
Please complete the following incomplete sentence completion
![7_image_1.png](7_image_1.png)
task. Given the context of the conversation and incomplete sentences to be rewritten, you need to complete the sentences to be rewritten so that they can be understood out of context.
Please do not change the words in the sentence to be rewritten
![7_image_2.png](7_image_2.png)
![7_image_3.png](7_image_3.png)
If you understand, I will give you some tasks.
Figure 8: A prompt designed to allow ChatGPT to do rewriting task.
The scale of ChatGPT or is at least 3 orders of magnitude larger than the models we use in this paper, which means this is not a fair comparison.
Nevertheless, we still conducted the following supplementary experiments on ChatGPT. The prompt we used is shown in Table 8.
| Methods | F1/2 | BLEU1/2 | ROUGE1/2/L |
|-----------|-----------|-----------|----------------|
| Ours-T5 | 46.6/33.3 | 63.5/53.9 | 67.6/49.4/64.2 |
| ChatGPT | 41.8/23.4 | 45.0/30.1 | 52.2/23.3/46.3 |
Table 8: Experimental results on 30 cases of CANARD.
The experimental results on 30 cases of CANARD is shown in Table 8. Some examples of the results are shown in Table 9. After repeated tries and with the best prompt we can find, ChatGPT is still worse than our method in terms of automatic evaluation metrics. However, by human evaluation, testers think that the rewriting results of ChatGPT are of higher quality (more fluent).
This is no surprise given the tremendous parameter space of ChatGPT.
## 4 Related Work
Early work on rewriting often considers the problem as a standard text generation task, using pointer networks or sequence-to-sequence models with a copy mechanism (Su et al., 2019b; Elgohary et al.,
| did fsb get into trouble for the attack against the account | |
|---------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------|
| Ours-T5 | annapolitovskaya@us provider1 ? why did superstar billy graham return to the wwwf ? Did the perpetrators face consequences for the attack |
| ChatGPT | on Anna Politkovskaya's email? What was the reason for Superstar Billy Graham's return to WWWF? |
Table 9: Examples of ChatGPT and ours on CANARD.
2019; Quan et al., 2019) to fetch the relevant information in the context (Gu et al., 2016). Later, pre-trained models like T5 (Raffel et al., 2020) are fine-tuned with conversational query reformulation dataset to generate the rewritten utterance directly.
Inoue et al. (2022) uses Picker which identifies the omitted tokens to optimize T5.In general, these generative approaches ignore the characteristic of IUR problem: rewritten utterances often share the same syntactic structure as the original incomplete utterances.
Given that coreference is a major source of incompleteness of an utterance, another common thought is to utilize a coreference resolution or corresponding feartures. Tseng et al. (2021) proposed a model which jointly learns coreference resolution and query rewrite with the GPT-2 architecture
(Radford et al., 2019). By first predicting coreference links between the query and context, the performance of rewriting has improved while the incompleteness is induced by coreference. However, this does not work for utterances with ellipisis.
Besides, the performance of the rewriting model is limited by the coreference resolution model.
Recently, some of the work on incomplete utterance rewriting focuses on the "actions" we take to change the original incomplete utterance into a self-contained utterance (target utterance). Hao et al. (2021a) solves this problem with a sequencetagging model. For each word in the input utterance, the model will predict whether to delete it or not, meanwhile, the span of words which need to be inserted before the current word will be chosen from the context. Liu et al. (2020) formulated the problem as a syntactic segmentation task by predicting segmentation operations for the rewritten utterance. Zhang et al. (2022) extracts the coreference and omission relationship directly from the self-attention weight matrix of the transformer instead of word embeddings. Compared with these methods, our framework separates the two phases more thoroughly of predicting the rewriting position and filling in the blanks, and meanwhile, reduces the difficulty of the two phases with the divide and conquer method.
## 5 Conclusion
In this work, we present a new 2-phase framework which includes locating positions to rewrite and filling the blanks for solving Incomplete Utterance Rewriting (IUR) task. We also propose an LCS
based method to align the original incomplete sentence with the ground truth utterance to obtain the positions of coreference and ellipsis. Results show that our model performs the best in several metrics. We also recognize two directions for further research. First, as the performance of our 2-phase framework is often limited by the first phase, we will try to improve the accuracy of locating rewriting positions. Second, it will be useful to study the best way for applying our rewriting model to other downstream NLP tasks.
## 6 Limitations
Our framework is a two-phase process, which has its inherent defects, that is, the results of the second phase depend on the results of the phase 1. Because the sequence annotation algorithm in the first phase cannot achieve 100% accuracy, it will predict the wrong position that should be rewritten when the second phase is followed, which will further lead to the error of the final result.
On the other hand, T5 model is only used to predict the words that should be filled in blank, rather than generate the whole sentence, which may lead to the decline of the overall fluency of the sentence.
## Acknowledgments
This work was generously supported by the CMB
Credit Card Center & SJTU joint research grant, and Meituan-SJTU joint research grant.
## References
Ahmed Elgohary, Denis Peskov, and Jordan BoydGraber. 2019. Can you unpack that? learning to rewrite questions-in-context. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 5918–5924, Hong Kong, China. Association for Computational Linguistics.
Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li.
2016. Incorporating copying mechanism in sequenceto-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1631–
1640, Berlin, Germany. Association for Computational Linguistics.
Jie Hao, Linfeng Song, Liwei Wang, Kun Xu, Zhaopeng Tu, and Dong Yu. 2021a. RAST: Domain-robust dialogue rewriting as sequence tagging. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4913–4924, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Jie Hao, Linfeng Song, Liwei Wang, Kun Xu, Zhaopeng Tu, and Dong Yu. 2021b. RAST: domain-robust dialogue rewriting as sequence tagging. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 4913–4924. Association for Computational Linguistics.
Shumpei Inoue, Tsungwei Liu, Nguyen Hong Son, and Minh-Tien Nguyen. 2022. Enhance incomplete utterance restoration by joint learning token extraction and text generation. *arXiv preprint arXiv:2204.03958*.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,*
ACL 2020, Online, July 5-10, 2020, pages 7871–7880.
Association for Computational Linguistics.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain.
Association for Computational Linguistics.
Jin Lisa, Song Linfeng, Jin Lifeng, Yu Dong, and Gildea1 Daniel. 2022. Hierarchical context tagging for utterance rewriting. In Proceedings of the AAAI
Conference on Artificial Intelligence.
Qian Liu, Bei Chen, Jian-Guang Lou, Bin Zhou, and Dongmei Zhang. 2020. Incomplete utterance rewriting as semantic segmentation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2846–2857, Online. Association for Computational Linguistics.
Scott Martin, Shivani Poddar, and Kartikeya Upasani.
2020. MuDoCo: Corpus for multidomain coreference resolution and referring expression generation.
In Proceedings of the 12th Language Resources and Evaluation Conference, pages 104–111, Marseille, France. European Language Resources Association.
Zhu Feng Pan, Kun Bai, Yan Wang, Lianqiang Zhou, and Xiaojiang Liu. 2019a. Improving open-domain dialogue systems via multi-turn incomplete utterance
restoration. In *Proceedings of the 2019 Conference* on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP
2019, Hong Kong, China, November 3-7, 2019, pages 1824–1833. Association for Computational Linguistics.
Zhufeng Pan, Kun Bai, Yan Wang, Lianqiang Zhou, and Xiaojiang Liu. 2019b. Improving open-domain dialogue systems via multi-turn incomplete utterance restoration. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),
pages 1824–1833, Hong Kong, China. Association for Computational Linguistics.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Jun Quan, Deyi Xiong, Bonnie Webber, and Changjian Hu. 2019. GECOR: An end-to-end generative ellipsis and co-reference resolution model for taskoriented dialogue. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference* on Natural Language Processing (EMNLP-IJCNLP),
pages 4547–4557, Hong Kong, China. Association for Computational Linguistics.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI
blog, 1(8):9.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67.
Michael Regan, Pushpendre Rastogi, Arpit Gupta, and Lambert Mathias. 2019. A dataset for resolving referring expressions in spoken dialogue via contextual query rewrites (cqr). arXiv preprint arXiv:1903.11783.
Abigail See, Peter J. Liu, and Christopher D. Manning.
2017. Get to the point: Summarization with pointergenerator networks. In *Proceedings of the 55th Annual Meeting of the Association for Computational* Linguistics (Volume 1: Long Papers), pages 1073–
1083, Vancouver, Canada. Association for Computational Linguistics.
Fábio Souza, Rodrigo Nogueira, and Roberto Lotufo.
2019. Portuguese named entity recognition using bert-crf. *arXiv preprint arXiv:1909.10649*.
Hui Su, Xiaoyu Shen, Rongzhi Zhang, Fei Sun, Pengwei Hu, Cheng Niu, and Jie Zhou. 2019a. Improving multi-turn dialogue modelling with utterance rewriter.
In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 22–31. Association for Computational Linguistics.
Hui Su, Xiaoyu Shen, Rongzhi Zhang, Fei Sun, Pengwei Hu, Cheng Niu, and Jie Zhou. 2019b. Improving multi-turn dialogue modelling with utterance ReWriter. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*,
pages 22–31, Florence, Italy. Association for Computational Linguistics.
Bo-Hsiang Tseng, Shruti Bhargava, Jiarui Lu, Joel Ruben Antony Moniz, Dhivya Piraviperumal, Lin Li, and Hong Yu. 2021. CREAD: Combined resolution of ellipses and anaphora in dialogues. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 3390–3406, Online. Association for Computational Linguistics.
Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly.
2015. Pointer networks. In Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc.
Yong Zhang, Zhitao Li, Jianzong Wang, Ning Cheng, and Jing Xiao. 2022. Self-attention for incomplete utterance rewriting. In *IEEE International Conference* on Acoustics, Speech and Signal Processing, ICASSP
2022, Virtual and Singapore, 23-27 May 2022, pages 8047–8051. IEEE.
## A Examples Of Datasets
![11_image_0.png](11_image_0.png)
![11_image_1.png](11_image_1.png) ![11_image_2.png](11_image_2.png)
| A | Examples of Datasets | A: betsy devos B: school vouchers | |
|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------|--------------------------------------------------------|
| Context | A: what are the school vouchers ? B: would allow students to attend private schools with public funding . | | |
| Current | A: how do people get them ? | | |
| Gold | A: how do people get the school vouchers ? | | |
| Ours-sup | A: how do people get school vouchers ? | | |
| HCT | A: how do people get private ? | | |
| Dataset | Description and Examples | | |
| MuDoCo | Daily conversation of six domains. Context A: put me on active now . B: you are active now . A: did i miss any calls or messages here ? B: mila called yesterday at 1 am . Current Utterance A: is she on now ? Ground Truth A: is mila on now ? | A: anna ella carroll B: 1850s political career | |
| Context | A: what made anna get into politics ? B: carroll joined the american party ( the know nothing party ) following the demise of the whigs . | | |
| CQR | Task-oriented dialogue between a user and an agent. Context A: What gas stations are here ? B: There is a Chevron . A: That ' s good ! Please pick the quickest route to get there and avoid all heavy traffic ! B: Taking you to Chevron . Current Utterance A: What is the address ? Ground Truth A: What is the address of the gas station Chevron ? | Current | A: where was she when she started the american party ? |
| Gold | A: where was anna ella carroll when she started the american party ? | | |
| Ours-sup | A: where was anna ella carroll when she started the american party ? | | |
| RUN | A: where was anna ella anna ella carroll when she started the american party ? | | |
| REWRITE | Chinese dataset of 3-turn dialogues. | Context | A: real love ( beatles song ) B: early origins |
| Current | A: who originally wrote real love ? | | |
| Gold | A: who originally wrote beatles song real love ? | | |
| Ours-sup | A: who originally wrote real love ? | | |
| T5-small | A: who originally wrote the beatles song real love ? | | |
| Context A: 能给我签名吗 (Could you give me signature?) B: 出专辑再议 (Wait until the album is released.) Current Utterance A: 我现在就要 (I want it now.) Ground Truth A: 我现在就要签名 (I want signature now.) | Table 11: More typical examples extracted from the prediction results on CANARD. | | |
| RES | Chinese dataset of 200K multi-turn conversations in open-domain. Context A: 今天买了一堆桌游有爱玩的可以一起 (Today, I bought a lot of board games. Those who like to play can join me.) B: 我比较喜欢卡卡颂和现代艺术 (I prefer Kakason and modern art.) A: 听说过不过没买 (I heard about it, but I didn't buy it.) B: 我有 (I have it.) Current Utterance A: 一起啊 (Let's play together.) Ground Truth A: 一起玩桌游啊 (Let's play board games together.) | of coreference in the current sentence, but finds the wrong span. From the second example, we can see that RUN's edit based model duplicates the span from the context. Our model uses T5 to find the corresponding span from the context, which is significantly stronger than RUN and HCT. The third example shows the shortcomings of our model. Compared with the end-to-end T5- small model, the first step of our framework failed to predict the need to insert words between "write" and "real", so the second step could not fill in the correct answer. This shows the inherent defect of the 2-step framework, that is, the result of the second step depends on that of the first step, and there | |
| CANARD | Teacher and student talking about news or a person. Context A: anna politkovskaya B: the murder remains unsolved , 2016 Current Utterance A: did they have any clues ? Ground Truth A: did investigators have any clues in the unresolved murder of anna politkovskaya ? | | |
Table 10: Information and examples of 4 datasets.
The brief descriptions, statistics and samples of the datasets are shown in Table 10.
## B Cases In Canard
Table 11 shows some specific examples of rewriting using our model and other baselines. The examples of predicting results of our model, HCT, RUN and T5-small on CANARD dataset are shown from top to bottom. HCT tends to copy the predicted span directly from the context. From the first example, we can find that HCT predicts the correct position of coreference in the current sentence, but finds the wrong span. From the second example, we can see that RUN's edit based model duplicates the span from the context. Our model uses T5 to find the corresponding span from the context, which is significantly stronger than RUN and HCT.
The third example shows the shortcomings of our model. Compared with the end-to-end T5small model, the first step of our framework failed to predict the need to insert words between "write" and "real", so the second step could not fill in the correct answer. This shows the inherent defect of the 2-step framework, that is, the result of the second step depends on that of the first step, and there is a certain gap.
## C Cases In Cqr And Mudoco
As a supplement to case study, we provide more cases from CQR and MuDoCo here in Table 12.
| Context | A: What gas stations are here ? |
|--------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| (CQR) | B: There is a Chevron . |
| Current | A: That ' s good ! Please pick the quickest route to get there and avoid all heavy traffic ! A: That ' s good ! Please pick the quickest route |
| Gold | to get to the gas station Chevron and avoid all heavy traffic ! A: That ' s good ! Please pick the quickest route |
| Ours-sup | to get to the gas station Chevron and avoid all heavy traffic ! |
| RUN | A: a chevron ' s good ! please pick the quickest route to the there and avoid all heavy chevron traffic ! A: where is the closest grocery store B: We are 4 miles away from Whole Foods and from Safeway : which one do you prefer ? A: Safeway . B: Safeway is located at 452 Arcadia Pl . |
| Context (CQR) Current | A: Pick the quickest route to go there and send the info on my screen please A: Pick the quickest route to go to the grocery store |
| Gold | Safeway 4 miles away at 452 Arcadia Pl and send the info on my screen please A: Pick the quickest route to go to the grocery store |
| Ours-sup | Safeway 4 miles away at 452 Arcadia Pl and send the info on my screen please |
| T5-small | A: Pick the quickest route to go to the grocery store Safeway 4 miles away at 452 A: if ray or ron call do not answer . B: i will not answer calls from ray or ron . A: add rob to that list too . B: you do not wish to receive calls from ray , ron or rob today . correct ? A: yep , but if roy calls answer that ! B: i will only answer if roy calls not the others . |
| Current | A: yes he is the only one i want to talk to today . |
| Gold | A: yes roy is the only one i want to talk to today . |
| Ours-sup | A: yes roy is the only one i want to talk to today . |
| HCT | A: yes rob is the only one i want to talk to today . A: who is that calling me now ? |
| Context (MuDoCo) | B: the call is from eric , alex , and kyle . |
| Context (MuDoCo) Current | A: decline the call and tell them all that i will call them back . |
| Gold | A: decline the call and tell eric , alex , and kyle that i will call them back . |
| Ours-sup | A: decline the call and tell eric , alex , and kyle that i will call them back . |
| T5-small | A: decline the call and tell eric, alex, and kyle all |
Table 12: Examples extracted from the prediction results of CQR and MuDoCo. Red words shows the different parts between our framework and baseline.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
6
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract
✗ A4. Have you used AI writing assistants when working on this paper?
I didn't use it.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 3
✓ B1. Did you cite the creators of artifacts you used?
3.1, 3.2
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
They are completely public.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
They are consistent with the intended use.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
They are completely anonymous.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
3.1, Appendix
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
They are shown at the beginning of the section 3.
## C ✓ **Did You Run Computational Experiments?** 3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
3.4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
3.4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
3.4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
3.2 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
popovic-etal-2023-exploring | Exploring Variation of Results from Different Experimental Conditions | https://aclanthology.org/2023.findings-acl.172 | It might reasonably be expected that running multiple experiments for the same task using the same data and model would yield very similar results. Recent research has, however, shown this not to be the case for many NLP experiments. In this paper, we report extensive coordinated work by two NLP groups to run the training and testing pipeline for three neural text simplification models under varying experimental conditions, including different random seeds, run-time environments, and dependency versions, yielding a large number of results for each of the three models using the same data and train/dev/test set splits. From one perspective, these results can be interpreted as shedding light on the reproducibility of evaluation results for the three NTS models, and we present an in-depth analysis of the variation observed for different combinations of experimental conditions. From another perspective, the results raise the question of whether the averaged score should be considered the {`}true{'} result for each model. | # Exploring Variation Of Results From Different Experimental Conditions
Maja Popovic,´
1 Mohammad Arvan,2 Natalie Parde,2 **Anya Belz**1 1ADAPT Centre, School of Computing, DCU, Ireland [email protected] 2Department of Computer Science, University of Illinois Chicago
{marvan3,parde}@uic.edu
## Abstract
It might reasonably be expected that running multiple experiments for the same task using the same data and model would yield very similar results. Recent research has, however, shown this not to be the case for many NLP
experiments. In this paper, we report extensive coordinated work by two NLP groups to run the training and testing pipeline for three neural text simplification models under varying experimental conditions, including different random seeds, run-time environments, and dependency versions, yielding a large number of results for each of the three models using the same data and train/dev/test set splits. From one perspective, these results can be interpreted as shedding light on the reproducibility of evaluation results for the three NTS models, and we present an indepth analysis of the variation observed for different combinations of experimental conditions.
From another perspective, the results raise the question of whether the averaged score should be considered the 'true' result for each model.
## 1 Introduction
Recently there has been a promising surge of interest in reproducibility of NLP models, supported by challenges (Pineau et al., 2021), shared tasks
(Belz et al., 2020), conference tracks (Carpuat et al.,
2022), and even the *Reality Check* theme at this conference. The outcome of this surge in interest has been a flurry of reproducibility studies and related investigations (Belz et al., 2022a; Arvan et al.,
2022a; Chen et al., 2022b). However, the collective findings from these efforts have been alarming.
With interest in reproducibility growing, the evidence is mounting that scores are substantially affected by changes not only to arbitrary factors like random seed and different data splits, but also by incidental factors such as the type of GPU on which an experiment is run, and the run-time environment. In many cases, near-identical scores can be guaranteed only when an experiment is re-run in fully containerised form. In effect, this means that even perfect sharing of information (once regarded as the answer to all our reproducibility problems1
(Sonnenburg et al., 2007)) cannot guarantee identical results in all cases.
All this raises questions about reporting, experimental design and the informativeness of scores regarding the relative merits of different methods. Underlying these is the question of where the boundary lies - seemingly between the two extremes. On the one hand, exploration of methodological variations and reporting of separate scores is part and parcel of method development. On the other hand, arbitrary and incidental factors such as random seed are not part of method development, because they do not generalise to future applications of the same method. For the former, clearly, comparing and reporting different scores is important; for the latter, how to interpret, address or report variation in scores is an open question.
In this paper, we tackle this question by conducting a systematic and comprehensive investigation coordinated across two NLP groups to study the variation of the results across three neural text simplification (NTS) models under many different experimental conditions. We experiment with different random seeds, run-time environments, and dependency versions to ensure broad coverage of our study. We observe that reporting average score and its coefficient of variation is a more reliable standard than reporting the maximum value, and we urge researchers to record all methodological conditions, control incidental ones, and abstract away arbitrary factors to promote the reproducibility of their scientific contributions.
## 2 Task And Experimental Set-Up
Our starting point for this exploration is the first neural text simplification system reported by Nisioi et al. (2017). This work was selected because it is suitable for our purposes: the authors provided a repository2 which contains comprehensive information about the original work and the resources, thus facilitating repeat runs of their experiments and exploration of variation on their experimental conditions, which is not often the case for NLP papers.
Moreover, the work has been reproduced before
(Cooper and Shardlow, 2020; Popovic and Belz ´ ,
2021; Popovic et al. ´ , 2022; Belz et al., 2022a; Arvan et al., 2022b) as part of the REPROLANG 2020
(Branco et al., 2020) and ReproGen 2021/2022
(Belz et al., 2021, 2022b) shared tasks, which represents another reference point to choose it.
In the following subsections we describe the four different systems (§2.2), the single data set/split and four text processing variants (§2.3), and the two evaluation methods (§2.4) which were included in our exploration, either because they were part of the original study or because we added them. §2.5 provides an overview of the incidental and arbitrary variation arising in our different runs which we also analysed.
## 2.1 Task Background
Briefly, text simplification aims to transform a specified text into a simpler form while retaining the same meaning. This is potentially useful for a broad range of real-world applications, because it makes the text readable and understandable for wider audiences and also easier to process by automatic NLP tools. The notion of simplicity itself may be tied to a variety of factors ranging from lexical complexity to content coverage or sentence/document structure. Automatic text simiplification (ATS) can be rule-based or data-based.
Many data-based techniques approach the task of simplifying text by adopting methods from machine translation (MT), which is also the case for our experiments. Our work does not seek to develop innovations in ATS specifically, but rather to use ATS models as a convenient case study for studying variation of results. Nonetheless, we provide this background to facilitate fuller understanding of the problem scope and goals of the reproduced systems.
## 2.2 Systems
Nisioi et al. (2017)'s original work is one of the first which explored neural networks for ATS (neural ATS, or NTS). They used Long Short-Term Memory (LSTM) recurrent neural networks with attention in an encoder-decoder architecture. Two models were trained: one standard neural MT model
(which we call LSTM), and one (LSTM-w2v) using external pre-trained word2vec word representations (Mikolov et al., 2013). All their experiments were carried out using the openNMT tool3(Klein et al., 2017). The used version is the initial version based on LuaTorch,4released in December 2016.
The authors provided information about all necessary external libraries and specific Python and Lua dependencies, and also released the two models they trained (LSMT and LSTM-w2v). It is worth noting that the source code uses Python 2.7 and Torch. The Python environment uses older versions of openNMT, NLTK, and gensim. This version of openNMT is no longer maintained and most of the libraries and dependencies have become obsolete, and it is therefore advised not to use this version anymore but to switch to one of the two newer ones (openNMT-py based on PyTorch or openNMT-tf based on TensorFlow). Therefore, it has become extremely challenging to recreate the same environment to regenerate and retrain the models using the released source code.
Other than variation in the libraries and environments, we conduct a random search for the LSTM
models using the original repository. In this scenario, all the hyper-parameters are kept the same except the random seed. Knowing that the random seed affects the weight initialisation, the data order used in training, and the sampling used in the generation, we suspected that we might observe a wide range of results.
Given that LSTM models generally have been superseded by transformer models (Vaswani et al., 2017), we additionally trained a transformer model on the data provided by the authors, using another publicly available tool, Sockeye.5 We used two versions of the tool: the first version, based on MXNet (Hieber et al., 2018), and the newest (third)
version based on PyTorch (Hieber et al., 2022). We treat these two versions as two different systems using the same model type. Thus to summarise, our systems are:
- **LSTM/OpenNMT:** Nisioi et al. (2017)'s LSTM neural MT model implemented as the first version of the OpenNMT tool.
- **LSTM-w2v/OpenNMT:** Nisioi et al. (2017)'s LSTM neural MT model, using external pre-trained word2vec representations implemented as the first version of the OpenNMT
tool.
- **Transformer/Sockeye v1 (MXNet):** Our updated version of the NTS model, using a transformer model instead of an LSTM, implemented as the first version of the Sockeye tool based on MXNet.
- **Transformer/Sockeye v3 (PyTorch):** Our updated version of the NTS model, using a transformer model instead of an LSTM implemented as the newest (third) version of the Sockeye tool based on PyTorch.
We report results achieved under numerous conditions for each of these systems, ensuring broad coverage and supporting the robustness of the investigation.
## 2.3 Data Set And Text Processing
Nisioi et al.'s (2017) repository contains the preprocessed data set, but not the original data nor the pre-processing scripts. Their data set was a popular corpus of parallel English Wikipedia and Simple English Wikipedia (EW-SEW) articles (Hwang et al., 2015), and we used the same data for our experiments. The corpus statistics for the parallel data in both the training and tests sets are presented in Table 1. We report the number of sentences and words and the overall vocabulary size for each partition (original/simplified × train/test) of the data.
In the original paper, it is reported that Named Entities were treated separately: they were first identified, then replaced by an 'unknown' symbol for the training, and for generating output, each
'unknown' symbol was replaced by the word with the highest probability score from the attention layer. However, no scripts or guidelines were provided for it. Also, it was not mentioned that the words were segmented into sub-word units, which is nowadays the standard for all state-of-the-art neural systems. Word segmentation enables better coverage of large vocabularies and treatment
| original | simplified | | |
|------------|--------------|-----------|-----------|
| sentences | 284,677 | | |
| train | words | 7,401,589 | 5,635,507 |
| vocabulary | 212,292 | 165,170 | |
| sentences | 360 | | |
| test | words | 8,110 | 7,957 |
| vocabulary | 3,209 | 2,802 | |
of rare and unseen words. The standard word segmentation method for the Sockeye tool is byte-pair encoding (BPE) (Sennrich et al., 2016), which is one of the most widely used segmentation methods.
According to the Sockeye guidelines, segmentation is performed after the original text is tokenised. In our experiments, we explored both original and additionally tokenised data, both with BPE word segmentation.
After generating outputs with our transformer models, sub-word units are joined together to form original words. This is usually followed by a detokenisation step. However, since the outputs of the original models are all tokenised, we evaluated both versions: tokenised and detokenised. Finally, due to lack of special treatment of named entities, the transformer outputs contain a number of 'unknown' symbols, referring to unseen sub-word units. We computed metric scores for two versions of the output: with 'unknown' symbols left in place, and with 'unknown' symbols removed.
## 2.4 Evaluation
We performed automatic evaluation of generated outputs using the script provided by the authors which calculates two metrics: BLEU (Papineni et al., 2002) and SARI (Xu et al., 2016). Previous work also explored differences arising from different BLEU implementations (Popovic and Belz ´ ,
2021), but these are not relevant to present purposes. BLEU is based on matching between the generated text and a manually simplified reference text, while SARI compares the generated text both to the reference text as well as to the original text.
## 2.5 **Methodological, Arbitrary And Incidental** Variations
Table 2 provides an overview of the experimental conditions (first column) for which we explored
| Condition | Explanation | Range of values explored |
|-----------------------------------------------------------------------------|--------------------------------------------|--------------------------------------------------------------|
| Methodological variation Model type different types of neural architectures | LSTM, transformer | |
| Implementation | different implementations of same model | two versions of the Sockeye tool |
| Preprocessing | different types of text processing | word segmentation, tokenisation, treatment of named entities |
| Arbitrary variation Random seed | weights initialization, data order, and sampling used in generation | 36 different initialisations |
| Incidental variation Dependency versions | changes in external libraries/dependencies | Python, Lua, NLTK |
| Run-time environment | where/when the experiment was carried out | evaluation, test, training |
Table 2: Summary of different experimental conditions explored in our runs (see in text for explanation of three broad categories).
variation. The conditions are grouped into three categories: (i) methodological factors, i.e., variation in the methods used in a solution for a task with the aim of improving performance, where better performance can to some degree be expected to generalise to similar types of tasks; (ii) arbitrary factors where an arbitrary (often random) selection is made with respect to a given parameter; and (iii) incidental factors, where selection is not under the direct control of the system creators, e.g., changes from one version of a dependency to another. All of these conditions may be reasonably expected to vary during replication experiments.
Methodological factors may occur when the group replicating a given model decides to update some component of its design based on recent findings. An example in our own work reported here is the inclusion of the transformer-based model, based on the recent success of these models for a wide range of NLP tasks in the time since Nisioi et al. (2017)'s publication.
Arbitrary factors may occur due to underreporting of necessary parameters in the original work. For instance, if a hyper-parameter must be specified in order for the model to run but no specifications are provided by the model creators, the group replicating the work may select that hyperparameter randomly or using their own heuristic.
Incidental factors may occur due to library or package updates, rendering the versions reported in the original publication obsolete. It also may occur in different run-time environments, for example running experiments on different computers.
By including each of these factors in our study, we sought to ensure broad coverage of the range of results variation that may realistically occur when attempting to replicate a previously reported model.
## 3 Results
We report the results from both team A and team B, for each of the studied conditions. While both teams struggled to get the original repository to a working state, team A failed to install all the required dependencies as many are deprecated. Team B reported similar concerns about reproducing and reusing the original source code; however, ultimately, they managed to get the repository to a running state.
Table 3 shows the two automatic scores generated by the evaluation script provided by the authors for all explored variations (see Table 2),
grouped together by system: LSTM, LSTM-w2v, Transformer Sockeye v1 and Transformer Sockeye v3. Where they exist, results provided by the authors of the original paper are included as well.
For random seed search, we included two worst and best-performing models in this table, while full results of this search can be found in Appendix.
Averaged scores for each of the three models together with the standard deviations and coefficients of variation (Belz et al., 2022a) are presented in Table 4. For each of the models, 'all' refers to the average value of all scores for this model presented in Table 3. For the LSTM model, 'random seed' is averaged only over the random seed scores, and 'other' is averaged over all scores except the random seed scores. For the transformer model,
'v1' means only the scores from version 1, and 'v3' means only the scores from version 3.
| Training | Outputs | Scores (original script) | | | | | |
|----------------|----------------|----------------------------|---------------|-----------------|---------|--------------|---------------|
| System | trained by | on data set | generated by | post-processing | SARI | BLEU | run by |
| LSTM/ | N et al, 2017 | original | N et al, 2017 | original | 30.65 | 84.51 | N et al, 2017 |
| OpenNMT | N et al, 2017 | original | N et al, 2017 | original | 30.65 | 85.60 | team A, 2022 |
| N et al, 2017 | original | N et al, 2017 | original | 30.65 | 84.51 | team B, 2022 | |
| N et al, 2017 | original | team A, 2021 | original | 29.96 | 86.61 | team A, 2022 | |
| N et al, 2017 | original | team B, 2022 | original | 29.96 | 86.53 | team B, 2022 | |
| team B, 2022 | original | team B, 2022 | original | 30.23 | 88.81 | team B, 2022 | |
| team B, 2022 | original | team B, 2022 | original | 28.68 | 84.47 ‡ | team B, 2022 | |
| team B, 2022 | original | team B, 2022 | original | 29.76 | 89.59 † | team B, 2022 | |
| team B, 2023 | original | team B, 2023 | original | 29.53 | 88.68 | team B, 2023 | |
| LSTM-w2v/ | N et al, 2017 | original | N et al, 2017 | original | 31.11 | 87.50 | N et al, 2017 |
| OpenNMT | N et al, 2017 | original | N et al, 2017 | original | 31.11 | 89.36 | team A, 2022 |
| N et al, 2017 | original | N et al, 2017 | original | 31.11 | 87.50 | team B, 2022 | |
| N et al, 2017 | original | team A, 2021 | original | 29.12 | 89.64 | team A, 2022 | |
| N et al, 2017 | original | team B, 2022 | original | 29.12 | 89.40 | team B, 2022 | |
| team B, 2022 | original | team B, 2022 | original | 29.70 | 87.04 | team B, 2022 | |
| team B, 2023 | original | team B, 2023 | original | 29.74 | 88.56 | team B, 2023 | |
| Transformer/ | BPE joined | 32.67 | 84.66 | team A, 2022 | | | |
| Sockeye v1 | +'unk' removed | 32.67 | 89.75 | | | | |
| team A, 2022 | original+BPE | team A, 2022 | | | | | |
| (MXNet) | +detokenised | 32.64 | 84.00 | | | | |
| +detok+'unk' | 32.70 | 88.45 | | | | | |
| BPE joined | 32.54 | 80.32 | team A, 2022 | | | | |
| +'unk' removed | 32.54 | 86.15 | | | | | |
| team A, 2022 | tokenise+BPE | team A, 2022 | +detokenised | 32.86 | 83.52 | | |
| +detok+'unk' | 32.90 | 88.55 | | | | | |
| Transformer/ | BPE joined | 28.41 | 91.82 | team A, 2022 | | | |
| Sockeye v3 | +'unk' removed | 28.40 | 93.74 | | | | |
| team A, 2022 | original+BPE | team A, 2022 | | | | | |
| (PyTorch) | +detokenised | 32.66 | 90.95 | | | | |
| +detok+'unk' | 32.70 | 92.45 | | | | | |
| BPE joined | 29.50 | 88.30 | team A, 2022 | | | | |
| +'unk' removed | 29.49 | 89.97 | | | | | |
| team A, 2022 | tokenise+BPE | team A, 2022 | +detokenised | 32.94 | 91.00 | | |
| +detok+'unk' | 32.94 | 91.72 | | | | | |
According to the averaged SARI score, the transformer model performs best; however, the newest version performs worse than the old one. According to the averaged BLEU score,6 LSTM-v2w and Transformer have very similar performance, but the newest version of the transformer is the best of all while the first version is the worst.
We used the R package *cvequality* (Version 0.2.0;
(Marwick and Krishnamoorthy, 2019)) to test for significant differences of coefficients of variation
(CV). This package implements two of the most widely used statistical significance tests, proposed by Feltz and Miller (1996) and Krishnamoorthy and Lee (2014). The null hypothesis for each of the two automatic metrics is that there is no difference in CV between the three models.
We use the results reported in the Table 4 corresponding to the row 'all' for the three model variants. Conducting the two tests resulted in the statistical significance values shown in Table 5. We observe that neither test statistics nor p-value suggest statistical significance when setting α = 0.05.
Therefore, we cannot reject the null hypothesis.
## 4 Discussion
Nisioi et al. (2017) reported that using pre-trained word embeddings improves the model's performance. Results in Table 3 and Table 4 suggest that while this may be true, the differences are too small to draw clear conclusions. For one model alone, the LSTM variant, we have observed BLEU
scores ranging from 84.47 to 89.59; the average, on the other hand, is 87.90 with the CV of 1.36. Compared to LSTMs, transformer models have a higher variance in their performance. This can be attributed to the transformer's complexity and the fact that they are harder to train. Also, variations in tokenisation were included only in the transformer models. The performance difference between the best and worst transformer models is even higher
| SARI | BLEU | | | | | | |
|-------------|--------|-------|------|-------|-------|------|------|
| model | avg. | dev. | CV | avg. | dev. | CV | |
| LSTM | all | 29.38 | 0.48 | 1.66 | 87.64 | 1.39 | 1.59 |
| random seed | 29.24 | 0.31 | 1.07 | 87.90 | 1.18 | 1.36 | |
| other | 30.23 | 0.51 | 1.74 | 86.07 | 1.66 | 2.00 | |
| LSTM-w2v | all | 30.14 | 0.98 | 3.35 | 88.43 | 1.12 | 1.31 |
| transformer | all | 31.78 | 1.75 | 5.58 | 88.47 | 3.83 | 4.40 |
| v1 | 32.69 | 0.13 | 0.43 | 85.71 | 3.32 | 3.99 | |
| v3 | 30.88 | 2.18 | 7.29 | 91.24 | 1.69 | 1.91 | |
| test | BLEU | SARI |
|----------------------|-------------|-------------|
| Feltz & Miller | 3.54 / 0.16 | 2.98 / 0.22 |
| Krishnamoorthy & Lee | 1.72 / 0.42 | 1.59 / 0.44 |
than LSTM variants. With a 13.42 BLEU score difference, assessing *true* performance of the model is a challenging task. Judging the results by the average BLEU score (Table 4), we can observe that the transformer model trained using v3 of the Sockeye tool outperforms the rest of the models. This model achieves an average BLEU of 91.24 with a CV of 1.91. To put the CV into context, this value is higher than three other LSTM variants but lower than the rest of the transformer models. As it can be expected, using an averaged performance metric and CV enables a better comparison between models in different conditions.
Besides the mentioned analysis, we found it hard to provide distinct and unique observations from the results. This is likely due to the fact that the results are not conclusive and the variance is high.
We do not believe this is a flaw in our experimental design but rather a good representation of the complexities of comparing different models across varying conditions. The number of experiments conducted in this study is more than 60, a number that exceeds the number of experiments conducted in most other studies by a large margin.
One of the concerning issues we encountered is the issue of software deprecation. While this is not a new problem, and it is as old as software itself, it is becoming more and more prevalent. This is due to extreme reliance on empirical results and the complexity of publications that utilise neural networks. Often source codes use several external libraries and dependencies, any of which may become deprecated at any time. Increased availability of source code and the abundance of tools are signs of a healthy research community. Seeing new tools and libraries developed and improved daily is encouraging. At the same time, we believe researchers should practice caution when introducing new tools and libraries into their experiments, as doing so may shorten the usability of their source code.
## 4.1 Addressing Experimental Variation In Experimental Design
Many factors can affect the results of an experiment. Some of these factors are under the experimenter's control, and some are not. Before we address these variations, we highlight that scientific experiments are developed as a counterpart to abstraction of real-world problems. Data sets are created with this in mind, consisting of training, validation, and test sets of which the latter, in particular, is created to represent unseen real-world data.
Research on improving the generalisation of machine learning algorithms is another good example of leveraging scientific experiments to understand real-world challenges.
We can use another analogy to explore these variations further. Bogosort is a sorting algorithm that generates random permutations of the input until the input is sorted. While in the best case, it may take O(n) steps to sort the input, its worst-case performance is unbounded, making it impractical to use. Theoretically, it is possible to find the random seed that achieves best-case performance for a specific input; nonetheless, the slightest change in hardware, environment, or even the input itself will render this seed useless. Although neural networks are far more complicated than a simple sorting algorithm, the basis of reliance on the evidence is the same. Similar to Bogosort, recording all the random numbers used in an experiment is possible
(Chen et al., 2022a), but the question is: should we? We do not think so. Instead of optimising the random seed or other arbitrary factors, researchers should focus on the methods that minimize the impact of these variables. Ultimately, we believe the correct approach for conducting scientific experiments is to thoroughly report methodological variations, control incidental variations, and abstract away arbitrary variations.
## 5 Conclusions
In this work, we conducted a series of experiments for a single task using the same data under different experimental conditions. We categorized these conditions into three different categories: methodological, arbitrary, and incidental. We report the results of our experiments to demonstrate the wide results variation that can occur due to these factors.
We propose that researchers should record all methodological conditions, control incidental ones, and abstract away arbitrary factors. Lastly, we observed that using average score and its coefficient of variation (CV) instead of the maximum value provides far more reliable results. We recommend that researchers adopt this practice when documenting the findings from their own studies.
We are aware that this is easier said than done.
We are, however, optimistic that the field can move closer to this ideal over time. In the meantime, it is our hope that this recommendation highlights the contrast between what is currently a common practice (unfortunately, inadequate recording and reporting that do not address necessary factors for reproducibility) and what is needed to support successful, reproducible research in our field.
## Limitations
Our work is limited by several factors. First, our findings are supported only by experiments on a single NLP task (neural text simplification). We selected this task because it offered an intriguing sandbox for studying varying experimental conditions, ranging from differences in random seeds to modifications in compile-time and run-time environments and dependency versions. Comparing the multifaceted outcomes arising from these experiments facilitated greater quantified estimations of the degree of reproducibility for the selected NTS
systems. However, the dimensions of variation that we explored in this work are common to many NLP
tasks; none are unique only to text simplification.
Because of this, we believe that our findings would generalise broadly across NLP tasks.
We used a single data set, the same as in the original paper by Nisioi et al. (2017), to foster controlled study of our other experimental variables.
The data set comprises aligned sentences between English Wikipedia and Simple English Wikipedia.
Thus, it is unclear whether our findings would be similar if the study was conducted using data from other languages, including those with richer morphology such as Czech or Arabic.
Finally, although we conducted a robust set of experiments for the selected models across two research groups, our experiments are limited to a small set of NTS models due to the extensive set of conditions tested for each model. Although these models vary in their architecture, we do not know if other NTS models may be more or less stable across experimental conditions. Taken together, the limitations accompanying our findings suggest compelling avenues for future research.
## Ethics Statement
This research was guided by a broad range of ethical considerations, taking into account factors associated with environmental impact, equitable access, and reproducibility. We summarize those that we consider most critical in this section. It is our hope that by building a holistic understanding of these factors, we develop improved perspective of the challenges associated with reproducibility studies and the positive broader impacts that improved reproducibility standards may promote.
Environmental Impact. In this work, we seek to study the complex and murky relationship between experimental conditions and experimental outcomes. To address research questions surrounding this relationship, we conduct many experimental runs to replicate the same models across an extensive set of variable conditions. Although necessary for justifying our claims, a downside of this process is that it may produce environmental harm. One might argue that the advantages of assurance that the 'true' evaluation score is found do not outweigh the disadvantages of repeatedly running models that are known to produce large carbon footprints (Strubell et al., 2019). We attenuate this risk by controlling for as many variables allowable (e.g., data set and architectural variations)
while still fostering robust study of our core question, to minimize the number of experimental runs required.
Equitable Access. A concern closely related to environmental impact is that of equitable access to this line of research. By studying a problem that requires many repeated experimental runs with subtle variations, we may exclude disadvantaged researchers from performing meaningful follow-up studies, since they may not have the requisite resource bandwidth (Bommasani et al., 2021, §5.6).
However, although reproducibility studies themselves may pose a barrier to entry for researchers with limited access to compute hardware, the innovations *resulting* from these studies (e.g., improved community standards for reproducibility of reported results) may stand to greatly benefit marginalised researchers, by minimising the potential for bottlenecks in attempting to perform impossible and costly replications to establish performance baselines.
Reproducibility. To ensure reproducibility of our own work, we report all experimental parameters, computational budget, and computing infrastructure used. We discuss our experimental setups in depth, as they are the primary focus of this study.
We report descriptive statistics about our results to enhance transparency of our findings, and we report all implementation settings (e.g., package version number) needed to successfully replicate our work.
Although reproducibility studies are not specified as an intended use of the referenced systems (Nisioi et al., 2017), this use is compatible with the original access conditions and the authors have consented to the paper's use in numerous reproducibility studies since its publication (Belz et al., 2022b).
## Acknowledgements
This research was conducted with the financial support of Science Foundation Ireland under Grant Agreement No. 13/RC/2106_P2 at the ADAPT
SFI Research Centre at Dublin City University.
ADAPT, the SFI Research Centre for AI-Driven Digital Content Technology, is funded by Science Foundation Ireland through the SFI Research Centres Programme.
## References
Mohammad Arvan, Luís Pina, and Natalie Parde. 2022a.
Reproducibility in computational linguistics: Is source code enough? In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2350–2361, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Mohammad Arvan, Luís Pina, and Natalie Parde. 2022b.
Reproducibility of *Exploring Neural Text Simplification Models*: A Review. In *Proceedings of the 15th* International Natural Language Generation Conference (INLG 2022), Waterville, ME.
Anya Belz, Shubham Agarwal, Anastasia Shimorina, and Ehud Reiter. 2020. ReproGen: Proposal for a shared task on reproducibility of human evaluations in NLG. In Proceedings of the 13th International Conference on Natural Language Generation, pages 232–236, Dublin, Ireland. Association for Computational Linguistics.
Anya Belz, Maja Popovic, and Simon Mille. 2022a.
Quantified reproducibility assessment of NLP results.
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 16–28, Dublin, Ireland. Association for Computational Linguistics.
Anya Belz, Anastasia Shimorina, Shubham Agarwal, and Ehud Reiter. 2021. The ReproGen shared task on reproducibility of human evaluations in NLG:
Overview and results. In *Proceedings of the 14th* International Conference on Natural Language Generation, pages 249–258, Aberdeen, Scotland, UK.
Association for Computational Linguistics.
Anya Belz, Anastasia Shimorina, Maja Popovic, and ´
Ehud Reiter. 2022b. The 2022 reprogen shared task on reproducibility of evaluations in nlg: Overview and results. In *Proceedings of the 2022 ReproGen* Shared Task on Reproducibility of Evaluations in NLG (ReproGen 2022), pages 1–9, Waterville, Maine.
Association for Computational Linguistics.
Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S.
Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, S. Buch, Dallas Card, Rodrigo Castellon, Niladri S. Chatterji, Annie S.
Chen, Kathleen A. Creel, Jared Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren E. Gillespie, Karan Goel, Noah D. Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas F. Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, O. Khattab, Pang Wei Koh, Mark S. Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent,
Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir P. Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Benjamin Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, J. F. Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Robert Reich, Hongyu Ren, Frieda Rong, Yusuf H. Roohani, Camilo Ruiz, Jack Ryan, Christopher R'e, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishna Parasuram Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei A. Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, and Percy Liang. 2021.
On the opportunities and risks of foundation models.
ArXiv.
António Branco, Nicoletta Calzolari, Piek Vossen, Gertjan Van Noord, Dieter van Uytvanck, João Silva, Luís Gomes, André Moreira, and Willem Elbers. 2020. A
shared task of a new, collaborative type to foster reproducibility: A first exercise in the area of language science and technology with REPROLANG2020. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 5539–5545, Marseille, France. European Language Resources Association.
Marine Carpuat, Marie-Catherine de Marneffe, and Ivan Vladimir Meza Ruiz, editors. 2022. Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Seattle, United States.
Boyuan Chen, Mingzhi Wen, Yong Shi, Dayi Lin, Gopi Krishnan Rajbahadur, and Zhen Ming Jiang.
2022a. Towards training reproducible deep learning models. In *44th IEEE/ACM 44th International Conference on Software Engineering, ICSE 2022, Pittsburgh, PA, USA, May 25-27, 2022*, pages 2202–2214.
ACM.
Yanran Chen, Jonas Belouadi, and Steffen Eger. 2022b.
Reproducibility issues for bert-based evaluation metrics. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*,
pages 2965–2989, Abu Dhabi, United Arab Emirates.
Association for Computational Linguistics.
Michael Cooper and Matthew Shardlow. 2020. CombiNMT: An exploration into neural text simplification models. In *Proceedings of the Twelfth Language Resources and Evaluation Conference*, pages 5588–5594, Marseille, France. European Language Resources Association.
Carol J Feltz and G Edward Miller. 1996. An asymptotic test for the equality of coefficients of variation from k populations. *Statistics in medicine*, 15(6):647– 658.
Felix Hieber, Michael Denkowski, Tobias Domhan, Barbara Darques Barros, Celina Dong Ye, Xing Niu, Cuong Hoang, Ke Tran, Benjamin Hsu, Maria Nadejde, Surafel Lakew, Prashant Mathur, Anna Currey, and Marcello Federico. 2022. Sockeye 3: Fast Neural Machine Translation with PyTorch. arXiv preprint https://arxiv.org/abs/2207.05851v4.
Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton, and Matt Post. 2018. The sockeye neural machine translation toolkit at AMTA 2018. In *Proceedings of the 13th* Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track),
pages 200–207, Boston, MA. Association for Machine Translation in the Americas.
William Hwang, Hannaneh Hajishirzi, Mari Ostendorf, and Wei Wu. 2015. Aligning sentences from standard Wikipedia to Simple Wikipedia. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 211–217, Denver, Colorado. Association for Computational Linguistics.
Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. OpenNMT: Opensource toolkit for neural machine translation. In *Proceedings of ACL 2017, System Demonstrations*, pages 67–72, Vancouver, Canada. Association for Computational Linguistics.
Kalimuthu Krishnamoorthy and Meesook Lee. 2014.
Improved tests for the equality of normal coefficients of variation. *Computational statistics*, 29:215–232.
Ben Marwick and Kalimuthu Krishnamoorthy. 2019.
cvequality: Tests for the equality of coefficients of variation from multiple groups. R package version 0.2.0.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality.
In *Proceedings of the 26th International Conference* on Neural Information Processing Systems - Volume 2, NIPS'13, page 3111–3119, Red Hook, NY, USA.
Sergiu Nisioi, Sanja Štajner, Simone Paolo Ponzetto, and Liviu P. Dinu. 2017. Exploring neural text simplification models. In *Proceedings of the 55th Annual Meeting of the Association for Computational* Linguistics (Volume 2: Short Papers), pages 85–91, Vancouver, Canada. Association for Computational Linguistics.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Joelle Pineau, Philippe Vincent-Lamarre, Koustuv Sinha, Vincent Larivière, Alina Beygelzimer, Florence d'Alché Buc, Emily Fox, and Hugo Larochelle.
2021. Improving reproducibility in machine learning research: a report from the neurips 2019 reproducibility program. *Journal of Machine Learning Research*,
22.
Maja Popovic and Anya Belz. 2021. ´ A reproduction study of an annotation-based human evaluation of MT outputs. In Proceedings of the 14th International Conference on Natural Language Generation, pages 293–300, Aberdeen, Scotland, UK. Association for Computational Linguistics.
Maja Popovic, Rudali Huidrom, Sheila Castilho, and ´
Anya Belz. 2022. Reproducing a manual evaluation of simplicity in text simplification system outputs. In International Natural Language Generation Conference (INLG 2022). Association for Computational Linguistics.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016. Neural machine translation of rare words with subword units. In *Proceedings of the 54th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany. Association for Computational Linguistics.
Soren Sonnenburg, Mikio L Braun, Cheng Soon Ong, Samy Bengio, Leon Bottou, Geoffrey Holmes, Yann LeCunn, Klaus-Robert Muller, Fernando Pereira, Carl Edward Rasmussen, et al. 2007. The need for open source software in machine learning. *Journal* of Machine Learning Research, 8:2443–2466.
Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645–3650, Florence, Italy. Association for Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st Conference on Neural Information Processing Systems (NeurIPS
2017), pages 5998–6008, Long Beach, CA.
Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification.
Transactions of the Association for Computational Linguistics, 4:401–415.
## A Lstm Random Seed Search
| Variant | Perplexity | SARI | BLEU |
|---------------|--------------|--------|--------|
| nts_search_3 | 10.30 | 28.68 | 84.47 |
| nts_search_14 | 10.49 | 28.62 | 85.04 |
| nts_search_24 | 10.25 | 28.94 | 85.28 |
| nts_search_10 | 10.26 | 28.88 | 86.69 |
| nts_search_16 | 10.45 | 29.60 | 86.81 |
| nts_search_20 | 10.22 | 29.02 | 86.95 |
| nts_search_4 | 10.63 | 29.78 | 87.14 |
| nts_search_2 | 10.27 | 29.34 | 87.19 |
| nts_search_31 | 10.34 | 29.31 | 87.21 |
| nts_search_17 | 10.13 | 29.40 | 87.42 |
| nts_search_23 | 10.31 | 29.19 | 87.51 |
| nts_search_0 | 10.37 | 28.95 | 87.75 |
| nts_search_15 | 10.33 | 28.96 | 87.77 |
| nts_search_25 | 10.21 | 29.62 | 87.81 |
| nts_search_39 | 10.24 | 28.83 | 87.81 |
| nts_search_38 | 10.32 | 29.28 | 87.84 |
| nts_search_36 | 10.29 | 29.11 | 87.86 |
| nts_search_33 | 10.39 | 28.99 | 87.94 |
| nts_search_22 | 10.32 | 29.02 | 88.28 |
| nts_search_37 | 10.20 | 29.24 | 88.36 |
| nts_search_29 | 10.33 | 29.31 | 88.42 |
| nts_search_26 | 10.16 | 29.18 | 88.42 |
| nts_search_1 | 10.32 | 29.17 | 88.58 |
| nts_search_32 | 10.24 | 29.15 | 88.59 |
| nts_search_19 | 10.43 | 29.29 | 88.61 |
| nts_search_18 | 10.49 | 29.50 | 88.64 |
| nts_search_11 | 10.30 | 28.98 | 88.68 |
| nts_search_12 | 10.24 | 29.55 | 88.69 |
| nts_search_21 | 10.30 | 29.89 | 88.75 |
| nts_search_41 | 10.38 | 29.32 | 88.83 |
| nts_search_13 | 10.12 | 29.59 | 88.98 |
| nts_search_35 | 10.39 | 29.14 | 89.01 |
| nts_search_34 | 10.34 | 29.30 | 89.03 |
| nts_search_28 | 10.34 | 29.15 | 89.14 |
| nts_search_30 | 10.16 | 29.71 | 89.42 |
| nts_search_27 | 10.21 | 29.76 | 89.59 |
Table 6: Results of the full random seed search, with perplexity, SARI, and BLEU scores reported for each variant.
We provide the full experimental results from the random seed search in this appendix. For each variant, we include its perplexity, SARI, and BLEU
score.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Left blank.
A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Left blank. |
ocampo-etal-2023-playing | Playing the Part of the Sharp Bully: Generating Adversarial Examples for Implicit Hate Speech Detection | https://aclanthology.org/2023.findings-acl.173 | Research on abusive content detection on social media has primarily focused on explicit forms of hate speech (HS), that are often identifiable by recognizing hateful words and expressions. Messages containing linguistically subtle and implicit forms of hate speech still constitute an open challenge for automatic hate speech detection. In this paper, we propose a new framework for generating adversarial implicit HS short-text messages using Auto-regressive Language Models. Moreover, we propose a strategy to group the generated implicit messages in complexity levels (EASY, MEDIUM, and HARD categories) characterizing how challenging these messages are for supervised classifiers. Finally, relying on (Dinan et al., 2019; Vidgen et al., 2021), we propose a {``}build it, break it, fix it{''}, training scheme using HARD messages showing how iteratively retraining on HARD messages substantially leverages SOTA models{'} performances on implicit HS benchmarks. | # Playing The Part Of The Sharp Bully: Generating Adversarial Examples For Implicit Hate Speech Detection
Nicolás Benjamín Ocampo1, Elena Cabrio1**, Serena Villata**1 1Université Côte d'Azur, CNRS, Inria, I3S, France
{nicolas-benjamin.ocampo,elena.cabrio,serena.villata}@univ-cotedazur.fr
## Abstract 1 Introduction
Research on abusive content detection on social media has primarily focused on explicit forms of hate speech (HS), that are often identifiable by recognizing hateful words and expressions.
Messages containing linguistically subtle and implicit forms of hate speech still constitute an open challenge for automatic hate speech detection. In this paper, we propose a new framework for generating adversarial implicit HS
short-text messages using Auto-regressive Language Models. Moreover, we propose a strategy to group the generated implicit messages by their complexity levels (EASY, MEDIUM,
and HARD categories) characterizing how challenging these messages are for supervised classifiers. Finally, relying on (Dinan et al., 2019; Vidgen et al., 2021), we propose a "build it, break it, fix it", training scheme using HARD
messages showing how iteratively retraining on HARD messages substantially leverages SOTA
models' performances on implicit HS benchmarks.
The spread of offensive content and hate speech
(HS) is a severe and increasing problem in online social communities. While in the last years numerous studies in the Natural Language Processing community have proposed computational methods to address the spread of malicious content, they tend to over-rely on overt and explicit forms of HS, neglecting more implicit and veiled ones (e.g., *"I'm either in North Florida or Nigeria sometimes I can't tell the difference."* from the White Supremacy Forum Dataset (WSF) (de Gibert et al., 2018)). Implicit HS contains expressions of coded or indirect language that does not immediately denote hate but still disparages a person or a group based on protected characteristics such as race, gender, cultural identity, or religion (ElSherief et al., 2021). Implicitness goes beyond wordrelated meaning, implying figurative language such as irony and sarcasm, generally hiding the real sense, making it more challenging to grasp sometimes even for humans. From a computational perspective, current SOTA models fail to properly detect implicit and subtle HS messages, as peculiar features connected to sentiment, inference, context and irony, as well as complex syntactic structures, cannot be properly understood (ElSherief et al.,
2021; Ocampo et al., 2023).
In order to improve the automated detection of HS messages, a few recent studies focus on obtaining more targeted diagnostic insights for current NLP models by systematically providing means of creating HS adversarial examples (Röttger et al.,
2021; Kirk et al., 2022; Hartvigsen et al., 2022) and more guided training strategies aiming to identify veiled HS implications (Dinan et al., 2019; Vidgen et al., 2021; Nejadgholi et al., 2022; Sarwar and Murdock, 2022). However, most of these approaches obtain implicit adversarial instances i)
scraping posts from the web, causing data disproportion and spurious hate correlations, ii) performing perturbations of input sentences neglecting text variety for training, and *iii)* manually creating such messages, which require high-annotation costs and experienced crowdsourcers.
In this work, we propose a new framework for generating on-scale close-to-human-like adversarial implicit HS texts using the pre-trained language model (PLM) GPT3 (Brown et al., 2020), which is known to output biased and hateful content
(Sheng et al., 2019; Gehman et al., 2020). Although such hateful messages pose real threats, we use this inadmissible behavior to improve existing hate classifiers, pushing forward the research to systematically neutralize implicit hateful messages. While the proposed approach follows the ALICE model (Hartvigsen et al., 2022), that combines demonstration-based prompting and already trained HS classifiers to generate adversarial messages, in our work we go beyond it further developing a generation framework for implicit HS detection, that implements a variant of constrained beam search decoding through novel soft-constrains approaches. We rely on auto-regressive PLMs that play the role of a bully challenging a HS classifier on implicit messages. Given an implicit hateful prompt, we encourage generations to be more implicit and adversarial by i) guiding generation with demonstration-based prompts and implicit messages, ii) soft-constraining the generation probabilities in such a way that the output text is "similar" to the demonstration examples occurring in the prompt, *iii)* minimizing classification scores of implicit hate detectors, iv) weighting generation if offensive or implicit words of an HS lexicon are used, and v) determining the optimal number of input sentences to generate instances that are hard to classify.
Additionally, we present a *build it, break it, fix* it approach inspired by (Dinan et al., 2019; Vidgen et al., 2021), grouping implicit HS adversarial examples into three categories: EASY, MEDIUM,
and HARD, according to their challenging level.
Then, we incrementally retrain SOTA models on implicit HS detection on these three groups showing how HARD generated messages improve SOTA
models' performances substantially on ISHate, a collection of HS benchmarks annotated with implicit HS labels (Ocampo et al., 2023).1 NOTE: This paper contains examples of language which may be offensive to some readers.
They do not represent the views of the authors.
## 2 Related Work
In the following, we first report on the most significant research work on abusive language and hate speech detection carried out in the Natural Language Processing (NLP) community, and then on works describing the generation of adversarial examples to analyze and improve NLP model.
## 2.1 Explicit And Implicit Hs Detection
Many resources and computational methods to detect HS have been proposed in the latest years, such as lexicons (e.g., (Wiegand et al., 2018; Bassignana et al., 2018)), HS datasets and benchmarks (e.g.,
(Zampieri et al., 2019; Basile et al., 2019; Davidson et al., 2017; Founta et al., 2018)), supervised classifiers (e.g., (Park and Fung, 2017; Gambäck and Sikdar, 2017; Wang et al., 2020; Lee et al.,
2019)). These studies have set strong basis to explore the issue of HS and abusive language, in particular in social media messages. However, most of these works do not consider subtle and elusive hateful instances (that use for instance circumlocution, metaphor, or stereotypes), that can be as harmful as overt ones (Nadal et al., 2014; Kanter et al., 2017).
To fill this gap, implicit HS detection has recently caught the interest of the NLP community, and benchmarks containing implicit HS messages have been proposed (Sap et al., 2020; Caselli et al., 2020; ElSherief et al., 2021; Hartvigsen et al., 2022; Wiegand et al., 2021a, 2022; Ocampo et al., 2023).
As for the computational approaches, (Kim et al.,
2022) tackle cross-dataset underperforming issues on HS classifiers and propose a contrastive learning method that encodes implicit hate implications close in representation space. (Nejadgholi et al.,
2022) use Testing Concept Activation Vectors from computer vision to provide a metric called degree of explicitness and update HS classifiers with guided data augmentation. (Han and Tsvetkov, 2020) propose a pipeline to surface veiled offenses without compromising current performances on explicit HS
forms. Finally, (Jurgens et al., 2019; Waseem et al.,
2017; Wiegand et al., 2021b) explain why explicitness, and implicitness are sub-notions of abusiveness and motivate researchers to devise ad-hoc technologies to address them.
## 2.2 Adversarial Generation
An adversarial example is an input designed to fool a machine learning model. Among the works investigating robustness of NLP models to adversarial examples, (Nie et al., 2020) develops the *textattack* framework that unifies multiple adversarial methods made available by the NLP community (e.g.,
(Alzantot et al., 2018; Jia et al., 2019; Li et al.,
2020)) into one system, facilitating their use.
In the context of HS detection, both manually created offensive instances (Röttger et al., 2021; Kirk et al., 2022), or examples generated with autoregressive PLM models (Hartvigsen et al., 2022; Cao and Lee, 2020; Gehman et al., 2020; Sheng et al., 2019) have been used as adversarial attacks.
Adversarial instances can be used in multiple ways to grow wisdom over handling models' misclassification. In our paper, we focus on dynamic
![2_image_0.png](2_image_0.png)
adversarial data collection (DADC) (Dinan et al.,
2019; Kiela et al., 2021; Vidgen et al., 2021; Wallace et al., 2022), where humans create challenging examples to fool SOTA models over many rounds with a stream of ever-improving modelsin-the-loop. This process ideally covers most taskrelevant phenomena, leading to more robust models. As the main limitations of these strategies are the expensive text creation and validation by human annotators, we challenge language models to carry out this task with similar performances.
## 3 Proposed Framework
As introduced before, most of the existing methods to detect abusive language in short text messages rely on supervised approaches that strongly depend on labeled datasets for training. But as observed by (ElSherief et al., 2021; Hartvigsen et al.,
2022), most of the available datasets mainly contain explicit forms of HS, ignoring abusive content expressed in more implicit or subtle ways.
This results in the current methods' poor detection performance on the implicit HS class as the training datasets are highly imbalanced (Ocampo et al.,
2023). To mitigate this issue, we propose a framework to generate on-scale close-to-human-like adversarial implicit HS texts using the pre-trained language model GPT3 (Brown et al., 2020).
Our generation framework is composed by four components (Figure 1): i) a demonstration-based prompt, ii) a search method, *iii)* a goal function, and iv) a set of constraints. From a starting demonstration-based prompt, the PLM completes the prompt with a possible next token at a time in such a way that the final output minimizes a goal function (i.e., indicating whether a message is challenging) and satisfies the constraints. Each next token is obtained through a search method that determines which of those possible next tokens are the most suitable to produce a challenging message. Except for the prompt, the other components depend on the classifier we aim to attack, to target its weaknesses. In the following, we describe each component of our framework, as illustrated in Figure 1.
## 3.1 Prompt Construction
Prompts are text fragments passed as input to the PLM to allow the generator to identify the context of the problem to be solved. Then, depending on how the prompt is written, the returned text will attempt to match the pattern accordingly. While there are several methods for prompting, a promising strategy is demonstration-based prompting (Gao et al., 2021), where example statements are injected into the prompt to push the PLM to generate similar messages. Figure 2 shows a use-case example where five implicit HS messages (shots) against migrants are used as prompts. Before the shots, an instruction is added to provide the PLM with more context on the output to generate. The quality of the generation will generally depend on the suitability of the instructions and the shot examples.
## 3.2 Search Method
Demonstration-based-prompting alone consistently produces HS statements against minority groups
(Hartvigsen et al., 2022). However, there is no guarantee that those messages would be challenging for a specific classifier. Therefore, we provide a variant of constrained beam search (CBS) (An-
![3_image_0.png](3_image_0.png)
derson et al., 2017; Hokamp and Liu, 2017) that implements constraints on the probabilities during beam search. The CBS maximizes at every step the following formula:
$$\lambda_{1}\log p_{L}(w_{i+1}|w_{0}...w_{i})+\tag{1}$$ $$\lambda_{2}\log(1-p_{C_{Imp}}(w_{0}...w_{i+1}))+$$ (2) $$\lambda_{3}\log F_{\rm lex}(w_{i+1})+$$ (3) $$\lambda_{4}\log\texttt{similarity}\left(\frac{1}{n}\sum_{i=i}^{n}S(s_{i}),S(w_{0}...w_{i+1})\right)\tag{4}$$
CBS, among all the possible following words wi+1, considers those which maximize the above expression and use top-k decoding to proceed with the next word. λ1*, ..., λ*4 are hyperparameters that determine how much each term contributes to the sum. Going into the details:
- (1) denotes the classical generation beam search approach where pL(wi+1|w0*...w*i) estimates the conditional probability of the next word wi+1 given the previous ones, w0*...w*i, as context.
- (2) challenges C by calculating pCImp (w0*...w*i+1), the probability of the newly generated sentence to be Implicit HS.
The closer to 0 it is, the harder for the classifier to detect it. At the same time, as C is a 3-label classifier, the above is equivalent to maximizing 1 − pCImp (w0*...w*i+1), the probability of the generated sentence being either Non-HS or Explicit HS.
- (3) weights generation by using an HS lexicon Lex, i.e., a set of pondered words between 0 and 1. We define FLex as the function that, given an input word w, assigns its weighted score in Lex provided that it belongs to the set. Otherwise, it returns 0. Note that this option can be used with
any HS lexicon that matches a word with a score between 0 and 1.
- (4) calculates the mean embedding of the shots in the prompt 1n Pn i=i S(si) and the embedding of the candidate sentence S(w0*...w*i+1) to obtain the *cosine similarity* between these two. Therefore, we expect the newly created instance to be semantically similar to the shots. Note that S can be any text embedding that encodes a statement into a representation space.
## 3.3 Goal Function And Constraints
A goal function takes as input a text message and returns a score specifying how challenging that text is with respect to a classifier. In our case, we opt for a variant of the Targeted Classification goal function (Morris et al., 2020), where we maximize the chances of an input statement being an incorrect label. That constraint is soft-added to the search method of our approach and used to measure if the final generated example is adversarial. For an input message x, a HS classifier should return three probabilities specifying how likely x is to be labeled as Non-HS, Explicit HS, or Implicit HS.
Following (2), the higher 1−PCImp (x) is, the more challenging x is. Therefore, we consider this math expression as our adversarial metric.
As for the constraints, we apply automatic filtering to discard the generated messages with incompleted texts, very short messages (less then 5 tokens), and non-ASCII characters.
## 4 Generation Of Implicit Hs Adversarial Messages
In this section, we report and analyze the experimental results to demonstrate the effectiveness of the proposed framework. First, we list the targeted research questions (Section 4.1), then, we describe the dataset we use in our experiments (Section 4.2),
the experimental setting (Section 4.3), and finally we discuss the obtained results (Section 4.4).
## 4.1 Research Questions
We target the following research questions:
- **RQ1:** Can we generate implicit HS messages with demonstration examples that attack only one protected group (OPG)?
- **RQ2:** Can we generate implicit HS messages with demonstration examples that attack **multiple protected groups (MPG)**?
- **RQ3:** How does each of the weighting terms
(expressions (1) to (4)) perform?
- **RQ4:** Does changing the prompt instructions impact on generation?
- **RQ5:** Is there an optimal number of demonstration examples to use?
## 4.2 The Ishate Dataset
To test our framework, we use the ISHate dataset (Ocampo et al., 2023), a newly created resource that collects messages from 7 available datasets for HS detection covering different topics and different social media platforms (i.e., the White Supremacy Forum Dataset (de Gibert et al.,
2018), HatEval (Basile et al., 2019), Implicit Hate Corpus (ElSherief et al., 2021), ToxiGen
(Hartvigsen et al., 2022), YouTube Video Comments Dataset (Hammer, 2017), CONAN (Chung et al., 2019) and Multi-Target CONAN (Fanton et al., 2021)). Messages in ISHate have been enriched with the following three-layer annotation: HS/non HS, Explicit/Implicit HS and Subtle/Non Subtle HS, obtaining an Inter Annotator Agreement (IAA) of Cohen's Kappa=0.793 for the implicit labels and 0.730 for the subtle ones.
In our experiments we focus on the following annotations: Non-HS, Explicit HS, and Implicit HS
because of the availability of more implicit HS
messages for training data, grounded on a clearer and well-founded definition of implicit content in the literature. Moreover, ISHate collects messages with their corresponding target group. The great majority of the messages are annotated with one target group only. For messages targeting more than one group with offensive content (as Asians and Migrants, or Jews and Women), the label corresponding to the predominant target is selected.
Tables 1 (a) and (b) show the ISHate data distribution and statistics on the targeted groups.
## 4.3 Experimental Setting
In our experiments, we rely on the text completion model GPT3 (Brown et al., 2020), the text-curie-001 version. While this is the second best version of GPT3 after text-davinci-003, it is known for being extremely powerful, with a much faster response time.
The HS classifier we challenge with adversarial attacks is the model considered as SOTA on the ISHate dataset, namely HateBERT. HateBERT
![4_image_0.png](4_image_0.png)
the ISHate dataset (Ocampo et al., 2023).
is a re-trained BERT model using over 1 million posts from banned communities on Reddit (Caselli et al., 2021) and then fine-tuned on the ISHate dataset. HateBERT obtained very promising results on the benchmarks HatEval, OffensEval (Zampieri et al., 2019), and AbusEval (Caselli et al., 2020).
Table 1 (c) reports on the classifier's results on the ISHate dataset (Ocampo et al., 2023). As for the *sentence similarity* model, used by the search method to compute the cosine similarity between the generated text and the shot examples (point (4),
Section 3.2), we used all-MiniLM-L6-v22from sentence-transformers. It has been trained on a 1B sentence pairs dataset to be used for information retrieval, clustering, and sentence similarity tasks.
Regarding the HS lexicon used by the search method to calculate the weights (point (3), Section 3.2), we opted for weighting the ISHate vocabulary, inspired by the HATE score proposed in (de Gibert et al., 2018), which relies on the Pointwise Mutual Information (PMI). PMI calculates the correlation of each expression concerning the categories they belong. However, unlike their approach, we use ISHate's Explicit HS and Implicit HS labels to calculate the correlation of ISHate terms to those categories. We aim to assign words a weight concerning their implicitness. Then in CBS, a candidate's next word, which is used in more implicit contexts, should score higher in our framework than in explicit contexts. Equation 5 shows the dif-2https://huggingface.co/sentence-transformers/
all-MiniLM-L6-v2
ImpScore ImpScore
resister .974 maga .028
paint .974 deport .045
lucky .974 fuck .049 plot .970 fucking .051
economically .965 potus .063
offices .965 alien .065 honour .965 burn .066 shirt .965 bunch .067 colonized .965 death .076
eggs .965 invasion .082
orchestrated .965 illegals .085 google .965 sick .104
handed .965 deserve .105
correctness .959 kill .114 celebrate .959 millions .118
ference between the PMI value of a word w and the category implicit, and the PMI of the same word w and the category explicit, resulting in the implicit hate score of w.
ImpScore(w) = PMI(w, ImpHS) − PMI(w, ExpHS) (5)
After that, we apply a sigmoid function to scale the weights between our required range. Table 2 shows that the least ranked words are derogatory and refer to targeted HS groups. On the other hand, the most rated tokens are neutral and can be found on whichever document on the web.
Concerning the research questions listed above, to answer to RQ1, we group the messages directed towards the most attacked targets in the ISHate dataset (see Table 1), namely Muslims, Migrants, Jews, Black People, Women, White People, and Asians. From these groups, we randomly select demonstration shots to be used to make a prompt, as shown in Figure 2. This prompt is passed as input to the generator in order to produce implicit adversarial examples. We generated a total of 20 instances per target group, obtaining 140 messages using only GPT3 without any constraint. Each run was manually validated by the same annotators of (Ocampo et al., 2023) following the ISHate annotation scheme and guidelines, in order to identify how many of these messages i) are actually implicit (% Implicit), ii) maintain the same target group (% Target), and *iii)* fool HateBERT
(% Attacks). Additionally, we calculated how similar the generations were to the prompts in average (x Similarity) and we equally separated the messages into three groups (% Easy), (% Medium),
and (% Hard) according to the goal function score defined in Section 3.3.
For RQ2, we repeat the same procedure without grouping per HS target. Implicit HS shot examples are selected from the ISHate dataset, and used as input prompts, so as to generate 140 adversarial examples. Each generated example is manually validated to calculate the metrics for RQ2 listed above and compare them with the metrics for RQ1.
As for RQ3, we carry out an ablation study adding to GPT3 each of the restrictions of our framework presented in Section 3.2, and then calculating the performances in order to compare the impact of the proposed strategies.
To answer RQ4, we repeat the same experiment we carried out to validate RQ3, but varying the instructions that appeared before the shots in the prompt (see Figure 2):
- I1 = *"These are some examples against <TARGET>. Write one more similar example."*
- I2 = *"Implicit Hate Speech against <TARGET>."* - I3 = Empty instruction.
I1 details the task we want the language model to perform. I2 uses only the name of the label and the target we are attacking. I3 is tested to evaluate the model performance when no instructions are given.
We rely on the results of the ablation study carried out to validate RQ4 to select the best generation approach, meaning the one that generates more implicit and adversarial examples according to the manual validation.
To answer RQ5 the best-performing generator is tested varying the number of shots in the prompt.
Based on pilot generations and in line with
(Hartvigsen et al., 2022), we run our experiments with five shots examples in the prompt to answer from RQ1 to RQ4, together with the following hyperparameters: the number of beams num_beams = 10, and λi = 0.5 for all i = 1*, ...,* 4 giving the same relevance to each constraint.
## 4.4 Obtained Results And Discussion
Table 3 shows the generation results for GPT3 using instruction I1, while Tables 5a and 5b those using I2 and I3. Regarding RQ1, we can see that the generation has very good results when the demonstration examples given as input focus on one target per query, obtaining 72% of implicit HS messages and maintaining the same target group as the one in the prompt. However, it decreases drastically when multiple targets per query are used (RQ2).
![6_image_0.png](6_image_0.png)
Table 3: Generation results with GPT3 using I1
![6_image_1.png](6_image_1.png)
MPG:(1) is the method that performs the worst with only 17% of HS implicit messages obtained, the lowest similarity to the shots, and among these 17%
messages, 83% led to concrete misclassification attacks. Also, during manual validation, most of the generated messages resulted in neutral or explicit cases with swear words and offensive words towards one of the groups mentioned in the prompts.
Possible reasons might be the inherent complexity for GPT3 to "understand" implicit messages, together with our starting premise for using this PLM,
i.e., its social bias against HS groups. Therefore, providing one particular target per query might trigger a specific social bias toward that target and better guide GPT3 toward generating an implicit message. For that reason, we proceed from now using OPG as the prompt construction strategy.
To answer to RQ3, once we consider the classifier's information with OPG:(1)+(2), almost all the obtained implicit cases were hard and caused an attack on HateBERT. However, it compromises the number of implicit HS cases we might obtain. OPG:(1)+(2)+(3)+(4) ends up being the more balanced approach overall, as it can equate with OPG:(1) in terms of the number of implicit instances, improving sentence similarity and still challenges HateBERT. Additionally, Table 4 shows how OPG:(1)+(2)+(3)+(4) is capable of generating implicit challenging examples across the seven experimented target groups.
To answer to RQ4, the same generation experiments were performed with instructions I2 and I3. Similar to I1, Tables 5a and 5b show how OPG:(1)+(2)+(3)+(4) ends up obtaining the most balanced results among the generation strategies. We can see how it has comparable results
![6_image_2.png](6_image_2.png)
with OPG:(1) concerning the obtained number of implicit instances and improvements in sentence similarity and the number of hard attacks to HateBERT. However, neither I2 nor I3 show significant variants in the results obtained with I1. This might suggest that implicit generation depends more on the shot examples provided and the target group of those shots.
Finally, we take OPG:(1)+(2)+(3)+(4), and we repeat the generation experiment varying this time the number of shots provided in order to answer to RQ5. Figure 3 shows a rapid improvement in obtaining implicit HS messages when incrementing from 1 to 10 shots. However, as soon as we surpass this number, we have no further benefits with 20, 30, 40, or even 50 examples. This indicates that using only 10 demonstrations might be suitable enough to get challenging instances with GPT3.
| Generator | (%) Implicit | (x) Similarity | (%) Target | (%) Easy | (%) Medium | (%) Hard | (%) Attacks |
|----------------------------------|----------------|------------------|--------------|------------|--------------|------------|---------------|
| MPG:(1) | .163 | .323 | - | .144 | 0 | .856 | .856 |
| OPG:(1) | .710 | .475 | .944 | .040 | .123 | .837 | .837 |
| OPG:(1)+(2) | .519 | .445 | .917 | .013 | 0 | .987 | .987 |
| OPG:(1)+(2)+(3) | .619 | .440 | .955 | .032 | .021 | .947 | .968 |
| OPG:(1)+(2)+(3)+(4) | .695 | .517 | .963 | .070 | .021 | .909 | .930 |
| (a) Results with instruction I2. | | | | | | | |
| Generator | (%) Implicit | (x) Similarity | (%) Target | (%) Easy | (%) Medium | (%) Hard | (%) Attacks |
| MPG:(1) | .196 | .344 | - | .183 | 0 | .817 | .817 |
| OPG:(1) | .734 | .488 | .935 | .056 | .155 | .780 | .818 |
| OPG:(1)+(2) | .535 | .437 | .941 | .017 | .046 | .937 | .937 |
| OPG:(1)+(2)+(3) | .641 | .459 | .963 | .032 | .027 | .941 | .968 |
| OPG:(1)+(2)+(3)+(4) | .710 | .505 | .947 | .095 | .017 | .888 | .905 |
| (b) Results with instruction I3. | | | | | | | |
Table 5: Generation results with GPT3 using instructions I2, I3 and generator OPG:(1)+(2)+(3)+(4).
## 5 Improving Hs Classifiers On Implicit Messages
In this section, we report on how our generation framework can be used to improve implicit hate speech detection on the ISHate benchmark, relying on a variant of the *build it, break it, fix it* approach
(Dinan et al., 2019; Vidgen et al., 2021).
## 5.1 Build It, Break It, Fix It Strategy
The *build it, break it, fix it* method translates a concept used in engineering to find faults in systems, to machine learning. The breaker would seek for failures in an already built classification model: the more failures the breaker can find, the better the fixes on the model might be.
During the "build it" phase, a machine learning classifier M0 is trained on the training set trainD
of a benchmark D, which defines a certain classification task. This classifier works as the baseline model we aim to improve. During the "breaking" phase, adversarial examples that break the initial model M0 should be created by giving one possible configuration of parameters to our generation framework. For the following round i, i > 1, the same generator must break the model obtained in the previous round Mi−1. The generator is fed with the benchmark training set trainD to generate hard cases through demonstration examples, so that to attack the classifier of the previous step Mi−1. During the "fix it" phase the model Miis updated with the newly generated adversarial data from the "break it" round. Each newly corrected model is evaluated on the test set of the benchmark. At the same time, hyperparameter selection and loss evaluation can be performed through the development set of D.
| Non-HS | Explicit HS | Implicit HS | | | | | | | |
|----------|---------------|---------------|-----|-----|-----|-----|-----|-----|-----|
| M | P | R | F1 | P | R | F1 | P | R | F1 |
| M0 | .90 | .90 | .90 | .83 | .83 | .83 | .50 | .56 | .53 |
| M1 | .93 | .85 | .89 | .81 | .84 | .82 | .43 | .82 | .56 |
| M2 | .93 | .85 | .89 | .81 | .85 | .83 | .45 | .82 | .58 |
| M3 | .93 | .82 | .87 | .81 | .85 | .83 | .36 | .82 | .50 |
## 5.2 Experimental Settings
Goal of these experiments is to improve the classification results of the HateBERT model (described in Section 4.2) on the Implicit HS class, without affecting its performances on the Non-HS and Explicit HS labels. The number of rounds used is R = 3, where each round has a total of 870 human-validated HS implicit adversarial examples. All the instances are scored by the goal function described in Section 3.3 in order to keep only those that are considered being Hard. Also, as our approach generates instances of only one class (i.e., Implicit HS), we randomly picked the same number of safe and explicit instances from ISHate. For the training parameters for the model, we follow (Ocampo et al., 2023), i.e., a batch_size = 2, epochs = 4, lr = 2 × 10−5, and weight_decay = 0.01.
## 5.3 Obtained Results
Table 6 reports the obtained classification results of HateBERT on the ISHate test set after three retraining rounds. All runs show satisfactory performances in the three categories. We note that after the first round, M1 had an increment on the F1-score of the implicit class compared with its predecessor M0 giving more relevance to recall (R)
than precision (P). Consider that the generator's goal function aims to produce an implicit HS message that maximizes the chances of being classified as either explicit or non-hateful, i.e., it forces the classifier to have false negatives. In that sense, after retraining, the SOTA model focuses its attention on prioritizing more recall than precision, as we can see in the results. Also, note that the results obtained on the majority classes are not affected.
This might be a consequence of not only providing adversarial examples on the target class, but also retraining on instances that are non-hateful and explicit from the ISHate dataset. After the second round M2, we see an improvement in the performances of the implicit class. M2 can update its parameters on this retraining round through the provided material, that helped it to correctly classify instances that were previously misclassified by M1.
On the contrary, from the third iteration, while the classification of false negatives improves through the generator's attacks, this forces the classifier to prioritize recall over precision, affecting the overall performance of M3 on the implicit class.
## 5.4 Error Analysis
To gain insights on the model's behavior, we perform a manual error analysis. As mentioned in Section 5.3, most of the errors after the fixing rounds come from false positives. In particular, as HateBERT prioritizes recall over precision, explicit and non-hateful instances labeled as implicit hate started to be more prominent. Among misclassified messages, there are some instances of explicit hate which are considered implicit by HateBERT,
as Example 1.
1. I believe that same sex attraction is a mental disorder that can be cured.
We can also find misclassified messages when commonly used explicit words are used in neutral situations. For example, the word animal, which is used sometimes in a pejorative form, might trigger the classifier to detect it as hateful (see Example 2).
2. *No, they are the fastest land animal on this* continent, so I think they are simply testing their speed.
## 6 Conclusions
In this paper, we have presented a new framework for generating on-scale adversarial implicit texts for HS detection using auto-regressive language models. The proposed framework follows (Hartvigsen et al., 2022) and guides a GPT3 PLM to output messages that fool current HS classifiers on implicit messages. Following (Hartvigsen et al., 2022), we have further developed a variant of constrained beam search decoding, providing a guided generation strategy through i) implicit hate demonstrationbased prompts, ii) scores of implicit hate detectors, iii) generation weights through implicit and explicit words in HS lexicons, and iv) text similarity constraints that compare the used prompt and the expected output. We show how the proposed framework can produce, from a batch of generated messages, 70% of implicit HS messages, where 90% of them result to be hard adversarial cases for a competitive SOTA model on the ISHate benchmark (i.e., HateBERT).
Furthermore, we have proposed a *build it, break* it, fix it, approach that uses the adversarial examples generated by the above described framework to incrementally retrain machine learning models and improve their classification performances.
We showed how adversarial generation leverages the HateBERT classification model on the ISHate dataset by improving false negative classification.
While this strategy may have potential issues, such as cyclical progress, it remains a valuable approach to improve model robustness, accelerate progress, define clear objectives (Dinan et al., 2019; Kiela et al., 2021; Vidgen et al., 2021; Wallace et al.,
2022), and gain a deeper understanding of models' errors as shown in our paper.
As for future work, we plan to explore how to embed implicit and subtle statements properly in representation spaces (Kim et al., 2022; Han and Tsvetkov, 2020), deciphering models for code language (Manzini et al., 2019) and provide bias mitigation strategies for social stereotypes (Sap et al.,
2020).
## Limitations
The main limitation of the proposed framework is its dependency on a reasonable amount of real implicit hate instances to be used as the prompting input material. Obtaining implicit and subtle messages from social media is undoubtedly a challenging and time consuming task. More importantly, another limitation lies in the fact that the proposed framework does not rely on an automatic metric to determine if the generated messages are actually implicit. Therefore, a human-in-the-loop step for validating the obtained newly generated instances is still required. Additionally, there has been mounting pressure to obtain debiased PLMs, which might lead to the generation of less challenging examples.
## Ethics Statements
This paper contains examples of HS from existing linguistic resources for HS detection and which do not reflect the authors' opinions.
While our purpose is to prevent and curate social media resources from HS, our study might still pose a potential misuse case, as our method can be employed to encourage a large language model to generate implicit and subtle instances of hate. However, we still consider that effective classifiers and new data creation/collection methods for this task are necessary to investigate and tackle implicit and subtle online hate speech on scale and prevent the spreading of this harmful content online. Our work aims at making a step towards that objective and encourages the scientific community to investigate these aspects.
## Acknowledgements
This work has been supported by the French government, through the 3IA Côte d'Azur Investments in the Future project managed by the National Research Agency (ANR) with the reference number ANR- 19-P3IA-0002.
## References
Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang.
2018. Generating natural language adversarial examples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2890–2896, Brussels, Belgium. Association for Computational Linguistics.
Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2017. Guided open vocabulary image captioning with constrained beam search. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 936–
945, Copenhagen, Denmark. Association for Computational Linguistics.
Valerio Basile, Cristina Bosco, Elisabetta Fersini, Debora Nozza, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, and Manuela Sanguinetti.
2019. SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 54–63, Minneapolis, Minnesota, USA. Association for Computational Linguistics.
Elisa Bassignana, Valerio Basile, and Viviana Patti.
2018. Hurtlex: A Multilingual Lexicon of Words to
Hurt. In Elena Cabrio, Alessandro Mazzei, and Fabio Tamburini, editors, Proceedings of the Fifth Italian Conference on Computational Linguistics CLiC-it 2018, pages 51–56. Accademia University Press.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language Models are Few-Shot Learners. In *Advances in Neural Information Processing Systems*,
volume 33, pages 1877–1901. Curran Associates, Inc.
Rui Cao and Roy Ka-Wei Lee. 2020. HateGAN: Adversarial generative-based data augmentation for hate speech detection. In *Proceedings of the 28th International Conference on Computational Linguistics*,
pages 6327–6338, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Tommaso Caselli, Valerio Basile, Jelena Mitrovic, and ´
Michael Granitzer. 2021. HateBERT: Retraining BERT for abusive language detection in English. In Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), pages 17–25, Online. Association for Computational Linguistics.
Tommaso Caselli, Valerio Basile, Jelena Mitrovic, Inga ´
Kartoziya, and Michael Granitzer. 2020. I feel offended, don't be abusive! implicit/explicit messages in offensive and abusive language. In *Proceedings* of the Twelfth Language Resources and Evaluation Conference, pages 6193–6202, Marseille, France. European Language Resources Association.
Yi-Ling Chung, Elizaveta Kuzmenko, Serra Sinem Tekiroglu, and Marco Guerini. 2019. CONAN -
COunter NArratives through nichesourcing: a multilingual dataset of responses to fight online hate speech. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*,
pages 2819–2829, Florence, Italy. Association for Computational Linguistics.
Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated Hate Speech Detection and the Problem of Offensive Language. *Proceedings of the International AAAI Conference on* Web and Social Media, 11(1):512–515. Number: 1.
Ona de Gibert, Naiara Perez, Aitor García-Pablos, and Montse Cuadros. 2018. Hate speech dataset from a white supremacy forum. In *Proceedings of the* 2nd Workshop on Abusive Language Online (ALW2),
pages 11–20, Brussels, Belgium. Association for Computational Linguistics.
Emily Dinan, Samuel Humeau, Bharath Chintagunta, and Jason Weston. 2019. Build it break it fix it for
dialogue safety: Robustness from adversarial human attack. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4537–4546, Hong Kong, China. Association for Computational Linguistics.
Mai ElSherief, Caleb Ziems, David Muchlinski, Vaishnavi Anupindi, Jordyn Seybolt, Munmun De Choudhury, and Diyi Yang. 2021. Latent hatred: A benchmark for understanding implicit hate speech. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 345–363, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Margherita Fanton, Helena Bonaldi, Serra Sinem Tekiroglu, and Marco Guerini. 2021. Human-in- ˘
the-loop for data collection: a multi-target counter narrative dataset to fight online hate speech. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics*. Association for Computational Linguistics.
Antigoni Founta, Constantinos Djouvas, Despoina Chatzakou, Ilias Leontiadis, Jeremy Blackburn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large Scale Crowdsourcing and Characterization of Twitter Abusive Behavior. *Proceedings of the International AAAI*
Conference on Web and Social Media, 12(1). Number: 1.
Björn Gambäck and Utpal Kumar Sikdar. 2017. Using convolutional neural networks to classify hate-speech.
In *Proceedings of the First Workshop on Abusive Language Online*, pages 85–90, Vancouver, BC, Canada.
Association for Computational Linguistics.
Tianyu Gao, Adam Fisch, and Danqi Chen. 2021.
Making pre-trained language models better few-shot learners. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 3816–3830, Online. Association for Computational Linguistics.
Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. 2020. RealToxicityPrompts: Evaluating neural toxic degeneration in language models. In *Findings of the Association* for Computational Linguistics: EMNLP 2020, pages 3356–3369, Online. Association for Computational Linguistics.
Hugo Lewi Hammer. 2017. Automatic Detection of Hateful Comments in Online Discussion. In *Industrial Networks and Intelligent Systems*, pages 164–
173, Cham. Springer International Publishing.
Xiaochuang Han and Yulia Tsvetkov. 2020. Fortifying toxic speech detectors against veiled toxicity. In Proceedings of the 2020 Conference on Empirical
Methods in Natural Language Processing (EMNLP),
pages 7732–7739, Online. Association for Computational Linguistics.
Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, and Ece Kamar. 2022.
ToxiGen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection.
In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 3309–3326, Dublin, Ireland.
Association for Computational Linguistics.
Chris Hokamp and Qun Liu. 2017. Lexically constrained decoding for sequence generation using grid beam search. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1535–1546, Vancouver, Canada. Association for Computational Linguistics.
Robin Jia, Aditi Raghunathan, Kerem Göksel, and Percy Liang. 2019. Certified robustness to adversarial word substitutions. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference* on Natural Language Processing (EMNLP-IJCNLP),
pages 4129–4142, Hong Kong, China. Association for Computational Linguistics.
David Jurgens, Libby Hemphill, and Eshwar Chandrasekharan. 2019. A just and comprehensive strategy for using NLP to address online abuse. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 3658–
3666, Florence, Italy. Association for Computational Linguistics.
Jonathan W. Kanter, Monnica T. Williams, Adam M.
Kuczynski, Katherine E. Manbeck, Marlena Debreaux, and Daniel C. Rosen. 2017. A Preliminary Report on the Relationship Between Microaggressions Against Black People and Racism Among White College Students. *Race and Social Problems*,
9(4):291–299.
Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vidgen, Grusha Prasad, Amanpreet Singh, Pratik Ringshia, Zhiyi Ma, Tristan Thrush, Sebastian Riedel, Zeerak Waseem, Pontus Stenetorp, Robin Jia, Mohit Bansal, Christopher Potts, and Adina Williams. 2021.
Dynabench: Rethinking benchmarking in NLP. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4110–4124, Online. Association for Computational Linguistics.
Youngwook Kim, Shinwoo Park, and Yo-Sub Han. 2022.
Generalizable implicit hate speech detection using contrastive learning. In *Proceedings of the 29th International Conference on Computational Linguistics*,
pages 6667–6679, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Hannah Kirk, Bertie Vidgen, Paul Rottger, Tristan Thrush, and Scott Hale. 2022. Hatemoji: A test suite and adversarially-generated dataset for benchmarking and detecting emoji-based hate. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 1352–1368, Seattle, United States. Association for Computational Linguistics.
Ju-Hyoung Lee, Jun-U Park, Jeong-Won Cha, and YoSub Han. 2019. Detecting context abusiveness using hierarchical deep learning. In Proceedings of the Second Workshop on Natural Language Processing for Internet Freedom: Censorship, Disinformation, and Propaganda, pages 10–19, Hong Kong, China.
Association for Computational Linguistics.
Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, and Xipeng Qiu. 2020. BERT-ATTACK: Adversarial attack against BERT using BERT. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6193–6202, Online. Association for Computational Linguistics.
Thomas Manzini, Lim Yao Chong, Alan W Black, and Yulia Tsvetkov. 2019. Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long and Short Papers), pages 615–621, Minneapolis, Minnesota. Association for Computational Linguistics.
John Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. 2020. TextAttack: A framework for adversarial attacks, data augmentation, and adversarial training in NLP. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 119–126, Online. Association for Computational Linguistics.
Kevin L. Nadal, Katie E. Griffin, Yinglee Wong, Sahran Hamit, and Morgan Rasmus. 2014. The Impact of Racial Microaggressions on Mental Health: Counseling Implications for Clients of Color. *Journal of* Counseling & Development, 92(1):57–66. _eprint:
https://onlinelibrary.wiley.com/doi/pdf/10.1002/j.15566676.2014.00130.x.
Isar Nejadgholi, Kathleen Fraser, and Svetlana Kiritchenko. 2022. Improving generalizability in implicitly abusive language detection with concept activation vectors. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5517–5529, Dublin, Ireland. Association for Computational Linguistics.
Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial NLI: A new benchmark for natural language understanding. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 4885–4901, Online. Association for Computational Linguistics.
Nicolas Ocampo, Ekaterina Sviridova, Elena Cabrio, and Serena Villata. 2023. An in-depth analysis of implicit and subtle hate speech messages. In *Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics*,
pages 1997–2013, Dubrovnik, Croatia. Association for Computational Linguistics.
Ji Ho Park and Pascale Fung. 2017. One-step and twostep classification for abusive language detection on Twitter. In *Proceedings of the First Workshop on* Abusive Language Online, pages 41–45, Vancouver, BC, Canada. Association for Computational Linguistics.
Paul Röttger, Bertie Vidgen, Dong Nguyen, Zeerak Waseem, Helen Margetts, and Janet Pierrehumbert.
2021. HateCheck: Functional tests for hate speech detection models. In *Proceedings of the 59th Annual Meeting of the Association for Computational* Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1:
Long Papers), pages 41–58, Online. Association for Computational Linguistics.
Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A. Smith, and Yejin Choi. 2020. Social bias frames: Reasoning about social and power implications of language. In *Proceedings of the 58th* Annual Meeting of the Association for Computational Linguistics, pages 5477–5490, Online. Association for Computational Linguistics.
Sheikh Muhammad Sarwar and Vanessa Murdock. 2022.
Unsupervised domain adaptation for hate speech detection using a data augmentation approach. Proceedings of the International AAAI Conference on Web and Social Media, 16(1):852–862.
Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The woman worked as a babysitter: On biases in language generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3407–
3412, Hong Kong, China. Association for Computational Linguistics.
Bertie Vidgen, Tristan Thrush, Zeerak Waseem, and Douwe Kiela. 2021. Learning from the worst: Dynamically generated datasets to improve online hate detection. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 1667–1682, Online. Association for Computational Linguistics.
Eric Wallace, Adina Williams, Robin Jia, and Douwe Kiela. 2022. Analyzing dynamic adversarial training data in the limit. In Findings of the Association for Computational Linguistics: ACL 2022, pages 202–
217, Dublin, Ireland. Association for Computational Linguistics.
Kunze Wang, Dong Lu, Caren Han, Siqu Long, and Josiah Poon. 2020. Detect all abuse! toward universal abusive language detection models. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 6366–6376, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Zeerak Waseem, Thomas Davidson, Dana Warmsley, and Ingmar Weber. 2017. Understanding abuse: A
typology of abusive language detection subtasks. In Proceedings of the First Workshop on Abusive Language Online, pages 78–84, Vancouver, BC, Canada.
Association for Computational Linguistics.
Michael Wiegand, Elisabeth Eder, and Josef Ruppenhofer. 2022. Identifying implicitly abusive remarks about identity groups using a linguistically informed approach. In *Proceedings of the 2022 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5600–5612, Seattle, United States.
Association for Computational Linguistics.
Michael Wiegand, Maja Geulig, and Josef Ruppenhofer. 2021a. Implicitly abusive comparisons - a new dataset and linguistic analysis. In *Proceedings of the* 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 358–368, Online. Association for Computational Linguistics.
Michael Wiegand, Josef Ruppenhofer, and Elisabeth Eder. 2021b. Implicitly abusive language - what does it actually look like and why are we not getting there? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 576–587, Online. Association for Computational Linguistics.
Michael Wiegand, Josef Ruppenhofer, Anna Schmidt, and Clayton Greenberg. 2018. Inducing a lexicon of abusive words - a feature-based approach. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1046–1056, New Orleans, Louisiana. Association for Computational Linguistics.
Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar.
2019. SemEval-2019 task 6: Identifying and categorizing offensive language in social media (OffensEval). In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 75–86, Minneapolis, Minnesota, USA. Association for Computational Linguistics.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section "Limitations", page 9
✓ A2. Did you discuss any potential risks of your work?
Section "Ethics Statements", page 9
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section "Introduction" page 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** We Use The Ishate Dataset, Section 4.2
✓ B1. Did you cite the creators of artifacts you used?
Section 4.2
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
The dataset is a collection of available datasets.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 4.2
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
The dataset is a collection of available datasets that have properly anonymized.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 4.2
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 4.2
## C ✓ **Did You Run Computational Experiments?** Sections 4.3 And 5.2
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Sections 4.3 and 5.2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Sections 4.3 and 5.2
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Sections 4.4 and 5.3
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Sections 4.4 and 5.3 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
moradshahi-etal-2023-x | {X}-{R}i{SAWOZ}: High-Quality End-to-End Multilingual Dialogue Datasets and Few-shot Agents | https://aclanthology.org/2023.findings-acl.174 | Task-oriented dialogue research has mainly focused on a few popular languages like English and Chinese, due to the high dataset creation cost for a new language. To reduce the cost, we apply manual editing to automatically translated data. We create a new multilingual benchmark, X-RiSAWOZ, by translating the Chinese RiSAWOZ to 4 languages: English, French, Hindi, Korean; and a code-mixed English-Hindi language.X-RiSAWOZ has more than 18,000 human-verified dialogue utterances for each language, and unlike most multilingual prior work, is an end-to-end dataset for building fully-functioning agents. The many difficulties we encountered in creating X-RiSAWOZ led us to develop a toolset to accelerate the post-editing of a new language dataset after translation. This toolset improves machine translation with a hybrid entity alignment technique that combines neural with dictionary-based methods, along with many automated and semi-automated validation checks. We establish strong baselines for X-RiSAWOZ by training dialogue agents in the zero- and few-shot settings where limited gold data is available in the target language. Our results suggest that our translation and post-editing methodology and toolset can be used to create new high-quality multilingual dialogue agents cost-effectively. Our dataset, code, and toolkit are released open-source. | # X-Risawoz: High-Quality End-To-End Multilingual Dialogue Datasets And Few-Shot Agents
♣Mehrad Moradshahi1 ♣Tianhao Shen2 Kalika Bali3 **Monojit Choudhury**3 Gaël de Chalendar4 Anmol Goel5 Sungkyun Kim6 **Prashant Kodali**5 Ponnurangam Kumaraguru5 Nasredine Semmar4 Sina J. Semnani1 **Jiwon Seo**6 Vivek Seshadri3,7 Manish Shrivastava5 Michael Sun1 **Aditya Yadavalli**7 Chaobin You2 ♦Deyi Xiong2 ♦**Monica S. Lam**1 1 Computer Science Department, Stanford University, Stanford, USA 2 College of Intelligence and Computing, Tianjin University, Tianjin, China 3 Microsoft Research India, Bangalore, India 4 Université Paris-Saclay, CEA, List, Palaiseau, France 5International Institute of Information Technology, Hyderabad, India 6 Deep Learning & Big Data Systems Lab, Hanyang University, Seoul, South Korea 7 Karya Inc., India
## Abstract
Task-oriented dialogue research has mainly focused on a few popular languages like English and Chinese, due to the high dataset creation cost for a new language. To reduce the cost, we apply manual editing to automatically translated data. We create a new multilingual benchmark, X-RiSAWOZ, by translating the Chinese RiSAWOZ to 4 languages: English, French, Hindi, Korean; and a code-mixed EnglishHindi language. X-RiSAWOZ has more than 18,000 human-verified dialogue utterances for each language, and unlike most multilingual prior work, is an end-to-end dataset for building fully-functioning agents.
The many difficulties we encountered in creating X-RiSAWOZ led us to develop a toolset to accelerate the post-editing of a new language dataset after translation. This toolset improves machine translation with a hybrid entity alignment technique that combines neural with dictionary-based methods, along with many automated and semi-automated validation checks.
We establish strong baselines for X-RiSAWOZ
by training dialogue agents in the zero- and few-shot settings where limited gold data is available in the target language. Our results suggest that our translation and post-editing methodology and toolset can be used to create new high-quality multilingual dialogue agents cost-effectively. Our dataset, code, and toolkit are released open-source.1
## 1 Introduction
In recent years, tremendous effort has been put into the research and development of task-oriented dialogue agents; yet, it has been mainly focused on only a handful of popular languages, hindering the adoption of dialogue technology around the globe. Collecting dialogue data from scratch for a 1https://github.com/stanford-oval/
dialogues
♣ Co-first authors ♦ Co-corresponding authors new language is ideal but prohibitively expensive and time-consuming, leading to the current lack of reliable multilingual dialogue benchmarks.
In recent years, several non-English taskoriented dialogue (ToD) datasets have been created.
These datasets are either collected from scratch
(Quan et al., 2020; Zhu et al., 2020a), synthesized using a state machine with manually written templates, and paraphrased for fluency by crowdworkers (Lin et al., 2021), or manually translated from another language (Li et al., 2021b). All of these approaches are labor-intensive, costly, and timeconsuming; such investment is unlikely to be made for less widely spoken languages.
This motivates the development of zero and fewshot techniques that can produce a usable agent in a new language with no or only a few gold training dialogues in the target language. Concurrent with this work, Ding et al. (2022); Zuo et al. (2021);
Hung et al. (2022b) adopt a *translation and manual post-editing* process where data is translated with neural machine translation models first, and then post-edited by crowdworkers. This approach has shown promise on MultiWOZ; however, reported zero- and few-shot accuracies show a big degradation in performance compared to full-shot accuracy in the source language. Besides, the performance of the agent in the original language was not good to begin with, in part due to misannotations in the dataset (Eric et al., 2019a). Lastly, these datasets either focus only on the subtask of Dialogue State Tracking (DST) (Ding et al., 2022) or auxiliary tasks such as Response Retrieval (Hung et al., 2022b), or are too small (Zuo et al., 2021) to train end-to-end dialogue agents that require policy, interactions with databases, and response generation components.
Our overall goal is to make task-oriented dialogue research in major languages available to lowresource languages. The key is to produce highquality few-shot training, validation, and test set
Dataset
Few-shot Validation **Test**
# Domains 12 12 12
# Dialogues 100 600 600
# Utterances 1,318 8,116 9,286 # Slots 140 148 148
# Values 658 2,358 3,571
Table 1: Statistics for the few-shot, validation, and test.
with as little manual effort as possible to enable
zero-shot or few-shot training. We describe below
our contributions towards this goal.
## 1.1 Data Translation Techniques And Toolset
Machine translation followed by human postediting has been used as a method for extending monolingual NLP datasets to new languages (Yang et al., 2019; Ziemski et al., 2016; Giannakopoulos et al., 2011; Conneau et al., 2018). However, we discovered human post-editing to be the main pain point in creating new dialogue datasets.
The process is costly, requiring a lot of back-andforth among developers, translators, and annotators.
Even after several rounds, the results are still not adequate. To alleviate this, we devised a scalable methodology and an associated toolkit that automates parts of this process, and aids translators and annotators to iteratively check their work themselves without developer supervision. This allows fast and accurate creation of a new dialogue dataset annotated with slot values for a new language.
We show that the entity-aware translation technique proposed by Moradshahi et al. (2023) is also applicable to other end-to-end dialogue datasets.
We combine this technique with a dictionary-based alignment where multiple translations are generated for each entity individually (i.e. without context), using the same translation model used to translate the sentence. Then, the translated sentence is scanned to match any of the translation candidates, resulting in an improvement in the agent's performance.
Furthermore, we automatically check each step of data translation to ensure annotation consistency between dialogue utterances and API calls to the database. We are releasing this toolkit open-source for reproducibility as well as a resource for others.
## 1.2 X-Risawoz Dataset
We created X-RiSAWOZ, a multi-domain, largescale, and high-quality task-oriented dialogue benchmark, produced by translating the Chinese RiSAWOZ data to four diverse languages: English, French, Hindi, and Korean; and one code-mixed English-Hindi language. X-RiSAWOZ is an improvement over previous works in several aspects:
- **End-to-End**: Contains translations for all parts of dialogue including user and agent utterances, dialogue state, agent dialogue acts, and database results.
- **Larger**: RiSAWOZ is larger than MultiWOZ and covers a total of 11,200 dialogues with 151,982 turns. It also covers 12 domains compared to 7.
In addition to translating validation and test data, we also sample 100 dialogue examples from the training set and translate them using the same process to use as few-shot training data. This way, X-RiSAWOZ can be used to experiment with few-shot techniques as well as zero-shot.
- **Higher Quality**: We choose RiSAWOZ as it exhibits the lowest misannotation rate among popular dialogue benchmarks as shown by Moradshahi et al. (2021). The data translation methodology described above reduces the mismatch between entities in the sentence and annotations, meaning that our translation process does not introduce new misannotations.
- **Cheaper**: First, the methodology and toolset reduce the amount of post-editing effort needed.
Second, instead of using commercial translation systems such as Google Translate, we rely on open-source multilingual translation models such as MBART (Liu et al., 2020) for the translation of training data. This reduces the translation cost by at least 100x which could otherwise be a prohibiting factor when building datasets for new languages.
## 1.3 Experimental Results
We establish strong baseline results for our new X-RiSAWOZ dataset. In the full-shot setting, our model produces a new SOTA on the original Chinese dataset. With few-shot training, across languages, our model achieves between 60.7-84.6%
accuracy for Dialogue State Tracking (DST), 38.070.5% accuracy for Dialogue Act (DA), and 28.546.4% for BLEU score when evaluated using gold data as the conversational context. Cumulatively over a conversation, our model achieves 17.2%,
11.9%, 11.3%, 10.6%, and 2.3% on Dialogue Success Rate (DSR), respectively. The remaining gap between zero or few-shot results on new languages and the full-shot results on Chinese creates opportunities for research and finding new techniques to further improve the dialogue agent performance.
## 2 Related Work 2.1 Multilingual Dialogue Datasets
MultiWOZ (Budzianowski et al., 2018; Ramadan et al., 2018; Eric et al., 2019b), CrossWOZ (Zhu et al., 2020a), and RiSAWOZ (Quan et al., 2020)
are three monolingual Wizard-Of-Oz multi-domain dialogue datasets for travel dialogue agents. For the 9th Dialog System Technology Challenge (DSTC9) (Gunasekara et al., 2020), MultiWOZ was translated to Chinese and CrossWOZ was translated to English using Google Translate. A portion of their evaluation and test sets were post-edited by humans, while the training set remained entirely machine translated. Moradshahi et al. (2021) translated RiSAWOZ to English and German using open-source machine translation models with alignment. However, the validation and test data were not verified by humans, resulting in potentially over-estimating the accuracy of agents. Several works (Ding et al., 2022; Zuo et al., 2021; Hung et al., 2022a) continued translation of MultiWOZ to other languages. For example, GlobalWOZ translates to several languages, with human translators post-editing machine-translated dialogue templates, and filling them with newly collected local entities.
However, these works address only one or two subtasks of a full dialogue, and therefore training an end-to-end agent is not possible with them.
Different from these translation-based approaches, Lin et al. (2021) introduced BiToD, the first bilingual dataset for *end-to-end* ToD modeling. BiToD uses a dialogue simulator to generate dialogues in English and Chinese, and asks crowdworkers to paraphrase them for naturalness. This simulation-based approach eliminates the need for translation but requires hand-engineered templates and savvy developers with knowledge of the target language and dialogue systems. Besides, paraphrasing the entire dataset is costly.
## 2.2 Cross-Lingual Approaches For Tod
With the advent of pre-trained language models, contextual embeddings obtained from pre-trained multilingual language models (Devlin et al., 2018; Xue et al., 2021; Liu et al., 2020) have been used to enable cross-lingual transfer in many natural language tasks, including task-oriented dialogue agents. Unfortunately, most of this work has only focused on the DST subtask, which is a limitation we aim to rectify with this paper.
To further improve the cross-linguality of these embeddings, Tang et al. (2020) and Moghe et al.
(2021) proposed fine-tuning multilingual BERT on a synthetic code-switching dataset. Glavaš et al.
(2020) performed language adaptation by using intermediate masked language modeling in the target languages and improving zero-shot cross-lingual transfer for hate speech detection task.
Using machine translation for multilingual dialogue tasks has also been studied. Uhrig et al.
(2021) used machine translation during inference to translate to English for semantic parsing. Instead, Sherborne et al. (2020) use machine translation to generate semantic parsing data to train a semantic parser in the target language which leads to better results. Moradshahi et al. (2023); Nicosia et al.
(2021) proposed using alignment to improve the quality of translated data by ensuring entities are translated faithfully.
## 3 The End-To-End Tod Task
In end-to-end task-oriented dialogues, a user speaks freely with an agent over several turns to accomplish their goal according to their intents (e.g.,
"book a hotel with at least 5 stars"). In each turn, the agent must access its database if necessary to find the requested information (e.g., find a hotel that meets user constraints), decide on an action
(e.g., present the information to the user or ask for additional information), and finally respond to the user in natural language based on the action it chooses. Following (Moradshahi et al., 2023), we decompose a dialogue agent into four subtasks:
1. *Dialogue State Tracking (DST)*: Generate the new belief state, for the current turn based on the previous belief state, the last two agent dialogue acts, and the current user utterance.
2. *API Call Detection (ACD)*: Determine if an API
call is necessary to query the database.
3. *Dialogue Act Generation (DAG)*: Generate the agent dialogue act based on the current belief state, the last two agent dialogue acts, the user utterance, and the result from the API call.
4. *Response Generation (RG)*: Convert the agent dialogue act to produce the new agent utterance.
## 4 The Common Dialogue Interface
Over the years, various ToD datasets have been introduced (Budzianowski et al., 2018; Byrne et al.,
2019; Zhu et al., 2020b; Quan et al., 2020; Lin et al., 2021), each with its own representation, making it difficult for researchers to experiment with different datasets. To facilitate experimentation, we have developed Common Dialogue, a standard interface for ToD tasks. This interface defines a unified format for datasets, their annotations, ontologies, and API interfaces. We show that the most widely-used recent dialogue datasets (such as MultiWoZ, RiSAWOZ, and BiToD) can be converted to this representation with a simple script. The standardization lets all different datasets be processed with the same software and models, significantly reducing the implementation time and cost.
Previously, other libraries such as ParlAI (Miller et al., 2017), ConvLab (Zhu et al., 2020c, 2022),
and Nemo (Kuchaiev et al., 2019) were introduced so researchers can work with different dialogue datasets and interact with the trained models. However, these libraries are limited. They either do not provide a standard abstraction, making it difficult to add new datasets, or a modular interface that can connect with other code bases, requiring new models to be implemented in their repository before they can be used. Additionally, the training code needs to be modified to support a new dataset or language for an existing dataset.
## 5 Dataset Creation
In this section, we describe the process used to extend RiSAWOZ to the new languages. The original RiSAWOZ dataset is in Chinese. We manually translate the validation data (600 dialogues),
test data (600 dialogues), and 1% of the training dataset (100 dialogues), which we refer to as few-shot, from Chinese to English. For other languages, we use English as the source language, since bilingual speakers of English and the target language are more accessible than Chinese and the target language. Since the English data is manually translated, this approach avoids double translationese (Vanmassenhove et al., 2021) and ensures the best data quality. We machine-translate the English data and manually post-edit the translation for fluency and correctness. Besides the few-shot data, we also machine-translate all of the Chinese training data into each of the languages (including English) and train with them; we refer to training with just this data set as *zero-shot*, since no human labor is used during dataset creation.
In the following, we discuss the steps and methods for preparing data for translation, including building alignment between entities and performing iterative quality checks. We also describe how to create the target language ontology, which serves as a database for API calling and provides a mapping between source and target language entities.
## 5.1 Translation And Alignment For Few-Shot, Validation, And Test Data 5.1.1 From Chinese To English
Figure 1 shows the process used to translate the Chinese dataset to English. First, human professional translators manually translate the Chinese dialogue utterances and ontology in the validation, test, and few-shot training data sets to English. We provide the translators with an annotation tool (Figure 2) to navigate through data examples, perform translation, and highlight entity spans in the translated sentence. The tool helps verify the consistency of slot value translations between user/agent utterances and their annotations after translation.
For each utterance in a dialogue, our tool automatically identifies the values in dialogue states and user/agent actions. Slots are *canonicalized* before calling the database, meaning that their values must lexically match those in the ontology. Since slot values appearing in the utterances may differ from the canonicalized version, we ask translators to manually identify and mark the non-canonicalized form of slot values and their word spans in the utterances.
The tool automatically checks the number of highlighted spans to prevent missing entity translations. After checking, the annotation tool outputs the English dialogue texts and a correspondence
(i.e. alignment) between source and target language slot values.
## 5.1.2 From English To Other Languages
Automatic Translation. For validation, test, and few-shot data, we use commercial translation models since (1) translation is done only once, (2) data size is smaller so it is affordable, and (3) higher data quality reduces post-editing effort.
Manual Post Editing. We hire bilingual speakers of English and the target language to post-edit the translations for fluency and correctness. We instruct them to update the alignment if they modify the translated entities. We provide several tools that automatically check their work and help them during the process. We describe the details in Section 5.4.2.
![4_image_0.png](4_image_0.png)
## 5.2 Zero-Shot Training Data Translation & Alignment
To create the zero-shot training datasets for the target languages (including English), we use opensource machine translation models to translate the Chinese data to the target language. We pick opensource models since (1) their results are reproducible, (2) open-source models provide access to model weights necessary for hybrid alignment
(described below), (2) they allow tuning text generation hyperparameters such as temperature (Ficler and Goldberg, 2017) or beam size (Freitag and AlOnaizan, 2017) and (3) they cost less, thus allowing effective scaling to more languages.
Hybrid Alignment for NMT. Previous work (Moradshahi et al., 2021; Li et al., 2021a)
proposed using alignment for tracking the position of entities during translation to ensure they can be replaced with the desired translation both in the utterance and the belief state. For this purpose, the encoder-decoder cross-attention weights of the neural machine translation model were used in a method called *neural alignment*. Although neural alignment often works well, it can produce incorrect spans as it is a probabilistic approach and has particularly low recall on long multi-token entities.
Ideally, if there exists a dictionary that provides a mapping between each source entity and all possible translations in the target language, we can directly scan the translated sentence to see if there is a match. We call such an approach dictionary alignment. Unfortunately, there is no such dictionary. We propose to build such a dictionary for each sentence on-the-fly. To do so, we first extract the entities from the sentence, then translate each individually and use nucleus sampling (Holtzman et al., 2019) with different temperature values to generate K translation candidates. This way, we build a mapping between each entity and possible translations which serves as the dictionary for dictionary alignment. Finally, we combine the two methods in a *hybrid* approach: We try to use dictionary alignment first, and if there is no matching translation in the output, we fall back to neural alignment.
## 5.3 Creating English-Hindi Code-Mixed Zero-Shot Training Data
For generating English-Hindi code-mixed train set, we implemented a pipeline combining GCM (Rizvi et al., 2021), and alignment based word substitution. An overview of the pipeline is shown in Fig. 3. GCM automatically generates code-mixed text given parallel data in two languages, based on two linguistic theories of code-mixing, the Equivalence Constraint theory (Poplack, 1980) and the Matrix Language theory (Scotton, 1993).
We take the Chinese training set as source and translate user and agent utterances to English (en)
and Hindi (hi). The translated sentences are fed as input to GCM, which produces code-mix utterances. For sentences where GCM fails to generate any candidate, we rely on word-alignment-based word substitution to generate a code-mixed utterance. Alignments are generated using cosine similarities between sub-word representations from mBERT in a parallel sentence pair (Dou and Neubig, 2021).
## 5.4 Translation Of Annotations
The next step is to translate the slot values in the belief state, user and agent acts, and database search results in the source language to the target language. Since the translations of the same slot value may vary according to the context (e.g., "是" corresponds to is, does, has or other words indicating affirmative), we create a one-to-many mapping between source language slot values and corresponding translations based on the slot value alignments obtained above. We ask human translators to select the most appropriate expression from all candidate translations as the canonicalized translation. We follow two basic principles in this process:
Part-of-Speech (POS) Consistency. The translator should pick, for each slot, values with the same POS tags where possible. For example, for the "production country/region" slot in the TV series domain, we will use the unified noun form (i.e.,
"America"/"India") instead of mixing the noun and adjective form (i.e., "American"/"India").
Value Consistency. The translator should use the same translation across domains and slots. For example, the Chinese word "中等" when used as a
"price-range" can be translated into "moderate" or
"medium". We consistently map "中等" to "moderate" for all "price-range" slots across all domains.
## 5.4.1 Creating Ontology And Databases
We found that ontology construction should be done in tandem with dataset translation. In prior work, using a predefined ontology limited fluency and diversity of the translations (Zuo et al., 2021),
and replacing entities in sentences after translation without careful attention to parts of speech or context resulted in grammatically incorrect sentences (Moradshahi et al., 2020; Ding et al., 2022).
Each value in the source database is automatically mapped to its canonicalized translation. Note that since not all slot values are seen in the training dataset, translators are asked to provide canonicalized translations for those values.
The original RiSAWOZ dataset only provides final search results from databases instead of intermediate API calls. We hence also restore the API calls through the dialogue state, database, and search results for complete database interactions.
This improves the extensibility of the dataset and helps to generalize RiSAWOZ to other languages and domains in the future.
## 5.4.2 Annotation Checker
Manual errors are inevitable, especially for translators who are unfamiliar with the process. We have developed an annotation checker to automatically flag and correct errors where possible:
Entity Checking. Our annotation checker ensures that changes made in the English translation of entities are propagated to the downstream translation for other target languages. It compares the revised annotations with current annotations and deleted incorrect or redundant slots. Additionally, it locates missing entities or entities that need reannotation to help annotators quickly synchronize the latest changes.
API Checking. Some datasets such as RiSAWOZ, include the ground truth database search results. For these datasets, we can check the consistency of the API by comparing the results of the API call with the provided ground truth. Our checker resolves observed discrepancies by automatically deleting redundant slots and values in constraints and adding the differences to the slot value mappings. It also shows the precise locations of changes for annotators to review.
## 6 Experiment
The goal of our experiments is to create an agent in a *target* language, given full training data in the source (Chinese) language, and a varying amount of training data in the target language. We also assume we have access to a machine translation model from Chinese to the target language. We perform our experiments on different target languages in X-RiSAWOZ. Table 1 shows statistics of different data splits used in the experiments, which is the same across all *target* languages.
## 6.1 Setting
Full-Shot (mono-lingual). This setting is only possible for Chinese since we do not have full training data for target languages. In the full-shot experiments, all of the original Chinese training data is used for training. Note that this setting is not a cross-lingual experiment per se, but a point of comparison for other settings.
Zero-Shot (cross-lingual). In our zero-shot experiments, no manually created target language data is available for training. Instead, we automatically create training data by machine translation of the source language as described in Section 5.1.2.
Additionally, we perform two ablations on our automatic training data translation approach: (1) Only using neural alignment (− Dictionary Align) (2)
No alignment of any type (− Neural Align).
Few-Shot (cross-lingual). In the few-shot setting, we start from a zero-shot model (with its various ablations) and further fine-tune it on the few-shot dataset in the target language. So the model is trained on both machine translated data and few-shot manually created dataset. In this setting, we also perform an ablation where we only train on the few-shot training data and no machine translated data (*Few-shot Only*).
## 6.2 Models
In all our experiments, we use the m2m100 (Fan et al., 2020) model for Korean and mBART (Liu et al., 2020) for all other languages. We found mBART to be especially effective in zero-shot settings as the language of its outputs can be controlled by providing a language-specific token at the beginning of decoding. Additionally, its denoising pre-training objective improves its robustness to the remaining translation noise. In each setting, all four dialogue subtasks are done with a single model, where we specify the task by prepending a special token to the input.
Since the dataset for target languages is introduced in this paper, there is only prior work on the Chinese dataset. In Section 7.3, we compare our results to the best previously reported result on RiSAWOZ from Moradshahi et al. (2021)
that achieved SOTA on the DST subtask using an mBART model, and from Quan et al. (2020) for other subtasks which use DAMD (Zhang et al.,
2020), a Seq2Seq RNN end-to-end dialogue model.
We use seven widely-used automatic metrics to compare different models. Please see Section A.2 for details of each metric.
## 7 Results And Discussion
We first evaluate the models for each turn, assuming that all previous subtasks and steps are correct. We then evaluate the end-to-end accuracy for the whole conversation.
## 7.1 Turn By Turn Evaluation
To understand how each component performs independently, our first experiment uses the gold data of all the previous turns and subtasks as input in our evaluation (Table 2). In this scenario, errors do not propagate from one subtask to the next in each turn.
Ours refers to our main approach, which combines all techniques. Each ablation incrementally takes away one of the techniques.
In the zero-shot setting, results vary across added languages, where the agent achieves between 34.684.2% on DST, 42.8-67.3% on DA, and 10.2-29.9%
on BLEU score. Fine-tuning on the few-shot data improves all metrics for all languages, with the agent achieving between 60.7-84.6% on DST, 38.070.5% on DA, and 28.5-46.4% on BLEU score.
The improvement in DST is particularly prominent for Hindi, Korean, and English-Hindi, where the quality of machine translation may not be as good.
Nonetheless, adding automatically translated data to training greatly improves the accuracy for these languages over the "few-shot only" result.
## 7.2 Error Analysis
To better understand the inference limitations of our trained agents, we manually inspected the model predictions by randomly selecting 100 validation turns for each domain where the prediction was incorrect. The following are the most common error patterns we observed across all languages:
Implicit Entities: In X-RiSAWOZ dialogues, some entities are not mentioned explicitly in the user's utterance and need to be *inferred*. These entities include the corresponding price range for a luxury diner, a speaker's desired attraction for a date with their partner, and hotel rating. These errors are partly due to the limited common-sense capability of the pre-trained language model used (Zhou et al.,
2020) and partly due to the training data encouraging the model to copy entities verbatim from the input instead of making logical reasoning. This category accounts for 27% of errors observed.
Multiple Correct Dialogue Acts: In X-RiSAWOZ,
the agent often provides an answer as soon as it receives the API call results. However, in some
| Language | Setting | DST Acc. ↑ | DA Acc. ↑ | BLEU ↑ |
|------------------------------------------------------------------------------------------------------------------|--------------------|--------------|-------------|----------|
| Full-Shot | | | | |
| Chinese | Ours | 96.43 | 91.74 | 51.99 |
| Zero-Shot | | | | |
| Ours | 84.23 | 67.27 | 27.14 | |
| English | − Dictionary Align | 83.42 | 66.51 | 22.67 |
| − Neural Align | 82.33 | 67.79 | 13.24 | |
| Ours | 70.75 | 59.27 | 29.88 | |
| French | − Dictionary Align | 68.22 | 56.32 | 25.43 |
| − Neural Align | 64.53 | 53.33 | 18.12 | |
| Ours | 52.09 | 56.06 | 27.42 | |
| Hindi | − Dictionary Align | 50.12 | 54.34 | 23.43 |
| − Neural Align | 48.11 | 53.21 | 18.32 | |
| Ours | 34.55 | 49.56 | 10.17 | |
| Korean | − Dictionary Align | 31.47 | 50.17 | 9.87 |
| − Neural Align | 29.87 | 49.51 | 4.59 | |
| English-Hindi | Ours | 49.95 | 42.78 | 11.31 |
| Few-Shot | | | | |
| Chinese | Few-shot Only | 82.75 | 77.33 | 38.87 |
| Ours | 84.62 | 69.44 | 46.37 | |
| − Dictionary Align | 83.37 | 69.74 | 46.16 | |
| English | − Neural Align | 82.01 | 70.45 | 45.43 |
| Few-shot Only | 74.52 | 58.97 | 45.53 | |
| Ours | 73.12 | 61.11 | 42.21 | |
| − Dictionary Align | 71.12 | 60.21 | 40.12 | |
| French | − Neural Align | 69.68 | 57.12 | 38.14 |
| Few-shot Only | 67.55 | 50.96 | 44.77 | |
| Ours | 75.16 | 59.02 | 38.38 | |
| − Dictionary Align | 75.32 | 57.66 | 37.54 | |
| Hindi | − Neural Align | 73.21 | 54.32 | 34.32 |
| Few-shot Only | 55.77 | 49.88 | 38.18 | |
| Ours | 71.17 | 53.52 | 34.93 | |
| − Dictionary Align | 69.57 | 52.37 | 34.75 | |
| Korean | − Neural Align | 69.91 | 52.00 | 33.80 |
| Few-shot Only | 60.65 | 41.47 | 32.76 | |
| English-Hindi | Ours | 60.67 | 37.97 | 26.77 |
| Few-shot Only | 56.53 | 36.50 | 28.54 | |
| Table 2: Results on the validation set of X-RiSAWOZ, obtained by feeding the gold input for each subtask in each | | | | |
Korean Ours 71.17 53.52 **34.93**
− Dictionary Align 69.57 52.37 34.75
− Neural Align 69.91 52.00 33.80 Few-shot Only 60.65 41.47 32.76 English-Hindi Ours 60.67 **37.97** 26.77 Few-shot Only 56.53 36.50 **28.54**
Table 2: Results on the validation set of X-RiSAWOZ, obtained by feeding the gold input for each subtask in each turn. The best result in each section is in bold. ↑ indicates higher number shows better performance.
cases, the agent asks follow-up questions (e.g.,
"how many seats do you want for the car?") to narrow down the search results. Since the dataset is constructed via human interactions and not simulation, there are no well-defined policies governing the agent's behavior. Thus, there are examples where multiple dialogue acts can be correct given the input and API constraints. Since during evaluation we can only check the model output against the one response provided as gold, another perfectly fine response can be deemed as incorrect.
We discovered that 38% of errors are of this nature.
Incorrect Entities: In DST and DA subtasks, the accuracy is highly dependent on identifying the correct entities in the input. However, there are cases where the model (1) predicts a wrong entity,
(2) predicts part of the entity, (3) predicts the entity along with prepositions, articles, etc. (4) omits the entity, or (5) fully hallucinates an entity. We found (1) and (2) to be the most common patterns.
(3) can be addressed by a simple pre-processing or text lemmatization. (4) happens with complex sentences with many entities where the model often mispredicts the slot names as well as slot values.
(5) is usually caused by data mis-annotations or errors in data processing, where a slot is missing from the input and the model generates the most probable value for it. The remaining 35% of errors fall under this category.
For each language, we also performed a similar analysis to understand if there are language-specific attributes that affect the accuracy and quality of the translated datasets. The result of these analyses is included in the appendix (A.4-A.7).
## 7.3 Full Conversation Evaluation
The main results of our experiments are reported in Table 3. Following Lin et al. (2021), the evaluation for these experiments is performed end-to-end meaning for each turn, the model output from the previous subtask is used as input for the next subtask. This reflects a real-world scenario where an agent is conversing with the user interactively.
Overall, in the full-shot setting, when training on the Chinese dataset, we improve the state of the art in Joint Goal Accuracy (JGA) by 1.33%,
Task Success Rate (TSR) by 5.04%, Dialogue Success Rate (DSR) by 5.35%, and BLEU by 6.82%.
| Language | Setting | JGA ↑ | TSR ↑ | DSR ↑ | API ↑ | DAA ↑ | BLEU ↑ | SER ↓ |
|--------------------|--------------------|---------|---------|---------|---------|---------|----------|---------|
| Full-Shot | | | | | | | | |
| Chinese | Ours | 78.23 | 53.67 | 45.67 | 72.70 | 73.68 | 34.72 | 26.41 |
| SOTA | 76.90 | 48.63 | 40.32 | - | - | 27.90 | 30.32 | |
| Zero-Shot | | | | | | | | |
| Ours | 43.64 | 22.46 | 16.00 | 44.95 | 40.81 | 14.12 | 47.08 | |
| − Dictionary Align | 38.70 | 19.22 | 13.50 | 39.84 | 37.35 | 11.34 | 49.64 | |
| English | − Neural Align | 38.96 | 9.50 | 5.67 | 40.95 | 41.96 | 8.21 | 59.90 |
| Ours | 24.04 | 12.58 | 7.17 | 34.20 | 38.32 | 10.88 | 58.45 | |
| French | − Dictionary Align | 20.32 | 5.43 | 4.18 | 28.51 | 35.78 | 9.72 | 60.25 |
| − Neural Align | 19.43 | 3.23 | 2.11 | 24.64 | 28.36 | 6.81 | 68.89 | |
| Ours | 20.32 | 10.11 | 4.32 | 32.32 | 34.23 | 9.13 | 60.43 | |
| Hindi | − Dictionary Align | 18.31 | 5.15 | 3.98 | 30.12 | 32.31 | 8.11 | 65.43 |
| − Neural Align | 17.32 | 3.12 | 3.10 | 28.51 | 28.13 | 7.00 | 67.25 | |
| Ours | 21.41 | 10.75 | 5.00 | 32.08 | 36.57 | 7.27 | 64.33 | |
| − Dictionary Align | 19.53 | 9.46 | 4.83 | 27.75 | 36.33 | 7.55 | 35.84 | |
| Korean | − Neural Align | 17.77 | 8.77 | 3.67 | 27.19 | 25.45 | 7.12 | 38.98 |
| English-Hindi | Ours | 9.22 | 4.81 | 2.03 | 10.43 | 26.47 | 5.41 | 63.26 |
| Few-Shot | | | | | | | | |
| Chinese | Few-shot Only | 37.69 | 28.04 | 21.00 | 40.73 | 42.30 | 13.89 | 45.44 |
| Ours | 48.91 | 23.13 | 17.17 | 50.06 | 42.45 | 26.33 | 44.93 | |
| − Dictionary Align | 48.40 | 22.79 | 16.67 | 50.03 | 42.26 | 25.29 | 45.01 | |
| English | − Neural Align | 46.31 | 22.68 | 16.50 | 47.61 | 42.54 | 25.78 | 44.78 |
| Few-shot Only | 29.87 | 16.09 | 10.50 | 32.30 | 30.45 | 20.00 | 52.79 | |
| Ours | 30.85 | 17.17 | 11.83 | 39.97 | 45.03 | 20.92 | 46.26 | |
| − Dictionary Align | 28.51 | 16.11 | 9.54 | 38.11 | 43.41 | 19.91 | 48.35 | |
| French | − Neural Align | 26.45 | 15.54 | 9.13 | 35.74 | 42.15 | 16.99 | 49.26 |
| Few-shot Only | 19.43 | 3.23 | 2.11 | 24.64 | 28.36 | 6.81 | 68.89 | |
| Ours | 25.62 | 15.67 | 11.31 | 37.54 | 41.32 | 18.51 | 44.26 | |
| − Dictionary Align | 23.12 | 15.11 | 10.32 | 35.14 | 39.51 | 16.34 | 46.76 | |
| Hindi | − Neural Align | 21.12 | 13.22 | 8.61 | 34.11 | 34.12 | 15.33 | 48.97 |
| Few-shot Only | 18.48 | 8.16 | 4.50 | 19.09 | 23.41 | 13.15 | 62.24 | |
| Ours | 26.24 | 14.32 | 10.60 | 35.42 | 38.42 | 20.32 | 43.21 | |
| − Dictionary Align | 24.13 | 12.53 | 8.45 | 23.42 | 33.34 | 19.32 | 47.32 | |
| Korean | − Neural Align | 23.54 | 10.23 | 7.54 | 22.31 | 30.42 | 18.34 | 50.33 |
| Few-shot Only | 20.66 | 9.16 | 5.17 | 19.39 | 23.56 | 17.81 | 54.57 | |
| English-Hindi | Ours | 21.80 | 4.13 | 1.83 | 22.64 | 21.69 | 5.29 | 66.31 |
| Few-shot Only | 16.07 | 3.69 | 2.33 | 15.65 | 16.97 | 3.93 | 69.61 | |
The improvements are due to the improved and succinct dialogue representation we have created
(Section 4), and contextual representations of transformer models.
In the zero-shot setting, results vary across languages, where the English, French, Hindi, Korean, and English-Hindi agents achieve 35%, 16%, 9%,
11%, and 4% of the DSR score of the full-shot Chinese agent, respectively. In the few-shot setting, the ratio improves to 38%, 26%, 25%, 23%, and 5%. The smallest and biggest improvements are on the English and Hindi dataset respectively. This suggests that the impact of few-shot data is greater when the quality of the pretraining data is lower, which is related to the quality of the translation model between Chinese and the target language.
The Response Generation subtask receives the largest improvement in performance when provided with human supervision in the few-shot data, with a BLEU score improvement of over 10%. This suggests that while translation with alignment is effective for understanding user input, it is not as effective for generating output text. This is partly due to the agent model used, mBART, which is trained with a denoising objective and is thus able to handle noisy input text better.
## 8 Conclusion
This paper presents a solution for balancing the trade-offs between standard machine translation and human post-editing. By standardizing and establishing best practices for "translation with manual post-editing", and releasing associated toolkits, post-editing can be made faster, more efficient, and cost-effective. We use our methodology to create X-RiSAWOZ, a new end-to-end, high-quality, and large multi-domain multilingual dialogue dataset, covering 5 diverse languages and 1 code-mixed language. We also provide strong baselines for zero/few-shot creation of dialogue agents via crosslingual transfer. In the few-shot setting, our agents achieve between 60.7-84.6% on DST, 38.0-70.5%
on DA, and 28.5-46.4% on RG subtasks across different languages. Overall, our work paves the way for more efficient and cost-effective development of multilingual task-oriented dialogue systems.
## 9 Limitations
We would have liked to evaluate the generalization of our cross-lingual approach on more languages.
For instance, we partially rely on machine translation models for Chinese-to-English translation.
Available translation models for other language pairs, especially from/to low-resource languages have much lower quality, and it would be desirable to measure the effect of that in our experiments.
The ontology used for new languages is derived by translating the Chinese ontology. As a result, the entities are not localized. Creating local ontology requires manual effort as one would need to identify websites or databases for scraping or collecting the entities. Once the local entities are collected, we can automatically replace translated entities with local ones to localize the dataset.
Another limitation is the lack of human evaluation for agent responses. BLEU score does not correlate well with human judgment (Sulem et al.,
2018), and SER only accounts for the factuality of the response but not grammar or fluency. In future work, we wish to address this by conducting human evaluations in addition to automatic metrics.
## 10 Ethical Considerations
We do not foresee any harmful or malicious misuse of the technology developed in this work. The data used to train models is about seeking information about domains like restaurants, hotels and tourist attractions, does not contain any offensive content, and is not unfair or biased against any demographic.
This work does focus on widely-spoken languages, but we think the cross-lingual approach we proposed can improve future dialogue language technologies for a wider range of languages.
We fine-tune multiple medium-sized (several hundred million parameters) neural networks for our experiments. We took several measures to avoid wasted computation, like performing one run instead of averaging multiple runs (since the numerical difference between different models is large enough to draw meaningful conclusions), and improving batching and representation that improved training speed, and reduced needed GPU
time. Please refer to Appendix A.1 for more details about the amount of computation used in this paper.
## Acknowledgements
We would like to thank Ruchi Jain for helping us validate the automatically translated Hindi dialogues. This work is supported in part by the National Science Foundation under Grant No. 1900638, the Alfred P. Sloan Foundation under Grant No. G-2020-13938, the Verdant Foundation, Microsoft, KDDI, JPMorgan Chase, and the Stanford Human-Centered Artificial Intelligence
(HAI) Institute. This work is also co-funded by the Natural Science Foundation of Xinjiang Uygur Autonomous Region (No. 2022D01D43), Zhejiang Lab (No. 2022KH0AB01). This project has also received funding from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement No. 101021797 (Starlight), and the European Union's Horizon Europe research and innovation programme under grant agreement N° 101070192 (CORTEX2). This work is also supported in part by Institute of Information &
communications Technology Planning & Evaluation (IITP) grant funded by the Korea government(MSIT) (No.2020-0-01373 and IITP-20222021-0-01817).
## References
Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Ultes Stefan, Ramadan Osman, and Milica Gašic. 2018. Multiwoz - a large- ´
scale multi-domain wizard-of-oz dataset for taskoriented dialogue modelling. In *Proceedings of the* 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP).
Bill Byrne, Karthik Krishnamoorthi, Chinnadhurai Sankar, Arvind Neelakantan, Ben Goodrich, Daniel Duckworth, Semih Yavuz, Amit Dubey, Kyu-Young Kim, and Andy Cedilnik. 2019. Taskmaster-1: Toward a realistic and diverse dialog dataset. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language* Processing (EMNLP-IJCNLP), pages 4506–4517.
Giovanni Campagna, Silei Xu, Mehrad Moradshahi, Richard Socher, and Monica S. Lam. 2019. Genie:
A generator of natural language semantic parsers for virtual assistant commands. In *Proceedings of the* 40th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI 2019, pages 394–410, New York, NY, USA. ACM.
Alexis Conneau, Guillaume Lample, Ruty Rinott, Adina Williams, Samuel R Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. Xnli: Evaluating crosslingual sentence representations. arXiv preprint arXiv:1809.05053.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.
Bosheng Ding, Junjie Hu, Lidong Bing, Mahani Aljunied, Shafiq Joty, Luo Si, and Chunyan Miao. 2022.
GlobalWoZ: Globalizing MultiWoZ to develop multilingual task-oriented dialogue systems. In *Proceedings of the 60th Annual Meeting of the Association* for Computational Linguistics (Volume 1: Long Papers), pages 1639–1657, Dublin, Ireland. Association for Computational Linguistics.
Zi-Yi Dou and Graham Neubig. 2021. Word alignment by fine-tuning embeddings on parallel corpora. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2112–2128, Online.
Association for Computational Linguistics.
Mihail Eric, Rahul Goel, Shachi Paul, Abhishek Sethi, Sanchit Agarwal, Shuyag Gao, and Dilek HakkaniTur. 2019a. Multiwoz 2.1: Multi-domain dialogue state corrections and state tracking baselines. *arXiv* preprint arXiv:1907.01669.
Mihail Eric, Rahul Goel, Shachi Paul, Abhishek Sethi, Sanchit Agarwal, Shuyag Gao, and Dilek HakkaniTur. 2019b. Multiwoz 2.1: Multi-domain dialogue state corrections and state tracking baselines. arXiv preprint arXiv:1907.01669.
Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, et al. 2020. Beyond english-centric multilingual machine translation. arxiv e-prints, page.
arXiv preprint arXiv:2010.11125.
Jessica Ficler and Yoav Goldberg. 2017. Controlling linguistic style aspects in neural language generation.
arXiv preprint arXiv:1707.02633.
Markus Freitag and Yaser Al-Onaizan. 2017. Beam search strategies for neural machine translation.
arXiv preprint arXiv:1702.01806.
George Giannakopoulos, Mahmoud El-Haj, Benoit Favre, Marina Litvak, Josef Steinberger, and Vasudeva Varma. 2011. Tac 2011 multiling pilot overview. TAC.
Goran Glavaš, Mladen Karan, and Ivan Vulic. 2020. ´
XHate-999: Analyzing and detecting abusive language across domains and languages. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 6350–6365, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Chulaka Gunasekara, Seokhwan Kim, Luis Fernando D'Haro, Abhinav Rastogi, Yun-Nung Chen, Mihail Eric, Behnam Hedayatnia, Karthik Gopalakrishnan, Yang Liu, Chao-Wei Huang, Dilek HakkaniTür, Jinchao Li, Qi Zhu, Lingxiao Luo, Lars Liden, Kaili Huang, Shahin Shayandeh, Runze Liang,
Baolin Peng, Zheng Zhang, Swadheen Shukla, Minlie Huang, Jianfeng Gao, Shikib Mehri, Yulan Feng, Carla Gordon, Seyed Hossein Alavi, David Traum, Maxine Eskenazi, Ahmad Beirami, Eunjoon, Cho, Paul A. Crook, Ankita De, Alborz Geramifard, Satwik Kottur, Seungwhan Moon, Shivani Poddar, and Rajen Subba. 2020. Overview of the ninth dialog system technology challenge: Dstc9.
Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. *arXiv preprint arXiv:1904.09751*.
Chia-Chien Hung, Anne Lauscher, Ivan Vulic, Simone ´
Ponzetto, and Goran Glavaš. 2022a. Multi2WOZ: A
robust multilingual dataset and conversational pretraining for task-oriented dialog. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 3687–3703, Seattle, United States. Association for Computational Linguistics.
Chia-Chien Hung, Anne Lauscher, Ivan Vulic, Si- ´
mone Paolo Ponzetto, and Goran Glavaš. 2022b.
Multi2woz: A robust multilingual dataset and conversational pretraining for task-oriented dialog. *arXiv* preprint arXiv:2205.10400.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A
method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Oleksii Kuchaiev, Jason Li, Huyen Nguyen, Oleksii Hrinchuk, Ryan Leary, Boris Ginsburg, Samuel Kriman, Stanislav Beliaev, Vitaly Lavrukhin, Jack Cook, et al. 2019. Nemo: a toolkit for building ai applications using neural modules. *arXiv preprint* arXiv:1909.09577.
Taku Kudo and John Richardson. 2018. Sentencepiece:
A simple and language independent subword tokenizer and detokenizer for neural text processing.
arXiv preprint arXiv:1808.06226.
Haoran Li, Abhinav Arora, Shuohui Chen, Anchit Gupta, Sonal Gupta, and Yashar Mehdad. 2021a.
MTOP: A comprehensive multilingual task-oriented semantic parsing benchmark. In *Proceedings of the* 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2950–2962, Online. Association for Computational Linguistics.
Jinchao Li, Qi Zhu, Lingxiao Luo, Lars Liden, Kaili Huang, Shahin Shayandeh, Runze Liang, Baolin Peng, Zheng Zhang, Swadheen Shukla, Ryuichi Takanobu, Minlie Huang, and Jianfeng Gao. 2021b.
Multi-domain task-oriented dialog challenge ii at dstc9. In AAAI-2021 Dialog System Technology Challenge 9 Workshop.
Zhaojiang Lin, Andrea Madotto, Genta Indra Winata, Peng Xu, Feijun Jiang, Yuxiang Hu, Chen Shi, and Pascale Fung. 2021. BiToD: A bilingual multidomain dataset for task-oriented dialogue modeling. Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1 pre-proceedings (NeurIPS Datasets and Benchmarks 2021).
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. *Transactions of the Association for Computational Linguistics*, 8:726–742.
A. H. Miller, W. Feng, A. Fisch, J. Lu, D. Batra, A. Bordes, D. Parikh, and J. Weston. 2017. Parlai: A
dialog research software platform. arXiv preprint arXiv:1705.06476.
Nikita Moghe, Mark Steedman, and Alexandra Birch.
2021. Cross-lingual intermediate fine-tuning improves dialogue state tracking. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1137–1150, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Mehrad Moradshahi, Giovanni Campagna, Sina Semnani, Silei Xu, and Monica Lam. 2020. Localizing open-ontology QA semantic parsers in a day using machine translation. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 5970–5983, Online. Association for Computational Linguistics.
Mehrad Moradshahi, Sina Semnani, and Monica Lam.
2023. Zero and few-shot localization of task-oriented dialogue agents with a distilled representation. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 886–901, Dubrovnik, Croatia. Association for Computational Linguistics.
Mehrad Moradshahi, Victoria Tsai, Giovanni Campagna, and Monica S Lam. 2021. Contextual semantic parsing for multilingual task-oriented dialogues.
arXiv preprint arXiv:2111.02574.
Massimo Nicosia, Zhongdi Qu, and Yasemin Altun.
2021. Translate & fill: Improving zero-shot multilingual semantic parsing with synthetic data. arXiv preprint arXiv:2109.04319.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. *Advances*
in neural information processing systems, 32:8026–
8037.
Shana Poplack. 1980. Sometimes i'll start a sentence in spanish y termino en espaÑol: toward a typology of code-switching 1. *Linguistics*, 18:581–618.
Jun Quan, Shian Zhang, Qian Cao, Zizhong Li, and Deyi Xiong. 2020. RiSAWOZ: A large-scale multidomain Wizard-of-Oz dataset with rich semantic annotations for task-oriented dialogue modeling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 930–940, Online. Association for Computational Linguistics.
Osman Ramadan, Paweł Budzianowski, and Milica Gasic. 2018. Large-scale multi-domain belief tracking with knowledge sharing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, volume 2, pages 432–437.
Mohd Sanad Zaki Rizvi, Anirudh Srinivasan, Tanuja Ganu, Monojit Choudhury, and Sunayana Sitaram.
2021. GCM: A toolkit for generating synthetic codemixed text. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations, pages 205–211, Online. Association for Computational Linguistics.
Carol Scotton. 1993. *Duelling languages : grammatical* structure in codeswitching. Clarendon Press Oxford University Press, Oxford, Eng. New York.
Tom Sherborne, Yumo Xu, and Mirella Lapata. 2020.
Bootstrapping a crosslingual semantic parser.
Elior Sulem, Omri Abend, and Ari Rappoport. 2018.
Bleu is not suitable for the evaluation of text simplification. *arXiv preprint arXiv:1810.05995*.
Chuanxin Tang, Chong Luo, Zhiyuan Zhao, Wenxuan Xie, and Wenjun Zeng. 2020. Joint time-frequency and time domain learning for speech enhancement.
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence.
Sarah Uhrig, Yoalli Garcia, Juri Opitz, and Anette Frank.
2021. Translate, then parse! a strong baseline for cross-lingual AMR parsing. In Proceedings of the 17th International Conference on Parsing Technologies and the IWPT 2021 Shared Task on Parsing into Enhanced Universal Dependencies (IWPT 2021), pages 58–64, Online. Association for Computational Linguistics.
Eva Vanmassenhove, Dimitar Shterionov, and Matthew Gwilliam. 2021. Machine translationese: Effects of algorithmic bias on linguistic complexity in machine translation. *arXiv preprint arXiv:2102.00287*.
Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, PeiHao Su, David Vandyke, and Steve Young. 2015.
Semantically conditioned lstm-based natural language generation for spoken dialogue systems. arXiv preprint arXiv:1508.01745.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R'emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers:
State-of-the-art natural language processing. *ArXiv*,
abs/1910.03771.
Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mt5: A massively multilingual pre-trained text-to-text transformer. *Proceedings of* the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies.
Yinfei Yang, Yuan Zhang, Chris Tar, and Jason Baldridge. 2019. Paws-x: A cross-lingual adversarial dataset for paraphrase identification. *arXiv preprint* arXiv:1908.11828.
Yichi Zhang, Zhijian Ou, and Zhou Yu. 2020. Taskoriented dialog systems that consider multiple appropriate responses under the same context. In *Proceedings of the AAAI Conference on Artificial Intelligence*,
05, pages 9604–9611.
Xuhui Zhou, Yue Zhang, Leyang Cui, and Dandan Huang. 2020. Evaluating commonsense in pretrained language models. In *Proceedings of the AAAI*
Conference on Artificial Intelligence, volume 34, pages 9733–9740.
Qi Zhu, Christian Geishauser, Hsien chin Lin, Carel van Niekerk, Baolin Peng, Zheng Zhang, Michael Heck, Nurul Lubis, Dazhen Wan, Xiaochen Zhu, Jianfeng Gao, Milica Gašic, and Minlie Huang. ´
2022. Convlab-3: A flexible dialogue system toolkit based on a unified data format. arXiv preprint arXiv:2211.17148.
Qi Zhu, Kaili Huang, Zheng Zhang, Xiaoyan Zhu, and Minlie Huang. 2020a. CrossWOZ: A large-scale Chinese cross-domain task-oriented dialogue dataset.
Transactions of the Association for Computational Linguistics, 8:281–295.
Qi Zhu, Kaili Huang, Zheng Zhang, Xiaoyan Zhu, and Minlie Huang. 2020b. Crosswoz: A large-scale chinese cross-domain task-oriented dialogue dataset.
Transactions of the Association for Computational Linguistics, 8:281–295.
Qi Zhu, Zheng Zhang, Yan Fang, Xiang Li, Ryuichi Takanobu, Jinchao Li, Baolin Peng, Jianfeng Gao, Xiaoyan Zhu, and Minlie Huang. 2020c. Convlab2: An open-source toolkit for building, evaluating, and diagnosing dialogue systems. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics.
Michał Ziemski, Marcin Junczys-Dowmunt, and Bruno Pouliquen. 2016. The united nations parallel corpus v1. 0. In Proceedings of the Tenth International Conference on Language Resources and Evaluation
(LREC'16), pages 3530–3534.
Lei Zuo, Kun Qian, Bowen Yang, and Zhou Yu. 2021.
Allwoz: Towards multilingual task-oriented dialog systems for all. *arXiv preprint arXiv:2112.08333*.
## A Appendix A.1 Implementation Details
Our code is implemented in PyTorch (Paszke et al.,
2019) using GenieNLP (Campagna et al., 2019)
2library for training and evaluation. We use our newly written library (described in Section 4) for data preprocessing and evaluation which will be released upon publication. We use pre-trained models available through HuggingFace's Transformers library (Wolf et al., 2019). We use *m2m100-418M*
model for Korean and *mbart-large-50* for other languages as the neural model for our agent. Both models use a standard Seq2Seq architecture with a bidirectional encoder and left-to-right autoregressive decoder. mBART uses sentence-piece (Kudo and Richardson, 2018) for tokenization, and is pretrained on text reconstruction task in 50 languages.
In each setting, all four dialogue subtasks are done with a single model, where we specify the task by prepending a special token to the input.
We found mBART to be especially effective in zero-shot settings as the language of its outputs can be controlled by providing a language-specific token at the beginning of decoding. Additionally, its denoising pre-training objective improves its robustness to the remaining translation noise.
For translation, we use the publicly available mbart-large-50-many-to-many-mmt (~611M parameters) and *m2m100-418M* (~1.2B parameters)
models which can directly translate text from any of the 50 supported languages.
We use greedy decoding and train our models using teacher-forcing and token-level cross-entropy loss. We use Adam (Kingma and Ba, 2014) as our optimizer with a start learning rate of 2×10−5and linear scheduling. These hyperparameters were chosen based on a limited hyperparameter search on the validation set. For the numbers reported in the paper, due to cost, we performed only a single run for each experiment.
Our models were trained on virtual machines with a single NVIDIA V100 (16GB memory) GPU
on the Azure platform. For a fair comparison, all models were trained for the same number of iterations of 200K in the full-shot setting. In the few-shot setting, we fine-tuned the model for 10K
2https://github.com/stanford-oval/genienlp steps on the few-shot data. Sentences are batched based on their input and approximate output token count for better GPU utilization. We set the total number of tokens per batch to 720. Training and evaluating each model takes about 15 GPU hours on average.
At inference time, we use the predicted belief state as input to subsequent turns instead of ground truth. However, to avoid the conversation from diverging from its original direction, similar to Lin et al. (2021), we use ground-truth agent acts as input for the next turn. We made sure the settings are equivalent for a fair comparison. Additionally, we noted that in many examples the prediction is similar to the gold truth except for small differences such as in case (e.g., "district" vs "District"),
or extra punctuation in the predicted output. To address this, during evaluation, we apply entity normalization by using canonical mapping and string pattern matching to map entities to their canonicalized form.
## A.2 Evaluation Metrics
Following Moradshahi et al. (2023), we use the following metrics to compare different models. Scores are averaged over all turns unless specified otherwise.
- **Joint Goal Accuracy (JGA)** (Budzianowski et al., 2018): The standard metric for evaluating DST. JGA for a dialogue turn is 1 if all slotrelation-value triplets in the generated belief state match the gold annotation, and is 0 otherwise.
- **Task Success Rate (TSR)** (Lin et al., 2021): A
task, defined as a pair of domain and intent, is completed successfully if the agent correctly provides all the user-requested information and satisfies the user's initial goal for that task. TSR is reported as an average over all tasks.
- **Dialogue Success Rate (DSR)** (Lin et al., 2021):
DSR is 1 for a dialogue if all user requests are completed successfully, and 0 otherwise. DSR is reported as an average over all dialogues. We use this as the main metric to compare models, since the agent needs to complete all dialogue subtasks correctly to obtain a full score on DSR.
- API: For a dialogue turn, is 1 if the model correctly predicts to make an API call, and all the constraints provided for the call match the gold.
It is 0 otherwise.
- **Dialogue Act Accuracy (DAA)**: For a dialogue turn, is 1 if the model correctly predicts all the di-
alogue acts including entities, and is 0 otherwise.
- **BLEU** (Papineni et al., 2002): Measures the natural language response fluency based on n-gram matching with the human-written gold response.
BLUE is calculated at the corpus level.
- **Slot Error Rate (SER)** (Wen et al., 2015): It complements BLEU as it measures the factual correctness of natural language responses. For each turn, it is 1 if the response contains all entities present in the gold response, and is 0 otherwise.
## A.3 Human Post-Editing
Bilingual speakers of the source and output language were recruited as human translators and posteditors by each team. A user interface (see Fig. 2)
was provided for them to perform translation and alignment tasks. The translators were instructed to ensure that the resulting translations were both accurate and fluent. Compensation for their work was provided at the standard rate in their respective countries.
## A.4 Error Analysis: French
For the French language, we focused on the Response Generation subtask. We selected 300 of the prediction examples marked as false as not matching exactly the reference. In this set 42.8% of the predictions are completely wrong. Polite forms are particularly problematic. As an example, we can cite the case of the expression "Tout le plaisir est pour moi, au revoir. (*It's my pleasure, goodbye.*)"
which has four different wrong predictions *"Pas* de courtoisie, au revoir. (*No courtesy, goodbye.*)",
"Pas de gentillesse, au revoir. (*No kindness, goodbye.*)", *"Je suis heureux de vous servir, au revoir.*
(*I am pleased to serve you, goodbye.*)" and "Pas de bonheur, adieu! (*No happiness, goodbye*)". The root cause most likely stems from the literal translation of Chinese idioms used in polite expressions
(不要客气) to French in the zero-shot training data.
However, most of the polite expressions should be easy enough to correct.
We noted that 7.4% of the predictions are just slightly off semantically. For example (*"Je recommande l'université de Xi'an Jiaotong-Liverpool.* (I
recommend Xi'an Jiaotong-Liverpool University.)"
vs. *"l'université de Xi'an Jiaotong-Liverpool de* Liverpool est très bien. (*Xi'an Jiaotong-Liverpool* University from Liverpool is very good.)" with the wrong insertion of "de Liverpool (*from Liverpool*)".
On the other hand, 15.4% of the predictions are semantically correct, but with minor errors (syntactic errors, repetitions, etc.). For instance, the meaning of the sentence "Il y aura une brise sans direction continue samedi prochain. (There will be a continuous directionless breeze next Saturday.)"
is the same than the one of the sequence of words
"Le vent une brise sans direction continue vent doit être doux. (*The wind a breeze without continuous* wind direction must be gentle.)" but this sequence of words is syntactically wrong. The date reference is also missing but it was not mandatory for the correctness of the dialogue. Finally, 34.4% of the supposedly wrong generated responses are in fact correct but just expressed differently, like in
"Elle a une note de 4,3. (*It has a rating of 4.3.*")
vs. "La note de ce lieu est de 4,3. (*The rating for* this location is 4.3.)". We think that this kind of difference could be handled by a computation of sentence embedding distance.
As we focused on the Response Generation component, we did not carry out a large-scale qualitative analysis of the Slot-Relation subtask but a quick look at the data seems to indicate that some given slots are often missing from the generated part, like *"the_most_suitable_people"*. For some other slots like "*metro_station*", some values seem to be missing from normalization data like "*peut*"
which should be equivalent to "*pouvoir*" and "*true*".
This latter error will be quite simple to correct.
## A.5 Error Analysis: Hindi
We sample 10% of the errors from each domain from the Hindi validation dataset and analyze these examples manually. The following are the error patterns we observe:
Response Generation. As discussed in Section A.7, there are multiple ways to generate a sentence while matching the semantic content of the gold truth. While such RGs should ideally be marked as correct, their BLEU scores are low. Such instances amount to over 65% of all RG errors. In addition to such kind of errors, we observe that approximately 18% of all the RG error samples are largely accurate but they lack fluency. Here is one such instance where the model is trying to say bye to the user: "ajib hai, alvida!", which translates to "That's strange, bye!". In this example, the model conveys the right message but not in the most polite way. Such instances become more common when the model has to fill the "general" slot, used mainly in greetings. This is possibly because the model finds it more difficult to generate open-ended text than content-guided text.
Erroneous Slot-Relation Values. In some cases, the model predicts the right slot-relation values, but they are deemed incorrect because it predicts the synonym of the gold truth. This amounts to 28% of all the erroneous slot-relation value examples. In addition to such instances, we observe that some slot-relation values are marked as incorrect because of minor differences between the gold truth and the model prediction. These include extra spaces, punctuations, stop words, and the usage of synonyms. Such kind of errors amount to 17.8% of the sampled erroneous slot relation values. Lastly, our analysis reveals that there seems to be an increased amount of confusion between the following pairs of slots: "inform", "request" and "date", "time".
## A.6 Error Analysis: Korean
The Korean language poses some unique challenges. In Korean, a word can be made up of multiple characters, and an *eo-jeol* is formed by one or more words to convey a coherent meaning.
Spaces are used to delimit an eo-jeol. For instance, postpositions, or *jo-sa* in Korean, are connected to a noun to form an eo-jeol to indicate its grammatical relation to other words in a sentence.
Consider "저는 샤먼에 갈거에요", a sentence containing 3 eo-jeol and 9 characters that means
"I will go to Xiamen". "샤먼에" is an eo-jeol meaning "to Xiamen", where "샤먼" is "Xiamen" and "에" (which is a jo-sa) means "to". Because the two words are connected into a single eo-jeol, the annotation is more prone to mistakes. Furthermore, extracting an entity in an eo-jeol is more difficult.
This leads to more "incorrect entities" problems in the results for Korean.
Furthermore, Korean possesses distinctive auxiliary verbs/adjectives known as *bo-jo yong-eon*.
These bo-jo yong-eon can be connected to the main verbs/adjectives either within a single combined eojeol or across multiple eo-jeols, leading to similar challenges in entity annotation. For example, both
"친구들과 갈" and "친구들과 가는" means "to go with friends". Here, both "-ㄹ" (the character at the bottom of "갈") and "는" are bo-jo yongeon meaning "to go". A single English auxiliary verb can map to a wide variety of bo-jo yong-eon depending on the context.
We modified the annotation tool so that it works at the character level instead of word level. To identify number entities, we use heuristics to extract the jo-sa from eo-jeol composed of a number and a jo-sa. Despite this, our analysis suggests that the eo-jeol and bo-jo yong-eon issues account for approximately 5% of the errors encountered.
Another issue that we ran into is how negative questions are answered differently in Korean. For example, when asked "isn't it hot?", "yes" means
"it is hot" in English, but "it is not hot" in Korean.
This discrepancy caused issues during the annotation process. At times, translators mistakenly mapped "yes" in English to mean "no" in Korean for negative questions, or they transformed them into positive inquiries, which we discovered later on. The former case of mapping "yes" to "no" resulted in inconsistency in entity mapping, especially when both positive and negative questions are present in the dataset for the domain and slots.
To address this, we manually corrected the annotation results to ensure consistent entity mappings, which resolved the majority of the errors.
## A.7 Error Analysis: English-Hindi Code-Mixed
To understand the errors of English-Hindi (en-hi)
code-mix set, we also sampled 10% of the erroneous examples for each domain from the en-hi validation set. In addition to the error categories noticed for English (Section 7.2), we observe the following patterns:
Response Generation (45%). Model prediction for Response Generation step is low on BLEU
score because there can be multiple ways of codemixing a sentence. The response could be monolingual, or can be code-mixed to various degrees, or different spans within a sentence could be switched, and such errors account to 19% of the total errors.
For example, the gold truth is "yah 179 minutes tak chalta hai" and the model output is "movie ki duration 179 minutes hai" 3. For around 20% samples, the generated responses are incoherent, malformed sentences or unnatural code-mixed sentences. We also observed that the generated sentences are low on fluency, while matching the semantic content of the gold truth, accounting for 6% of total errors. It is our conjecture that the erroneous code-mixed text generation can be ascribed to mBART's restricted ability to generate code-mixed sentences.
3In the examples, Hindi tokens (in italic) are written in romanized format for ease of reading. In the datasets, Hindi tokens are in Devanagri script.
Erroneous slot-relation-value (35%). In some cases the model predicts additional slot-relationvalues, in addition to the correct slot-relationvalues (10% of the erroneous samples). For example, gold truth is "(weather) date equal_to next Tuesday" and the predicted output is "(weather)
city equal_to Suzhou, date equal_to next Tuesday".
It is likely that the model is copying additional slot-relation-value tuples that are available in the knowledge part of the input. In 23% of the analyzed erroneous samples, the model output has the wrong action, domain, slot, relation or slot values.
About 1% of the erroneous samples hallucinated slot values.
Language and Script Difference (20%).
Across the DST, DA, and RG steps, the gold truth differs from prediction in terms of the script or the language or both. For instance, the slot value could be in Hindi in the Devanagari script, whereas the model prediction is in English or/and in the Roman script. In some cases, although the values match, differences in script/languages can cause the automatic approach to identify them as an error.
For example, the gold truth "(train) date equal_to
'next Sunday morning' , seat_type equal_to 'second class ticket' " differs only slightly from the model output "(train) date equal_to 'next Sunday morning', seat_type equal_to 'second class' ". The measured error rate may not reflect the correct model performance because some of these errors can be reduced by accounting for the semantic match between the generated output and the gold truth.
## A.8 Dialogue Example
In Table 4, we show two turns of an example in the original dataset, and its translation to other languages.
## A.9 Example Of The Checking Process
Figure 4 shows an example of our checking process described in Section 5 during the translation from English to French.
| DST API | |
|-----------|-------|
| Turn 1 | DA RG |
| Input (EN) | DST: <state> null <endofstate> <history> USER: Hi, my friend is coming to Suzhou to visit me, I want to take him to a commercial center in the mid-price range. Do you have anything to recommend? <endofhistory> |
|--------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Output (EN) | ( attraction ) consumption " mid " , type " commercial center " |
| Input (ZH) | DST: <state> null <endofstate> <history> USER: 你好,我朋友要来苏州找我玩,我 想带他找一个消费中等的商业中心逛逛,求推荐。 <endofhistory> |
| Output (ZH) | ( attraction ) consumption " 中等 " , type " 商业中心 " |
| Input (EN) | API: <knowledge> null <endofknowledge> <state> ( attraction ) consumption " mid " , type " commercial center " <endofstate> <history> USER: Hi, my friend is coming to Suzhou to visit me, I want to take him to a commercial center in the mid-price range. Do you have anything to recommend? <endofhistory> |
| Output (EN) | yes |
| Input (ZH) | API: <knowledge> null <endofknowledge> <state> ( attraction ) consumption " 中 等 " , type " 商业中心 " <endofstate> <history> USER: 你好,我朋友要来苏州找 我玩,我想带他找一个消费中等的商业中心逛逛,求推荐。 <endofhistory> |
| Output (ZH) | yes |
| Input (EN) | DA: <knowledge> ( attraction ) address " Guanqian Street, Gusu District, Suzhou City. " , area " Gusu District " , available_options " 4 " , consumption " moderate " , metro_station " true " , name " Guanqian Street " , opening_hours " all day " , phone_number " N/A " , score " 4.3 " , the_most_suitable_people " friends " , ticket_price " free " , type " commercial center " <endofknowledge> <state> ( attraction ) consumption " mid " , type " commercial center " <endofstate> <history> USER: Hi, my friend is coming to Suzhou to visit me, I want to take him to a commercial center in the mid-price range. Do you have anything to recommend? <endofhistory> |
| Output (EN) | ( attraction ) recommend name " Guanqian Street " |
| Input (ZH) | DA: <knowledge> ( attraction ) address " 苏州市姑苏区观前街 " , area " 姑 苏区 " , available_options " 4 " , consumption " 中等 " , metro_station " 是 " , name " 观前街 " , opening_hours " 全天 " , phone_number " 无 " , score " 4.3 " , the_most_suitable_people " 朋友出游 " , ticket_price "免费 " , type " 商业中心 " <endofknowledge> <state> ( attraction ) consumption " 中等 " , type " 商业中心 " <endofstate> <history> USER: 你好,我朋友要来苏州找我玩,我想带他找一个消 费中等的商业中心逛逛,求推荐。<endofhistory> |
| Output (ZH) | ( attraction ) recommend name " 观前街 " |
| Input (EN) | RG: <actions> ( attraction ) recommend name " Guanqian Street " <endofactions> <history> USER: Hi, my friend is coming to Suzhou to visit me, I want to take him to a commercial center in the mid-price range. Do you have anything to recommend? <endofhistory> |
| Output (EN) | You can go to Guanqian Street. |
| Input (ZH) | API: <knowledge> null <endofknowledge> <state> ( attraction ) consumption " 中 等 " , type " 商业中心 " <endofstate> <history> USER: 你好,我朋友要来苏州找 我玩,我想带他找一个消费中等的商业中心逛逛,求推荐。 <endofhistory> |
| Output (ZH) | 去观前街吧。 |
| Input (EN) | DST: <state> ( attraction ) consumption " mid " , type " commercial center " <endofstate> <history> AGENT_ACTS: ( attraction ) recommend name " Guanqian Street " USER: Oh yeah, why didn't I think of that? When is it open? <endofhistory> | | | | | | | | | | | | | |
|------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----|-------------|----|-----|----|----|------|----|----------|--------|----|----|
| Output (EN) | ( | attraction | ) | consumption | " | mid | " | , | name | " | Guanqian | Street | " | , |
| the_most_suitable_people " friend " , type " commercial center " | | | | | | | | | | | | | | |
| Input (ZH) | DST: <state> ( attraction ) consumption " 中等 " , type " 商业中心 " <endofstate> <history> AGENT_ACTS: ( attraction ) recommend name " 观前街 " USER: 对 呀,我怎么没想起来呢,什么时间开放呀? <endofhistory> | | | | | | | | | | | | | |
| Output (ZH) | ( attraction ) consumption " 中等 " , name " 观前街 " , type " 商业中心 " | | | | | | | | | | | | | |
| DST | Input (EN) | API: <knowledge> ( attraction ) address " 苏州市姑苏区观前街 " , area " 姑 苏区 " , available_options " 4 " , consumption " 中等 " , metro_station " 是 " , name " 观前街 " , opening_hours " 全天 " , phone_number " 无 " , score " 4.3 " , the_most_suitable_people " 朋友出游 " , ticket_price " 免费 " , type " 商业中 心 " <endofknowledge> <state> ( attraction ) consumption " 中等 " , name " 观 前街 " , type " 商业中心 " <endofstate> <history> AGENT_ACTS: ( attraction ) recommend name " 观前街 " USER: 对呀,我怎么没想起来呢,什么时间开放呀? <endofhistory> | | | | | | | | | | | | |
| Output (EN) | yes | | | | | | | | | | | | | |
| Input (ZH) | API: <knowledge> null <endofknowledge> <state> ( attraction ) consumption " 中 等 " , type " 商业中心 " <endofstate> <history> USER: 你好,我朋友要来苏州找 我玩,我想带他找一个消费中等的商业中心逛逛,求推荐。 <endofhistory> | | | | | | | | | | | | | |
| Output (ZH) | yes | | | | | | | | | | | | | |
| API | Input (EN) | DA: <knowledge> ( attraction ) address " Guanqian Street, Gusu District, Suzhou City. " , area " Gusu District " , available_options " 1 " , consumption " moderate " , features " You can try food from time-honored Suzhou brands, such as Songhelou Restaurant, Huang Tianyuan, and visit Xuanmiao Temple, the place that gave the street its name. " , metro_station " true " , name " Guanqian Street " , opening_hours " all day " , phone_number " N/A " , score " 4.3 " , the_most_suitable_people " friends " , ticket_price " free " , type " commercial center " <endofknowledge> <state> ( attraction ) consumption equal_to " mid " , name equal_to " Guanqian Street " , the_most_suitable_people equal_to " friend " , type equal_to " commercial center " <endofstate> <history> AGENT_ACTS: ( attraction ) recommend name equal_to " Guanqian Street " USER: Oh yeah, why didn't I think of that? When is it open? <endofhistory> | | | | | | | | | | | | |
| Output (EN) | ( attraction ) inform opening_hours " all day " | | | | | | | | | | | | | |
| Input (ZH) | DA: <knowledge> ( attraction ) address " 苏州市姑苏区观前街 " , area " 姑 苏区 " , available_options " 4 " , consumption " 中等 " , metro_station " 是 " , name " 观前街 " , opening_hours " 全天 " , phone_number " 无 " , score " 4.3 " , the_most_suitable_people " 朋友出游 " , ticket_price "免费 " , type " 商业中心 " <endofknowledge> <state> ( attraction ) consumption " 中等 " , type " 商业中心 " <endofstate> <history> USER: 你好,我朋友要来苏州找我玩,我想带他找一个消 费中等的商业中心逛逛,求推荐。<endofhistory> | | | | | | | | | | | | | |
| Output (ZH) | ( attraction ) inform opening_hours " 全天 " | | | | | | | | | | | | | |
| Turn 2 | DA | Input (EN) | RG: <actions> ( attraction ) inform opening_hours " all day " <endofactions> <history> USER: Oh yeah, why didn't I think of that? When is it open? <endofhistory> | | | | | | | | | | | |
| Output (EN) | It's open all day. | | | | | | | | | | | | | |
| Input (ZH) | RG: <actions> ( attraction ) inform opening_hours " 全天 " <endofactions> <history> USER: 对呀,我怎么没想起来呢,什么时间开放呀? <endofhistory> | | | | | | | | | | | | | |
| Output (ZH) | 全天开放哟。 | | | | | | | | | | | | | |
| RG | | | | | | | | | | | | | | |
![18_image_0.png](18_image_0.png)
![18_image_1.png](18_image_1.png)
![19_image_0.png](19_image_0.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 9
✓ A2. Did you discuss any potential risks of your work?
Section 10
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4,5
✓ B1. Did you cite the creators of artifacts you used?
Section 1,2,3,4,5
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Artifacts are under Apache License 2.0. Please check https://github.com/stanford-oval/dialogues/blob/main/LICENSE
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 5,6,9,10
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
X-RiSAWOZ dataset is translated from the original Chinese RiSAWOZ. No new data was collected.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 5
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 5
## C ✓ **Did You Run Computational Experiments?** Section 6
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section A.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section A.1
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section A.1
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section A.1 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Section 5, A.2
✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Detail of annotation and instructions are discussed in Section 5 and A.2.
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Section 5, A.2
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
X-RiSAWOZ dataset is translated from the original Chinese RiSAWOZ. No new data was collected.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. X-RiSAWOZ dataset is translated from the original Chinese RiSAWOZ. No new data was collected.
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Section 5, A.2. |
meyer-buys-2023-subword | Subword Segmental Machine Translation: Unifying Segmentation and Target Sentence Generation | https://aclanthology.org/2023.findings-acl.175 | Subword segmenters like BPE operate as a preprocessing step in neural machine translation and other (conditional) language models. They are applied to datasets before training, so translation or text generation quality relies on the quality of segmentations. We propose a departure from this paradigm, called subword segmental machine translation (SSMT). SSMT unifies subword segmentation and MT in a single trainable model. It learns to segment target sentence words while jointly learning to generate target sentences. To use SSMT during inference we propose dynamic decoding, a text generation algorithm that adapts segmentations as it generates translations. Experiments across 6 translation directions show that SSMT improves chrF scores for morphologically rich agglutinative languages. Gains are strongest in the very low-resource scenario. SSMT also learns subwords that are closer to morphemes compared to baselines and proves more robust on a test set constructed for evaluating morphological compositional generalisation. |
## Subword Segmental Machine Translation: Unifying Segmentation And Target Sentence Generation Francois Meyer And Jan Buys
Department of Computer Science University of Cape Town [email protected], [email protected]
## Abstract
Subword segmenters like BPE operate as a preprocessing step in neural machine translation and other (conditional) language models. They are applied to datasets before training, so translation or text generation quality relies on the quality of segmentations. We propose a departure from this paradigm, called subword segmental machine translation (SSMT). SSMT
unifies subword segmentation and MT in a single trainable model. It learns to segment target sentence words while jointly learning to generate target sentences. To use SSMT during inference we propose *dynamic decoding*, a text generation algorithm that adapts segmentations as it generates translations. Experiments across 6 translation directions show that SSMT improves chrF scores for morphologically rich agglutinative languages. Gains are strongest in the very low-resource scenario. SSMT also learns subwords that are closer to morphemes compared to baselines and proves more robust on a test set constructed for evaluating morphological compositional generalisation.
## 1 Introduction
The continued success of neural machine translation (NMT) can be partially attributed to effective subword segmenters. Algorithms like byte-pair encoding (BPE) (Sennrich et al., 2016) and Unigram LM (ULM) (Kudo, 2018) are computationally efficient preprocessing steps that enable smaller vocabularies and open-vocabulary models.
These methods have proved quite effective, but fall short in certain contexts. For morphologically complex languages they are sub-optimal (Klein and Tsarfaty, 2020) and inconsistent (Meyer and Buys, 2022). This is amplified in low-resource settings (Zhu et al., 2019; Wang et al., 2021; Ács, 2019), where handling rare words is crucial. These issues can be partially attributed to the fact that subword segmentation is separated from model training. BPE and ULM are applied to the training
| Train Test |
|--------------|
I do understand. → Ndi-ya-qonda. I am tired. → Ndi-diniwe.
Where are you from? → U-vela phi? Are you busy? → Ingaba u-xakekile?
Test
Do you understand? → U-ya-qonda? I am busy. → Ndi-xakekile.
Table 1: Parallel English-Xhosa sentences with morphologically segmented Xhosa words. The train/test split shows why its critical to accurately model morphemes and morphological compositional generalisation i.e. novel combinations of known morphemes.
corpus before training starts, so models are reliant on their output. This is not ideal, since these algorithms do not learn segmentations that optimise model performance.
He et al. (2020) address this issue by proposing dynamic programming encoding (DPE), which trains an NMT model that marginalises over target sentence segmentations. After training they apply their model as a subword segmenter by computing the maximising segmentations. DPE is still a preprocessing step (a separate vanilla NMT model is trained on a corpus segmented by DPE), but since its segmentations are trained on MT, they are at least connected to the task at hand.
In this paper we go one step further by fully unifying NMT and subword segmentation. We propose subword segmental machine translation
(SSMT), an end-to-end NMT model that learns subword segmentation during training and can be used directly for inference. It is trained with a dynamic programming algorithm that enables learning subword segmentations that optimise its MT training objective. The architecture is a Transformerbased adaptation of the subword segmental language model (SSLM) (Meyer and Buys, 2022) for the joint task of MT and target-side segmentation.
We also propose *dynamic decoding*, a decoding algorithm for subword segmental models that dynamically adapts subword segmentations as it generates translations. The fact that our model can be used directly to generate translations sets it apart from existing segmenters. SSMT is not a preprocessing step in any sense - it is single model that learns how to translate and how to segment words, and it can be used to generate translations.
We evaluate on English → (Xhosa, Zulu, Swati, Finnish, Tswana, Afrikaans). As shown in table 2, these languages span 3 morphological typologies and several levels of data availability, so they provide a varied test suite to evaluate subword methods across different linguistic contexts. SSMT outperforms baselines on languages that are agglutinating and conjunctively written (the highest morphological complexity), but is outperformed on simpler morphologies. SSMT achieves its biggest gains on Swati, which is our most data scarce language.
We conclude that SSMT is justified for morphologically complex languages and especially useful when the languages are low-resourced.
We analyse the linguistic plausibility of SSMT
by applying it to unsupervised morphological segmentation. SSMT subwords are closer to morphemes than our baselines. Lastly, we adapt the methods of Keysers et al. (2020) to construct an MT test set for morphological compositional generalisation - the ability to generalise to previously unseen combinations of morphemes. The performance of all models degrade on the more challenging test set, but SSMT exhibits the greatest robustness. We posit that SSMT's performance gains on morphologically complex languages are due to its morphologically consistent segmentations and its superior modelling of morphological composition.1
## 2 Related Work
Subword segmentation has been widely adopted in NLP. Several algorithms have been proposed, with BPE (Sennrich et al., 2016) and ULM (Kudo, 2018) among the most popular. BPE starts with an initial vocabulary of characters and iteratively adds frequently co-occuring subwords. ULM starts with a large initial vocabulary and iteratively discards subwords based on the unigram language model likelihood. Both of these exemplify the dominant 1Our code and models are available at https://
github.com/francois-meyer/ssmt.
| Language | Morphology | Orthography | Sentences |
|---------------|---------------|---------------|-------------|
| Xhosa | 8.7mil | | |
| Zulu | 3.9mil | | |
| agglutinative | conjunctive | | |
| Finnish | 1.6mil | | |
| Swati | 165k | | |
| Tswana | agglutinative | disjunctive | 5.9mil |
| Afrikaans | analytic | disjunctive | 1.6mil |
Table 2: Morphological typology and training data sizes for the target languages used in our experiments.
paradigm in NLP: subword segmentation as a preprocessing step. Segmenters are applied to datasets before models are trained on the segmented text.
There are downsides to relegating subword segmentation to the domain of preprocessing. The algorithms are task-agnostic. BPE is essentially a compression algorithm (Gage, 1994), while ULM
assumes independence between subword occurrences. Neither of these strategies are in any way connected to the task for which the subwords will eventually be used (in our case machine translation). Ideally subword segmentation should be part of the learnable parameters of a model, so that it can be adjusted to optimise the training objective.
There has been some research on unifying subword segmentation and machine translation. Following recent character-based language models
(Clark et al., 2022; Tay et al., 2022), there has been work on character-level NMT models that learn latent subword representations (Edman et al.,
2022). However, Libovický et al. (2022) found that subword NMT models still outperform their character-level counterparts. Kreutzer and Sokolov
(2018) learn source sentence segmentation during training and find that models prefer character-level segmentations. DPE (He et al., 2020) learns target sentence segmentation during training and is then applied as a subword segmenter.
This line of work is related to a more general approach known as segmental sequence modelling, where sequence segmentation is viewed as a latent variable to be marginalised over during training. It was initially proposed for tasks like handwriting recognition (Kong et al., 2016) and speech recognition (Wang et al., 2017). Subsequently segmental language models (SLMs) have been proposed for unsupervised Chinese word segmentation (Sun and Deng, 2018; Kawakami et al., 2019; Downey et al., 2021). This was adapted for subword segmentation by Meyer and Buys (2022), who proposed subword segmental language modelling (SSLM). This is the
![2_image_0.png](2_image_0.png)
line of work we build on in this paper, adapting subword segmental modelling for NMT.
Our model contrasts with DPE in a few ways.
Firstly, our lexicon consists of the V most frequent character n-grams, so unlike DPE we don't rely on BPE to build the vocabulary. Secondly, we supplement our subword model with a character decoder, which is capable of generating out-of-vocabulary subwords. Lastly, through our proposed dynamic decoding we use SSMT directly to generate translations, instead of having to train an additional NMT
model from scratch on our segmentations.
## 3 Subword Segmental Machine Translation (Ssmt) 3.1 Architecture
SSMT is a Transformer-based encoder-decoder
(Figure 1). The encoder is that of a vanilla Transformer NMT model. Source language sentences are pre-segmented with BPE. The decoder adapts the subword segmental architecture of Meyer and Buys (2022) to be Transformer-based (as opposed to their LSTM-based model) and conditioned on the source sentence. During training SSMT considers all possible subword segmentations of the target sentence and learns which of these optimise its translation training objective.
Given a source sentence of BPE tokens x =
x1, x2*, ..., x*|x|, SSMT generates the target sentence characters y = y1, y2*, ..., y*|y| as a sequence of subwords s = s1, s2*, ..., s*|s|. We introduce a conditional semi-Markov assumption, whereby each subword probability is computed as
$$\begin{array}{c}{{p(s_{i}|\mathbf{s}_{<\mathrm{i}},\mathbf{x})\approx p(s_{i}|\pi(\mathbf{s}_{<\mathrm{i}}),\mathbf{x})}}\\ {{=p(s_{i}|\mathbf{y}_{<\mathrm{j}},\mathbf{x}),}}\end{array}$$
where π(s<i) is a concatenation operator that converts the sequence s<iinto the raw unsegmented characters y<j preceding subword si. Conditioning on the unsegmented history enables efficiency when we marginalise over subword segmentations.
The subword probability of Equation 2 is based on a mixture (shown on the right in Figure 1),
$$p(s_{i}|\mathbf{y}_{<\mathbf{j}},\mathbf{x})=g_{j}p_{\text{char}}(s_{i}|\mathbf{y}_{<\mathbf{j}},\mathbf{x})+$$ $$(1-g_{j})p_{\text{lex}}(s_{i}|\mathbf{y}_{<\mathbf{j}},\mathbf{x}),\tag{3}$$
which combines probabilities from a character LSTM decoder (pchar) and a fully connected layer that outputs a probability (plex) if siis in the lexicon. The lexicon contains the V most frequent character sequences (n-grams) up to some maximum segment length in the training corpus (V is a prespecified vocabulary size). The lexicon models frequent subwords (e.g. common morphemes), while the character decoder models rare subwords and previously unseen words (e.g. it can copy names from source to target sentences). The mixture coefficient g (computed by a fully connected layer)
allows SSMT to learn, based on context, when the next subword is likely to be in the lexicon and when it should rely on character-level generation.
## 3.2 Training
We use this architecture to train a model that jointly learns translation and target-side subword segmentation. The subword segmentation of a target sentence is treated as a latent variable and marginalised over to compute the probability
$$p(\mathbf{y}|\mathbf{x})=\sum_{\mathbf{s}:\pi(\mathbf{s})=\mathbf{y}}p(\mathbf{s}|\mathbf{x}),\qquad\qquad(4)$$
$$\mathrm{(1)}$$ (2) ...
where the probability of a specific subword segmentation s is computed with the chain rule as
![3_image_0.png](3_image_0.png)
a product of its individual subword probabilities
(each computed as Equation 3).
We can compute this marginal efficiently with a dynamic programming algorithm, where at each character position k in the raw target sentence y the forward probability is,
$$\alpha_{k}=\sum_{j=f({\bf y},k)}^{k}\alpha_{k}p(s={\bf y_{j:t}}|{\bf y}{<}{\bf j},{\bf x}),\quad\quad(5)$$
with α0 = 1. The function f(y, k) outputs the starting index of the longest possible subword ending at character k. This will either be k −m, where m is the maximum segment length (a pre-specified hyperparameter) or it will be the starting index of the current word (if character k − m precedes the start of the current word).
This last constraint is critical, since it limits the model to learn segmentation of words into subwords. The function f(y, k) ensures that our model cannot consider segments that cross word boundaries; the only valid segments are those within words. Characters that separate words (e.g. spaces and punctuation) are treated as 1-character segments. In this way we also implicitly model the beginning and end of words, since these are the boundaries of valid segments.
## 4 Dynamic Decoding
For standard subword models, beam search over the subword vocabulary is the *de facto* approach.
However, the SSMT mixture model (Equation 3) has two vocabularies, a character vocabulary and a subword lexicon. Beam search can be applied to either one. However, to approximate finding the highest scoring translation, subword prediction should be based on the full mixture distribution.
During training SSMT considers all possible segmentations of the target sentence with dynamic programming. We would like to consider different segmentations during decoding as well, instead of being limited to the subword boundaries dictated by greedy prediction. Doing this requires retaining part of the dynamic program during decoding, similar to Yu et al. (2016) who modelled the latent alignment between (multi-word) segments. In this section we outline *dynamic decoding*, an algorithm that (1) incorporates both the character and lexicon models and (2) dynamically adjusts subword segmentation during generation.
## 4.1 Next Character Prediction
Dynamic decoding generates one character at a time and computes next-character probabilities with the full mixture model. Since we generate characters we also explicitly model subword boundary decisions, i.e., when we generate a character we consider whether the character ends a subword (it is the last character in the subword) or whether it continues a subword (more characters will follow in the subword). The mixture model's next-character probability calculation is different, depending on whether we compute the probability of the next character ending the current subword (denoted end)
or continuing the current subword (denoted con).
Similarly, at each character generation step we have to consider whether the *preceding* character ends or continues a subword. If it ends a subword, then the next character starts a new subword. If the preceding character continues a subword, then the next character is the latest addition to the current subword. These considerations also affects the next-character probability.
Given this setup, we have 4 possible cases for
Algorithm 1: Dynamic decoding Input: x is a source sentence of BPE tokens Output: y
∗is the generated translation, a character sequence concluding with <eot> (end-of-translation)
Notation: C is a character vocabulary yend: partial translation, last char ends subword ycon: partial translation, last char continues subword ycon = arg max y∈C
pend-con(y|x), ycon = [ycon]
yend = arg max y∈C
pend-end(y|x), yend = [yend]
while yend[−1] ̸= <eot> do ycon-con = arg max y∈C
pcon-con(y|ycon, x)
yend-con = arg max y∈C
pend-con(y|yend, x)
ycon = arg max p(y)
y∈{[ycon,ycon-con],[yend,yend-con]}
![4_image_1.png](4_image_1.png)
end
![4_image_0.png](4_image_0.png)
return y
next-character generation:
1. **con-end** - the preceding character continues a subword that the next character ends, 2. **end-con** - the preceding character ends a subword and the next character starts a new one, 3. **end-end** - both preceding and next characters end subwords, 4. **con-con** - both preceding and next characters continue the same subword.
Each case requires different calculations to obtain next-character probabilities with the SSMT
mixture model. We present and motivate probability formulas for all 4 cases in Appendix A,
defining the probabilities used in algorithm 1
(pcon-end, pend-con, pend-end, pcon-con).
## 4.2 Dynamic Segmentation
One could use next-character probabilities to greedily generate translations one character at a time, inserting subword boundaries when pcon-end >
pcon-con or pend-end > pend-con. However, this would amount to a greedy search over the space of possible subword segmentations, which might be suboptimal given characters that are generated later. A
naive beam search would not distinguish between complete and incomplete subwords, which introduces a bias towards short subwords during decoding. Ideally the decoding algorithm should make the final segmentation decision based on characters to the left and right of a potential subword boundary, without directly comparing complete and incomplete subwords. To achieve this we design a decoding algorithm that retains part of the dynamic program during generation (see algorithm 1).
For simplicity we explain dynamic decoding for a beam size of 1. Figure 2 demonstrates the generation of the first few characters of a translation.
The key is to hold out on finalising segmentations until subsequent characters have been generated.
We compute candidates for the next character, but do so separately for candidates that continue the current subword and those that end the current subword (step (a) in Figure 2). The segmentation decision is postponed until after the next character has been generated. We now essentially have two
"potential" beams - one for continuing the current subword and another for ending it. For each of these potential beams, we repeat the previous step: we compute candidates for the next character, keeping separate the candidates that continue and end the subword (step (b) in Figure 2).
Now we reconsider past segmentations. We compare sequence probabilities across the two potential beams of the character generated one step back
(comparisons are visualised by arcs under step (c)).
We select the best potential beam that continues the current subword and the best potential beam that ends the current subword. We then repeat the process on these new potential beams. Essentially we are retrospectively deciding whether the previous character should end a subword. Since we have postponed the decision, we are able to consider how it would affect the generation of the next character. For example, in step (2.c) of Figure 2, the subword boundary after character "n" is reconsidered and discarded, given that it leads to lower probability sequences when we generate one character ahead.
During training, we consider all possible subword segmentations of a target sentence. During decoding, at each generation step we consider all possible segmentations of the two most recently generated characters. In this way we retain part of the dynamic program for subword segmentation.
## 5 Machine Translation Experiments
We train MT models from English to 6 languages.
As shown in table 2, the chosen languages allow us to compare how effective SSMT is across 3 different morphological typologies - agglutinating conjunctive, agglutinating disjunctive, and an-
| Model → | BPE | ULM | DPE | SSMT | | | | |
|--------------|-------|-------|-------|--------|------|------|------|------|
| English to ↓ | BLEU | chrF | BLEU | chrF | BLEU | chrF | BLEU | chrF |
| Xhosa | 14.3 | 53.2 | 15.0 | 53.3 | 14.9 | 53.3 | 15.0 | 53.5 |
| Zulu | 13.5 | 53.2 | 13.7 | 53.0 | 14.2 | 53.7 | 14.2 | 53.7 |
| Finnish | 15.0 | 50.1 | 15.0 | 49.6 | 15.4 | 50.0 | 14.4 | 50.1 |
| Swati | 0.2 | 23.4 | 0.4 | 23.7 | 0.3 | 23.5 | 0.7 | 26.2 |
| Tswana | 10.2 | 36.9 | 10.1 | 35.5 | 9.1 | 34.6 | 9.7 | 36.5 |
| Afrikaans | 33.4 | 64.2 | 33.5 | 64.3 | 34.6 | 65.0 | 32.0 | 63.6 |
Table 3: MT test set performance (FLORES devtest). Underline indicates best BLEU and chrF scores, while **bold**
indicates scores with differences from the best that are not statistically significant (p-value of 0.05)
| Model | chrF |
|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------|
| 2 models to segment + translate with beam search +BPEvocab –char (DPE) 23.3 +lexicon –char (SSMT –char) 23.7 +lexicon +char (SSMT) 23.1 1 model with dynamic decoding +lexicon –char (SSMT –char) 26.2 +lexicon +char (SSMT) 26.4 | |
alytic. Most of the languages are agglutinating conjunctive, since prior work has highlighted the importance of subword techniques for morphologically complex languages (Klein and Tsarfaty, 2020; Meyer and Buys, 2022). For English to Finnish we train on Europarl2, while for the other directions we train on WMT22_African.3 The parallel dataset sizes are given in table 2. We use FLORES dev and devtest as validation and test sets, respectively.
Each probability in the SSMT dynamic program
(Equation 5) requires a softmax computation, so SSMT takes an order of magnitude (10×) longer to train than pre-segmented models. For example, English to Zulu with BPE trained for 1 day, while SSMT trained for 10 days (both on a single A100 GPU). SSMT training times are comparable to those of the DPE segmentation model. On our test sets it takes on average 15 seconds to translate a single sentence (as opposed to our baselines, which take 0.05 seconds per sentence). We did experiment with naive beam search over the combined lexicon and character vocabularies of SSMT,
but this results in much worse validation performance than dynamic decoding (49.8 vs 53.8 chrF
on the English to Zulu validation set; see table 7 in the Appendix). We use a beam size of 5 for beam search with our baselines and for dynamic decoding, since this optimised validation performance
(table 7). Further training and hyperparameter details are provided in Appendix B.
## 5.1 Mt Results
We evaluate our models with BLEU and chrF. The chrF score (Popovic´, 2015) is a character-based metric that is more suitable for morphologically rich languages than token-based metrics like BLEU
(Bapna et al., 2022). MT performance metrics on the full test sets are shown in table 3. We perform statistical significance testing through paired bootstrap resampling (Koehn, 2004). In terms of chrF,
SSMT outperforms or equals all baselines on all 4 agglutinating conjunctive languages. The same holds for BLEU on 3 of the 4 languages.
These results prove that SSMT is an effective subword approach for morphologically complex languages. They also corroborate the findings of Meyer and Buys (2022) that subword segmental modelling leads to greater consistency across different morphologically complex languages. On Xhosa, Zulu, and Finnish, SSMT and DPE exhibit comparable performance. However, DPE requires multiple training steps: a DPE segmenter model, applying that to a corpus, and then training a NMT
model on the segmented corpus. SSMT has the notable benefit of being a single model for segmentation and generation.
On the languages with simpler morphologies
(Tswana and Afrikaans), SSMT is outperformed by baselines. There is a sharp contrast between the relative performance of SSMT on the morphologically
| Xhosa | Zulu | Swati | | | | | | | |
|---------|--------|---------|-------|-------|-------|-------|-------|-------|-------|
| Model | P | R | F1 | P | R | F1 | P | R | F1 |
| BPE | 37.16 | 25.42 | 30.19 | 51.57 | 29.62 | 37.62 | 19.57 | 16.17 | 17.71 |
| ULM | 61.22 | 34.65 | 44.25 | 63.70 | 31.72 | 42.35 | 52.48 | 45.26 | 48.61 |
| DPE | 51.52 | 44.24 | 47.60 | 59.66 | 41.64 | 49.05 | 16.96 | 17.00 | 16.98 |
| SSMT | 49.55 | 72.60 | 58.90 | 52.87 | 66.41 | 58.87 | 47.47 | 61.89 | 53.73 |
complex and morphologically simple languages.
SSMT does not seem to be justified for languages that are not agglutinating and conjunctive.
## 5.2 Low-Resource Translation Analysis
SSMT improves performance most drastically on Swati, which is distinct among the translation directions in being extremely data scarce. We confirm that this is not simply because of particular hyperparameter choices, because the finding holds across different settings during hyperparameter tuning (see Figure 4 in the Appendix). To investigate the factors behind SSMT's success, we perform an ablation analysis of the different components of SSMT (shown in table 4) compared to DPE.
Learning a subword vocabulary with BPE (the approach of DPE) does not improve performance over the frequency-based lexicon of SSMT. Our results also show that when the goal is to use the model as a segmenter, supplementing the subword model with a character model worsens performance. Dynamic decoding is the most important factor in the success of SSMT. The largest gains do not come from learning subword segmentation during training, but from using the same model directly during inference with dynamic decoding.
Having a single model for segmentation, MT, and generation leads to the best performance overall.
## 6 Unsupervised Morphological Segmentation
Morphemes are the primary linguistic units in agglutinative languages. We can analyse to what extent SSMT subwords resemble morphemes by applying it as a segmenter to the task of unsupervised morphological segmentation. The task is fully unsupervised, since our baselines and SSMT
models are tuned to optimise validation MT performance and never have access to morphological annotations (they are trained on raw text). The task amounts to evaluating whether these subword segmenters "discover" morphemes as linguistic units.
We evaluate our models on data from the SADiLaR-II project (Gaustad and Puttkammer, 2022). The dataset contains 146 parallel sentences in English and 3 of the agglutinating conjunctive languages for which we train MT models (Xhosa, Zulu, Swati). The dataset provides morphological segmentations for all words in the parallel sentences. We apply the preprocessing scripts of Moeng et al. (2021) to extract surface segmentations.
To apply SSMT as a segmenter we use the Viterbi algorithm to compute the highest scoring subword segmentation of a target sentence given the source sentence. We compare SSMT subwords to the baseline segmenters from our MT experiments.
Table 5 reports precision, recall, and F1 for morpheme boundary identification. SSMT has greater F1 scores than any of the baselines across all 3 languages, indicating that generally SSMT learns subword boundaries that are closer to morphological boundaries. SSMT also has the highest recall for all 3 languages, but lower precision. This show that SSMT sometimes over-segments words, which Meyer and Buys (2022) also found to be the case for SSLM. Table 6 in the Appendix shows similar results for the same task using morpheme identification as metric.
## 7 Morphological Compositional Generalization
SSMT learns morphological segmentation better than standard segmenters, but is it also learning to compose the meanings of words from their constituent morphemes? To investigate this we design an experiment aimed at testing morphological compositional generalisation.
Compositional generalisation is the ability to compose novel combinations from known parts
(Partee, 1984; Fodor and Pylyshyn, 1988). Recent works have investigated whether neural models are able to achieve such generalisation (Lake and Baroni, 2018; Hupkes et al., 2020; Kim and Linzen, 2020). For example, Keysers et al. (2020) test whether models can handle novel syntactic combinations of known semantic phrases. They construct train/test splits with similar phrase distributions, but divergent syntactic compound distributions. We adapt their approach to construct a test set with a similar morpheme distribution to the train set, but a divergent word distribution. This evaluates whether models can handle novel combinations of known morphemes (previously unseen words consisting of previously seen morphemes). Table 8 in the Appendix categorises our experiment according to the generalisation taxonomy of Hupkes et al. (2022).
## 7.1 Compound Divergence
Keysers et al. (2020) propose compound divergence as a metric to quantify how challenging it is to generalise compositionally from one dataset to another. We use it to sample a subset of a test set that requires morphological compositional generalisation from a training set.
To compute morpheme distributions we segment our train and test sets into morphemes with the trained morphological segmenters of Moeng et al.
(2021). Following Keysers et al. (2020), we refer to morphemes as *atoms* and words as *compounds*.
For a dataset T, we compute the distribution of its compounds FC(T) as the relative word frequencies and the distribution of its atoms FA(T) as the relative morpheme frequencies. For a train set V
and test set W we compute compound divergence DC(V ||W) and atom divergence DA(V ||W), respectively quantifying how different the word and morpheme distributions of the train and test sets are (larger divergence implies greater difference).
We use the definitions of compound and atom divergence proposed by Keysers et al. (2020) and include these in Appendix C. We implement a procedure (also outlined in Appendix C) for extracting a subset of the test set such that DC can be specified and DA is held as low as possible, producing a test set that requires models trained on V to generalise to new morphological compositions.
## 7.2 Results
For this experiment we focus on English → Zulu translation. We extract 2 test subsets of 300 sentences each from Zulu FLORES devtest. For the first subset we specified D
target C = 0.2, while for the second D
target C = 0.3. We settled on these val-
![7_image_0.png](7_image_0.png)
ues since it was not possible to extract test subsets outside this range with equal atom divergence to the train set (around 0.07 for both). The result is 2 test subsets that require varying degrees of morphological generalisation. The subset with DC = 0.3 is more challenging than the DC = 0.2 subset, provided the model is trained on the same train set as ours (English-Zulu WMT22 dataset).
The results are shown in Figure 3. On the less challenging subset (DC = 0.2), DPE slightly ouperforms SSMT, while the average chrF score of the 4 models is 54.1. On the more challenging subset (DC = 0.3), the average chrF score drops to 51.5, which shows that models cannot maintain the same level of performance when more morphological generalisation is required. This points to the fact that neural MT models are not reliably learning morphological composition, instead sometimes relying on surface-level heuristics (e.g. learning subword-to-word composition that is not morphologically sound). SSMT proves to be most robust to the distributional shift, achieving the best chrF
score on the more challenging subset. This shows that SSMT is learning composition more closely resembling true morphological composition. SSMT
and DPE comfortably outperform BPE and ULM,
indicating more generally that learning subword segmentation during training improves morphological compositional generalisation.
## 8 Conclusion
SSMT unifies subword segmentation, MT training, and decoding in a single model. Our results show that it improves translation over existing segmenters when the target language is agglutinative and conjunctively written. It also produces subwords that are closer to morphemes and learns subword-to-word composition that more closely resembles morphological composition. In future work our dynamic decoding algorithm could be used to generate text with subword segmental models for text generation tasks other than MT.
## Limitations
The main downside of SSMT (compared to presegmentation models like BPE and ULM) is its computational complexity. Our architecture (Figure 1) introduces additional computation in 2 way.
Firstly, the decoder conditions on the characterlevel history of the target sentence, so it has to process more tokens than a standard subword decoder.
Secondly, the dynamic programming algorithm
(Equation 5) requires more computations than standard MT models training on pre-segmented datasets. In practice, SSMT takes an order of magnitude (10×) longer to train than models training on a pre-segmented dataset. Dynamic decoding also adds computational complexity to testing, although this is less of an issue since test set sizes usually permit run times within a few hours.
It would depend on the practitioner to decide whether the performance boosts obtained by SSMT
justify the longer training and decoding times.
However, since SSMT is particularly strong for data scarce translation, the computational complexity might be less of an issue. For translation directions like English to Swati, training times are quite short for all models (less than a day for SSMT on subpartitions of the A100 GPU), so the increased training times are manageable.
## Acknowledgements
This work is based on research supported in part by the National Research Foundation of South Africa
(Grant Number: 129850). Computations were performed using facilities provided by the University of Cape Town's ICTS High Performance Computing team: hpc.uct.ac.za. Francois Meyer is supported by the Hasso Plattner Institute for Digital Engineering, through the HPI Research School at the University of Cape Town.
## References
Ali Araabi and Christof Monz. 2020. Optimizing transformer for low-resource neural machine translation.
In Proceedings of the 28th International Conference on Computational Linguistics, pages 3429–3435,
Barcelona, Spain (Online). International Committee on Computational Linguistics.
Ankur Bapna, Isaac Caswell, Julia Kreutzer, Orhan Firat, Daan van Esch, Aditya Siddhant, Mengmeng Niu, Pallavi Nikhil Baljekar, Xavier Garcia, Wolfgang Macherey, Theresa Breiner, Vera Saldinger Axelrod, Jason Riesa, Yuan Cao, Mia Chen, Klaus Macherey, Maxim Krikun, Pidong Wang, Alexander Gutkin, Apu Shah, Yanping Huang, Zhifeng Chen, Yonghui Wu, and Macduff Richard Hughes. 2022. Building machine translation systems for the next thousand languages. Technical report, Google Research.
J.K Chung, P.L Kannappan, C.T Ng, and P.K Sahoo.
1989. Measures of distance between probability distributions. Journal of Mathematical Analysis and Applications, 138(1):280–292.
Jonathan H. Clark, Dan Garrette, Iulia Turc, and John Wieting. 2022. Canine: Pre-training an efficient tokenization-free encoder for language representation. *Transactions of the Association for Computational Linguistics*, 10:73–91.
C. M. Downey, Fei Xia, Gina-Anne Levow, and Shane Steinert-Threlkeld. 2021. A masked segmental language model for unsupervised natural language segmentation. *arXiv:2104.07829*.
Lukas Edman, Antonio Toral, and Gertjan van Noord.
2022. Subword-delimited downsampling for better character-level translation.
Jerry A. Fodor and Zenon W. Pylyshyn. 1988. Connectionism and cognitive architecture: A critical analysis.
Cognition, 28(1):3–71.
Philip Gage. 1994. A new algorithm for data compression. *C Users J.*, 12(2):23–38.
Tanja Gaustad and Martin J. Puttkammer. 2022. Linguistically annotated dataset for four official south african languages with a conjunctive orthography:
Isindebele, isixhosa, isizulu, and siswati. Data in Brief, 41:107994.
Xuanli He, Gholamreza Haffari, and Mohammad Norouzi. 2020. Dynamic programming encoding for subword segmentation in neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3042–3051, Online. Association for Computational Linguistics.
Dieuwke Hupkes, Verna Dankers, Mathijs Mul, and Elia Bruni. 2020. Compositionality decomposed: How do neural networks generalise? (extended abstract).
In *Proceedings of the Twenty-Ninth International* Joint Conference on Artificial Intelligence, IJCAI-20, pages 5065–5069. International Joint Conferences on Artificial Intelligence Organization. Journal track.
Dieuwke Hupkes, Mario Giulianelli, Verna Dankers, Mikel Artetxe, Yanai Elazar, Tiago Pimentel, Christos Christodoulopoulos, Karim Lasri, Naomi Saphra,
Arabella Sinclair, Dennis Ulmer, Florian Schottmann, Khuyagbaatar Batsuren, Kaiser Sun, Koustuv Sinha, Leila Khalatbari, Maria Ryskina, Rita Frieske, Ryan Cotterell, and Zhijing Jin. 2022. State-of-the-art generalisation research in NLP: a taxonomy and review.
CoRR.
Kazuya Kawakami, Chris Dyer, and Phil Blunsom.
2019. Learning to discover, ground and use words with segmental neural language models. In *Proceedings of the 57th Annual Meeting of the Association for* Computational Linguistics, pages 6429–6441, Florence, Italy. Association for Computational Linguistics.
Daniel Keysers, Nathanael Schärli, Nathan Scales, Hylke Buisman, Daniel Furrer, Sergii Kashubin, Nikola Momchev, Danila Sinopalnikov, Lukasz Stafiniak, Tibor Tihon, Dmitry Tsarkov, Xiao Wang, Marc van Zee, and Olivier Bousquet. 2020. Measuring compositional generalization: A comprehensive method on realistic data. In International Conference on Learning Representations.
Najoung Kim and Tal Linzen. 2020. COGS: A compositional generalization challenge based on semantic interpretation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9087–9105, Online. Association for Computational Linguistics.
Stav Klein and Reut Tsarfaty. 2020. Getting the \#\#life out of living: How adequate are word-pieces for modelling complex morphology? In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 204–209, Online. Association for Computational Linguistics.
Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In *Proceedings of the* 2004 Conference on Empirical Methods in Natural Language Processing, pages 388–395, Barcelona, Spain. Association for Computational Linguistics.
Lingpeng Kong, Chris Dyer, and Noah A. Smith. 2016.
Segmental recurrent neural networks. In *4th International Conference on Learning Representations,*
ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings.
Julia Kreutzer and Artem Sokolov. 2018. Learning to segment inputs for NMT favors character-level processing. In *Proceedings of the 15th International* Conference on Spoken Language Translation, pages 166–172, Brussels. International Conference on Spoken Language Translation.
Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 66–75, Melbourne, Australia. Association for Computational Linguistics.
Brenden M. Lake and Marco Baroni. 2018. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks.
In *International Conference on Machine Learning*.
Jindˇrich Libovický, Helmut Schmid, and Alexander Fraser. 2022. Why don't people use character-level machine translation? In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 2470–2485, Dublin, Ireland. Association for Computational Linguistics.
Francois Meyer and Jan Buys. 2022. Subword segmental language modelling for nguni languages.
arXiv:2210.06525.
Tumi Moeng, Sheldon Reay, Aaron Daniels, and Jan Buys. 2021. Canonical and surface morphological segmentation for nguni languages. In Proceedings of the Second Southern African Conference for Artificial Intelligence Research (SACAIR), pages 125–139, Online. Springer.
Barbara Partee. 1984. Compositionality. Varieties of formal semantics, 3:281—-311.
Maja Popovic. 2015. ´ chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392–395, Lisbon, Portugal. Association for Computational Linguistics.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany. Association for Computational Linguistics.
Zhiqing Sun and Zhi-Hong Deng. 2018. Unsupervised neural word segmentation for Chinese via segmental language modeling. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language* Processing, pages 4915–4920, Brussels, Belgium.
Association for Computational Linguistics.
Yi Tay, Vinh Q. Tran, Sebastian Ruder, Jai Gupta, Hyung Won Chung, Dara Bahri, Zhen Qin, Simon Baumgartner, Cong Yu, and Donald Metzler. 2022.
Charformer: Fast character transformers via gradientbased subword tokenization. In *International Conference on Learning Representations*.
Chong Wang, Yining Wang, Po-Sen Huang, Abdelrahman Mohamed, Dengyong Zhou, and Li Deng. 2017.
Sequence modeling via segmentations. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, page 3674–3683. JMLR.org.
Xinyi Wang, Sebastian Ruder, and Graham Neubig.
2021. Multi-view subword regularization. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 473–482, Online. Association for Computational Linguistics.
Lei Yu, Jan Buys, and Phil Blunsom. 2016. Online segment to segment neural transduction. In *Proceedings* of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1307–1316, Austin, Texas. Association for Computational Linguistics.
Yi Zhu, Benjamin Heinzerling, Ivan Vulic, Michael ´
Strube, Roi Reichart, and Anna Korhonen. 2019. On the importance of subword information for morphological tasks in truly low-resource languages. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 216–226, Hong Kong, China. Association for Computational Linguistics.
Judit Ács. 2019. Exploring bert's vocabulary.
## A Next-Character Probabilities
Here we present the formulas to compute nextcharacter probabilities with the SSMT mixture model. The probability computations depend on whether the preceding character and next character continue or end subwords, so we provide definitions for all possible subword boundary conditions.
We consider the simplest case first. Given that the previously generated character at position j − 1 concludes a subword, the probability of the next subword being a single character y is
$$p_{\mathrm{end-end}}(y|\mathbf{y}_{<\mathrm{j}},\mathbf{x})=g_{j}p_{\mathrm{char}}(y,\mathrm{<cos>}\,|\,\mathbf{y}_{<\mathrm{j}},\mathbf{x})+$$ $$(1-g_{j})p_{\mathrm{lex}}(y|\mathbf{y}_{<\mathrm{j}},\mathbf{x}),\tag{6}$$
where <eos> is a special end-of-subword token. We can compute this for all y in the character vocabulary and return the top candidates for next character.
We modify this for the case where character j − 1 does not conclude a subword, but character j still does. Then character j constitutes the last character in a subword that started at an earlier character.
The probability of next character is then
$$p_{\mathrm{con-end}}(y|{\bf y}_{<\mathrm{j}},{\bf x})$$ $$=g_{j}p_{\mathrm{char}}(y,\mbox{\rm{\small COS}}>|\,{\bf y}_{\mbox{\rm{\small kj}}-1},{\bf y}_{<\mbox{\rm{k}}},{\bf x})+$$ $$(1-g_{j})p_{\mathrm{lex}}(y|{\bf y}_{\mbox{\rm{\small kj}}-1},{\bf y}_{<\mbox{\rm{k}}},{\bf x}),\tag{7}$$
where k is the starting position of the current subword (concluding at j) and yk:j−1 are the characters generated so far in the current subword.
These cases still only give us candidates for when the next character concludes a subword. We can modify equation 6 to compute the probability of the next character starting and continuing a subword as
$$\begin{array}{r l}{{p_{\mathrm{end-con}}(y|\mathbf{y}_{<\mathbf{j}},\mathbf{x})=}}&{{g_{j}p_{\mathrm{char}}(y|\mathbf{y}_{<\mathbf{j}},\mathbf{x})+}}\\ {{}}&{{(1-g_{j})\sum_{\mathbf{s}:s_{1}=y,\mathbf{s}\neq y}p_{\mathrm{lex}}(\mathbf{s}|\mathbf{y}_{<\mathbf{j}},\mathbf{x}).}}\end{array}$$
where the first mixture component is simply the probability of the next character under the character-level model (without the <eos> token).
The second component marginalises over all subwords starting with y. This considers all the possible ways in which the next subword could start with character y. It excludes the 1-character subword y (s ̸= y), since this constitutes a subword ending with character j (covered by equation 6).
Like equation 6, this covers the case in which the previous character concludes a subword. Similarly to how we generalised equation 6 to equation 7, we can generalise equation 8 to the case where character j continues a subword started at any given previous character. This produces
$$\begin{array}{l}\mbox{$p_{\rm con-con}(y|{\bf y}_{<j},{\bf x})=\ g_{j}p_{\rm char}(y|\,{\bf y}_{{\bf k}:j-1},{\bf y}_{<{\bf k}},{\bf x})+$}\\ \mbox{$(1-g_{j})\sum_{{\bf s}:s_{1}=y,{\bf s}\neq y}p_{\rm lex}({\bf s}|\,{\bf y}_{{\bf k}:j-1},{\bf y}_{<{\bf k}},{\bf x}).$}\end{array}\tag{9}$$
## B Training Details
SSMT is implemented as a sequence-to-sequence model in the fairseq library. For all our MT models we used the training hyperparameters of the fairseq transformer-base architecture4(6 encoder layers, 6 decoder layers). We extensively tuned the vocabulary sizes of our models on both English-Xhosa and English-Zulu (including separate vocabularies).
Validation performance peaked for both at a shared vocabulary of 10k subwords for the baselines. For SSMT it peaked at 5k BPE subwords for the source language and 5k subwords in the target language lexicon. We applied these vocabulary settings to the remaining languages (excluding Swati, which we tuned separately).
Our SSMT subwords have a maximum segment length of 5 characters, since this was computationally feasible and validation performance did not improve with longer subwords. We trained all our models for 25 epochs initially and then continued training until validation performance stopped improving for 5 epochs. We trained our DPE segmentation models for 20 epochs (following He et al.
4https://github.com/facebookresearch/
fairseq/blob/main/fairseq/models/ transformer/transformer_legacy.py
Model P R F1 P R F1 P R F1
| Xhosa | Zulu | Swati |
|---------|--------|---------|
BPE 18.04 14.23 15.91 24.51 17.52 20.43 9.13 3.77 5.33
ULM 31.59 22.51 26.29 31.47 20.88 25.10 **32.31** 13.72 19.26
DPE 28.82 26.16 27.43 33.01 26.36 29.31 7.97 3.72 5.08
SSMT 31.58 **41.50 35.87 33.81 39.57 36.46** 27.57 **15.49 19.83**
(2020)), so DPE required 20 epochs of training for the segmentation model, followed by 25+ epochs for the translation model. We tried sampling ULM
segmentations during training for regularisatiion, but initial experiments showed that maximising segmentations led to better validation performance.
Since models are more sensitive to hyperparameter settings in the data scarce setting (Araabi and Monz, 2020), we performed more extensive hyperparameter tuning for the extremely low-resource case of English → Swati. We tuned the number of layers and the vocabulary size (see Figure 4).
We found that smaller models (less layers) greatly improved validation performance for all models.
| Mixture beam search | Dynamic decoding | | | |
|-----------------------|--------------------|------|------|------|
| Beam size | BLEU | chrF | BLEU | chrF |
| 1 | 11.8 | 49.7 | 13.6 | 52.2 |
| 3 | 11.5 | 49.2 | 14.1 | 53.6 |
| 5 | 11.2 | 49.4 | 14.5 | 53.8 |
| 7 | 11.4 | 49.6 | 14.3 | 53.8 |
| 10 | 11.5 | 49.8 | 14.4 | 53.8 |
## C Morphological Compositional Generalisation Test Subset Extraction
For a train set V and test set W we compute the compound divergence and atom divergence, respectively as
$$D_{C}(V||W)=1-C_{0.1}(F_{C}(V)||F_{C}(W)),\tag{10}$$ $$D_{A}(V||W)=1-C_{0.5}(F_{A}(V)||F_{A}(W)),\tag{11}$$
![11_image_0.png](11_image_0.png)
where Cα(P||Q) is the Chernoff coefficient
(Chung et al., 1989). This is a measure of the similarity of 2 distributions P and Q computed as
$$C_{\alpha}(P||Q)=\sum_{k}p_{k}^{\alpha}q_{k}^{1-\alpha},\qquad\quad(12)$$
where α is a parameter that weights the importance of the distributions in the similarity metric. We follow Keysers et al. (2020) in setting α = 0.1 for compound divergence (more important to measure whether or not compounds occur in train than to measure how close the distributions are) and α =
0.5 for atom divergence (atom distributions should match as far as possible).
We implement a procedure that, given a train set V , extracts a prespecified number of sentences from a test set W, such that DC(V ||W) = D
target C
(where D
target Cis the desired compound divergence) and DA(V ||W) is held as low as possible.
The procedure starts with the empty test subset and iteratively adds one sentence from the test set. At each step, it randomly samples k sentences from the test set (we set k = 100) and adds the sentence
![12_image_0.png](12_image_0.png)
that minimises
$$|{\mathcal{D}}_{C}-{\mathcal{D}}_{C}^{\mathrm{target}}|+{\mathcal{D}}_{A},$$
$$(13)$$
C| + DA, (13)
where D
target Cis the prespecified compound divergence target for the experiment. Iteratively adding sentences that minimise equation 13 results in a test subset containing atoms (morphemes) that the model was exposed to during training, but compounds (words) that it was not. We can control the degree of compositional novelty in the test subset compounds by setting D
target Cin our procedure.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Unnumbered section after section 8.
✗ A2. Did you discuss any potential risks of your work?
We do not believe that there are significant risks to the code or models we plan to release,
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Sections 5 And 7
✓ B1. Did you cite the creators of artifacts you used?
Sections 5 and 6
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
The artifacts we use and release are all open-source and publicly available.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
The data we used was released for the WMT22 shared task, so we trust that this has already been done.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 1
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Sections 1, 5, 7
## C ✓ **Did You Run Computational Experiments?** Sections 5, 6, 7
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Sections 3, 4, 5, limitations sections, and appendix The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Sections 5 and appendix C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Not applicable. Left blank.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
datta-etal-2023-measuring | Measuring and Mitigating Local Instability in Deep Neural Networks | https://aclanthology.org/2023.findings-acl.176 | Deep Neural Networks (DNNs) are becoming integral components of real world services relied upon by millions of users. Unfortunately, architects of these systems can find it difficult to ensure reliable performance as irrelevant details like random initialization can unexpectedly change the outputs of a trained system with potentially disastrous consequences. We formulate the model stability problem by studying how the predictions of a model change, even when it is retrained on the same data, as a consequence of stochasticity in the training process. For Natural Language Understanding (NLU) tasks, we find instability in predictions for a significant fraction of queries. We formulate principled metrics, like per-sample {``}label entropy{''} across training runs or within a single training run, to quantify this phenomenon. Intriguingly, we find that unstable predictions do not appear at random, but rather appear to be clustered in data-specific ways. We study data-agnostic regularization methods to improve stability and propose new data-centric methods that exploit our local stability estimates. We find that our localized data-specific mitigation strategy dramatically outperforms data-agnostic methods, and comes within 90{\%} of the gold standard, achieved by ensembling, at a fraction of the computational cost. | # Measuring And Mitigating Local Instability In Deep Neural Networks
Arghya Datta†and **Subhrangshu Nandi**†and **Jingcheng Xu**†∗and **Greg Ver Steeg**
and **He Xie** and **Anoop Kumar** and **Aram Galstyan**
Amazon Alexa Seattle, WA, USA
{argdatta, subhrn, gssteeg, hexie, anooamzn, argalsty} @amazon.com
{xjc}@stat.wisc.edu
† Equal contribution, alphabetical order
## Abstract
Deep Neural Networks (DNNs) are becoming integral components of real world services relied upon by millions of users. Unfortunately, architects of these systems can find it difficult to ensure reliable performance as irrelevant details like random initialization can unexpectedly change the outputs of a trained system with potentially disastrous consequences. We formulate the model stability problem by studying how the predictions of a model change, even when it is retrained on the same data, as a consequence of stochasticity in the training process. For Natural Language Understanding
(NLU) tasks, we find instability in predictions for a significant fraction of queries. We formulate principled metrics, like per-sample "label entropy" across training runs or within a single training run, to quantify this phenomenon. Intriguingly, we find that unstable predictions do not appear at random, but rather appear to be clustered in data-specific ways. We study dataagnostic regularization methods to improve stability and propose new data-centric methods that exploit our local stability estimates.
We find that our localized data-specific mitigation strategy dramatically outperforms dataagnostic methods, and comes within 90% of the gold standard, achieved by ensembling, at a fraction of the computational cost.
## 1 Introduction
While training large deep neural networks on the same data and hyperparameters may lead to many distinct solutions with similar loss, we say that the model is *underspecified* (D'Amour et al., 2022).
One tangible manifestation of underspecification is that a model prediction on a single data point can change across different training runs, without any change in the training data or hyperparameter settings, due to stochasticity in the training procedure.
This extreme sensitivity of model output, which has been termed as model variance/instability or *model*
∗ Work done as an intern at Amazon Alexa jitter/churn (Hidey et al., 2022; Milani Fard et al.,
2016), is highly undesirable as it prohibits comparing models across different experiments (Dodge et al., 2019). We refer to this problem as *local* instability 1, a term that highlights our focus on the non-uniformity of instability across data points.
Local instability can lead to highly undesirable consequences for deployed industrial systems, as it can cause inconsistent model behavior across time, eroding trust on AI systems (Dodge et al.,
2020; D'Amour et al., 2020a). The problem is further exacerbated by the fact that industry models are typically more complex and trained on diverse datasets with potentially higher proportion of noise.
| Utterance (gold | pˆ[min−max] , | Label |
|----------------------------------------|-----------------|-------------------------------------------------|
| label) | σm(ˆp) | predictions over 50 runs |
| funny joke (general) | [0.98-0.99], | general:50 |
| 0.003 (low) | | |
| start house | [0.002-0.97], | lists:26, IOT:6, |
| cleanup (IOT) | 0.17 (high) | general:6, play:5, news:3, social:1, calendar:1 |
| search for gluten free menus (cooking) | [0.002-0.693], | lists:28, takeaway:18, social:1, |
| 0.06 (low) | music:1, cooking:1, play:1 | |
Table 1: Variability in predictions of utterances from Massive dataset. This shows different predictions over 50 model runs with different seeds. pˆ is the prediction score on gold labels and σm is the standard deviation over multiple model outputs pˆ1*, . . . ,* pˆ50. For example, start house cleanup with gold label IOT is predicted to label *lists* 26 out of the 50 model runs. Its prediction score on IOT ranges between 0.002 and 0.97. green:
low variability, predictions match gold label, red: high predicted label flipping or switching.
Table 1 shows examples of local instability for a domain classification problem, where we used a pre-trained language model DistilBERT (Sanh et al., 2019) to train 50 independent classifiers 1We use *local instability* to mean *local model instability*
(with random initial conditions) on Massive dataset
(FitzGerald et al., 2022). It shows that a validation set utterance *start house cleanup* with gold label IOT gets assigned seven different predicted labels over the 50 runs, with the predicted confidence on gold label pˆ ranging between 0.002 and 0.97, with high σm (the standard deviation of {pˆi}
50 i=1) of 0.17.
In comparison, *search for gluten free menus* gets 6 different predicted labels over 50 runs, with a relatively low σm of 0.06. The differences in stability across examples demonstrates that the phenomenon is localized to certain data points. See Figures 4 and 5 in Appendix. Examples in table 1 also highlight that variability in confidence is not perfectly aligned with stability of predictions.
Measuring Local Model Instability While detecting and quantifying local instability across multiple runs that is trivial for toy problems, it becomes infeasible when dealing with much larger industrial datasets. Previous research (Swayamdipta et al.,
2020) suggested the use of single-run training dynamics to estimate the variance in prediction scores over multiple epochs for a model. However, as shown in Table 1, low prediction variance does not always lead to less label switching, which is the defining feature of local instability. Instead, here we introduced *label switching entropy* as a new metric for characterizing local instability. Furthermore, we have demonstrated that label switching entropy calculated over different training epochs of a single run is a good proxy for label switching over multiple runs, so that data points with high prediction instability over time also exhibit high instability across multiple training runs.
Mitigating Local Model Instability One straightforward strategy of mitigating local instability is to train an ensemble of N models and average their weights or their predictions.
Unfortunately, ensembling neural networks such as large language models is often computationally infeasible in practice, as it requires multiplying both the training cost and the test time inference cost by a factor of N. Therefore, we proposed and compared more economical and computationally feasible solutions for mitigating local instability.
Here we proposed a more efficient smoothingbased approach where we train a pair of models.
The first (teacher) model is trained using the onehot encoded gold labels as the target variable. Once the model has converged and is no longer in the transient learning regime (after N training or optimization steps), we compute the temporal average predicted probability vector over K classes after each optimization step, which is then adjusted by temperature T to obtain the smoothed predicted probability vector. A student model is then trained using these "soft" labels instead of the one-hot encoded gold labels. We call this Temporal Guided Temperature Scaled Smoothing (TGTSS). TGTSS
allows local mitigation of local instability as each datapoint is trained to its unique label in the student model. In contrast to existing methods such stochastic weight averaging (Izmailov et al., 2018)
or regularizing options such as adding L2-penalty, TGTSS significantly outperforms existing methods and reaches within 90% of the gold standard of ensemble averaging.
We summarize our contributions as follows:
- We propose a new measure of local instability that is computationally efficient and descriptive of actual prediction changes.
- We introduce a data-centric strategy to mitigate local instability by leveraging temporally guided label smoothing.
- We conduct extensive experiments with two public datasets and demonstrate the effectiveness of the proposed mitigation strategy compared to existing baselines.
2 Related work Sophisticated, real-world applications of Deep Neural Networks (DNNs) introduce challenges that require going beyond a myopic focus on accuracy.
Uncertainty estimation is increasingly important for deciding when a DNN's prediction should be trusted, by designing calibrated confidence measures that may even account for differences between training and test data (Nado et al., 2021).
Progress on uncertainty estimation is largely orthogonal to another critical goal for many engineered systems: *consistency* and *reliability*. Will a system that works for a particular task today continue to work in the same way tomorrow?
One reason for inconsistent performance in realworld systems is that even if a system is re-trained with the same data, predictions may significantly change, a phenomenon that has been called model churn(Milani Fard et al., 2016). The reason for this variability is that neural networks are underspecified (D'Amour et al., 2020b), in the sense that there are many different neural networks that have nearly equivalent average performance for the target task. While randomness could be trivially removed by fixing seeds, in practice tiny changes to data will still significantly alter stochasticity and results. We will explore the case of altering training data in future studies. Studying how stochasticity affects model churn addresses a key obstacle in re-training engineered systems while maintaining consistency with previous results.
The most common thread for reducing model churn focuses on adding constraints to a system so that predictions for re-trained system match some reference model. This can be accomplished by adding hard constraints (Cotter et al., 2019) or distillation (Milani Fard et al., 2016; Jiang et al., 2021; Bhojanapalli et al., 2021).
We adopt a subtly different goal which is to train at the outset in a way that reduces variability in predictions due to stochasticity in training. Previous research (Hidey et al., 2022) have suggested a co-distillation procedure to achieve this. Label smoothing, which reduces over-confidence (Müller et al., 2019), has also been suggested to reduce variance, with a local smoothing approach to reduce model churn appearing in (Bahri and Jiang, 2021).
A distinctive feature of our approach is a focus on how properties of the data lead to instability. Inspired by dataset cartography (Swayamdipta et al.,
2020) which explored variance in predictions over time during training of a single model, we investigate how different data points vary in predictions across training runs. Non-trivial patterns emerge, and we use sample-specific instability to motivate a new approach to reducing model churn.
Our work draws connections between model stability and recent tractable approximations for Bayesian learning (Izmailov et al., 2018; Maddox et al., 2019). Recent Bayesian learning work focuses on the benefits of Bayesian model ensembling for confidence calibration, but an optimal Bayesian ensemble would also be stable. Bayesian approximations exploit the fact that SGD training dynamics approximate MCMC sampling, and therefore samples of models over a single training run can approximate samples of models across training runs, although not perfectly (Fort et al., 2019; Wenzel et al., 2020; Izmailov et al., 2021).
We have studied connections between prediction variability within a training run and across training runs, and used this connection to devise practical metrics and mitigation strategies.
Similar to BANNs (Furlanello et al., 2018), our teacher and corresponding student models use the same model architecture with same no. of parameters rather than using a high-capacity teacher model, however, unlike BANNS, our work is geared towards addressing model instability. Architecturally, our methodology (TGTSS) uses a temperature scaled temporally smoothed vector that is obtained from the last N checkpoints from the teacher model instead of the finalized teacher model and not use the annotated labels for the utterances.
## 3 Model Instability Measurement
The examples in Table 1 show that re-training a model with different random seeds can lead to wildly different predictions. The variance of predictions across models, σ 2m, is intuitive, but is expensive to compute and does not necessarily align with user experience since changes in confidence may not change predictions. A changed prediction, on the other hand, may break functionality that users had come to rely on. Hence we want to include a metric which measures how often predictions change.
Therefore, we computed the label switching entropy. Given a setup with training data {xi, yi} ∈
X where X are utterances, y ∈ {1*, ..., K*} are the corresponding gold labels, the multi-run Label Entropy (LEm) over N independent runs for an utterance xi can be computed as,
$$L E_{m}^{(i)}=\sum_{k=1}^{K}-\frac{n_{k}^{(i)}}{N}\log(\frac{n_{k}^{(i)}}{N})\qquad\quad(1)$$
where, nk is the number of times utterance i was predicted to be in class k across N models trained with different random seeds. For example, if an utterance gets labeled to three classes A, B and C for 90%, 5% and 5% of the time respectively, then its multi-run label entropy (LE(i)
m ) will be
−(0.9∗log(0.9)+0.05∗log 0.05+0.05 log 0.05) =
0.39. Similarly, an utterance that is consistently predicted to belong to one class over N runs will have a LE(i)
m of 0 (even if it is consistently put in the *wrong* class). We can compute the overall LEm by averaging LE(i)
m for all the utterances.
Empirically, we also observed a relatively strong linear relationship between LEm and σm (Figure 1).
![3_image_0.png](3_image_0.png)
Since computing LEm is computationally expensive due to training N independent models, we propose using single-run Label *Entropy* (LEs) that can
![3_image_1.png](3_image_1.png)
be computed over a single model run. Mathematically, the formula for label entropy stays consistent for both multi-run and single-run, however, LEs is computed across different model checkpoints.
In our analyses, we computed LEs by accumulating the predicted class after each optimization step, where as LEm was computed by accumulating the final predicted class across N models on the validation set.
Empirically, we found that there exists a strong linear relationship between LEs and LEm (Figure 2). This demonstrates that utterances which suffer from local instability across multiple independent runs exhibit similar instability across multiple optimization steps for a single model. This finding supports our hypothesis that LEs is a suitable proxy for LEm in real world production settings for NLU
systems.
## 4 Model Instability Mitigation
In our study, we have explored 3 baseline mitigation strategies to address model instability: ensembling, stochastic weight averaging (SWA) and uniform label smoothing. These methodologies have been used in numerous other works to improve generalization as well as predictive accuracy across a diverse range of applications. Performance of the ensembling strategy serves as our upper bound in reducing model instability. We propose a novel model instability mitigation strategy, temporal guided temperature scaled label smoothing, that is able to recover 90% of the reduction in model instability as ensembling at a fraction of model training time and computational cost. We describe all the mitigation strategies below.
## 4.1 Ensemble Averaging And Regularizing
In this setting, we trained N independent models, initialized with different random seeds, using the standard cross-entropy loss, computed between the ground truth labels and the predicted probability vector. For every utterance in the test set, we recorded the mean predicted probability of the gold label, the predicted label and our proposed local instability metric, label entropy, across N models.
We also trained another baseline by leveraging L2 regularization. No other mitigation strategies were used in the process since our aim was to emulate the current model training scenario in natural language understanding(NLU) production settings.
## 4.2 Stochastic Weight Averaging
Stochastic weight averaging(SWA) (Izmailov et al.,
2018) is a simple yet effective model training methodology that improves generalization performance in deep learning networks. SWA performs an uniform average of the weights traversed by the stochastic gradient descent based optimization algorithms with a modified learning rate. In our implementation, we equally averaged the weights at the end of the last two training epochs. We also explored equal averaging of weights from two randomly selected epochs out of the final 3 epochs but that strategy did not yield better results. We left the work of using a modified learning rate to a future study with a significantly larger training dataset.
## 4.3 Label Smoothing
Label smoothing (Szegedy et al., 2016) is a popular technique to improve performance, robustness and calibration in deep learning models. Instead of using "hard" one-hot labels when computing the cross-entropy loss with the model predictions, label smoothing introduces "soft" labels that are essentially a weighted mixture of one-hot labels with the uniform distribution. For utterances {xi, yi} where y ∈ {1*, ..., K*} for K classes, the new "soft" label is given by y LS = (1 − α) ∗ y + α/K where α is the label smoothing parameter. The "soft" labels are then used in the softmax cross-entropy loss.
## 4.4 Ensemble Baseline
To obtain consistent predictions with low local instability, ensembling is often utilized as the default mitigation strategy. Given a problem setup with training data {xi, yi} ∈ X where X are utterances, y ∈ {1*, ..., K*} are the corresponding gold labels, then intuitively, ensembling over N independent models,where N is sufficiently large, will converge to the average predicted probability by the law of large numbers. Hence, using a sufficiently large ensemble of independently trained models would give stable predictions in general.
In our study, we used ensembling to aggregate
(uniform average) predictions for each utterance across N independently trained models. Each model was trained using the softmax cross-entropy loss between the predicted logit vector zi over K
classes and the one-hot encoded vector representing the gold label. For an utterance xi, the uniform average predicted probability vector p¯i across N
models over all class K (softmax probability vector of length K) is adjusted by a temperature T, to obtain the smoothed predicted probability vector qi:
$$q_{i}={\frac{{\bar{p}}_{i}^{\;T}}{\sum_{k=1}^{K}{\bar{p}}_{k}^{\;T}}}$$
T(2)
The temperature T can be used to control the entropy of the distribution. The smoothed probability vector q is now used as the "soft" labels to train a model instead of the "hard" one hot encoded gold labels and the resultant model is robust to local instability. One challenge for ensembling is that it requires training, storing and running inference on a large number of models which is often infeasible for large scale NLU systems.
## 4.5 Temporal Guided Temperature Scaled Smoothing (Tgtss)
Since ensembling is infeasible for large models in practice, we propose temporal guided label smoothing that does not require training large ensembles to compute the soft labels.
In this setup, we train a pair of models as opposed to training a large ensemble of models. The first (teacher) model is trained using the one-hot encoded gold labels as the target. Once the model has converged and is no longer in the transient training state (after N training or optimization steps), we compute the uniform average predicted probability vector (p¯i) after each optimization step of the model, which is then adjusted by temperature T to obtain the smoothed predicted probability vector qi using eqn.(2). A suitable N can be chosen by looking at the cross-entropy loss curve for the validation dataset. The second (student) model is now trained using qi as the "soft" label instead of the one-hot encoded gold labels.
The significant advantage of TGTSS over ensembling is that it does not require training, storing, or inferring over large ensembles. A key feature of TGTSS is that it uniformly averages predictions over numerous training steps instead of averaging predictions over numerous independent models.
This saves the cost of training multiple models.
Moreover, we never need to store multiple models for TGTSS since we can store a running average of the predictions over time. Finally, at inference time we only need to call a single model (the trained student model), as opposed to N models for the ensemble.
## 5 Experimental Setup And Results For Mitigation 5.1 Base Model Architecture
$$\mathbf{\Omega}(2)$$
For all our experiments, we used DistilBERT (Sanh et al., 2019) as the pre-trained language model.
We used the implementation of *DistilBERT-baseuncased* from the *Huggingface* library by leveraging *AutoModelForSequenceClassification*. The pretrained language model is then fine-tuned on the benchmark datasets by using the training set. DistilBERT is a widely used pre-trained language model that is currently used in production in many large scale NLU systems. One key advantage of using DistilBERT is that it is able to recover more than 90% performance of the larger *BERT-base-uncased* model while using 40% less parameters on the GLUE language understanding benchmark (Wang et al., 2018). Using other BERT models as the pretrained language model was outside the scope of this study.
## 5.2 Datasets
To study local instability and compare different mitigation strategies, we used two open source benchmark datasets (Table 2): Massive and Clinc150.
- Massive: Massive (FitzGerald et al., 2022)
dataset is an open source multilingual NLU
dataset from Amazon Alexa NLU system consisting of 1 million labeled utterances spanning 51 languages. For our experiments, we only used the *en-US* domain utterances for domain classification task across 18 domains
(alarm, audio, general, music, recommendation, etc.).
- Clinc150 DialoGLUE: Clinc150 (Larson et al., 2019) is an open source dataset from DialoGLUE (Mehri et al., 2020), a conversational AI benchmark collection. We utilized Clinc150 for intent classification task across 150 intents (translate, transfer, timezone, taxes, etc).
| Attribute | MASSIVE | CLINC150 |
|---------------------|-----------|---------------------|
| Source | Amazon | DialoGLUE |
| Alexa AI | | |
| Domains | 18 | - |
| Intents | 60 | 150 |
| Train | 11,514 | 15,000 |
| Holdout(Unseen) | 2974 | 3,000 |
| Balanced? | No. | Yes. 100 per intent |
| Classification task | Domain | Intent |
Table 2: Benchmark dataset statistics
## 5.3 Training And Evaluation Protocol
We compared the performance of our proposed mitigation strategy, *temporal guided temperature* scaled smoothing (TGTSS), with other baseline mitigation strategies such as ensembling averaging, L2 regularization, uniform label smoothing, SWA
and ensembling. We trained 50 independent models with the same hyper-parameters for each mitigation strategy using different random initialization seeds. We reported the Mean ± SD of domain classification accuracy for the Massive dataset and Mean ± SD of intent classification accuracy for the Clinc150 dataset. For both the datasets, we also reported the percentage reduction in LEm when compared to the control baseline over 50 independent model runs for all the utterances as well as for high label entropy utterances whose label entropy was over 0.56 in the control baseline. For each method, we computed the sum of LEm over all the N utterances in the test set as PN
i=1 LEmi
. The
∆LEm is then computed as the percentage reduction among these values for each method and the control baseline. We did similar computations for
∆LEs in Table 4.
The LEm value 0.56 for an utterance indicates that if the utterance was assigned to 2 different labels over 50 independent model runs, then its membership is split 75%-25% between the two labels. A lower value of label entropy indicates better model robustness and consequently, lower local instability. An utterance will have LEm = 0 if it is consistently predicted to be the same label across 50 independent model runs. All the results for both the benchmark datasets have been reported on an unseen holdout set. A model having high overall accuracy and low label entropy is usually preferred.
## 5.3.1 Hyper-Parameters
In our empirical analyses, all the models across different mitigation strategies were trained using the ADAM (Kingma and Ba, 2014) optimizer with a learning rate of 0.0001. For both the benchmark datasets, all the models were trained for 5 epochs with a batch size of 256. For the control baseline with L2 regularization, we selected a weight decay value of 0.001. For the ensemble baseline, we selected N as 200 i.e. the pre-temperature scaled
"soft" labels were computed after uniformly averaging outputs from 200 independent models for each utterance in the training set. In the uniform label smoothing mitigation strategy, we used α as 0.5 for the Clinc150 dataset and α as 0.1 for the Massive dataset. For SWA, we equally averaged the model weights after the last 2 epochs. For experiments using temporal guided temperature scaled smoothing on the Clinc150 dataset, we used N as 200 where as for the Massive dataset, we set N as 180. This indicates that model outputs after first 200 training or optimization steps were recorded for the Clinc150 dataset and uniformly averaged for each utterance before temperature scaling. Similarly, for the Massive dataset, model outputs were recorded after 180 training steps. For both the ensemble guided and temporal guided temperature scaled smoothing mitigation strategies, we set the temperature T at 0.5.
## 5.4 Results & Discussion
We compared the proposed mitigation strategy with other baselines described in Section 4.1. We highlight the effectiveness of our proposed local instability metric, *label entropy*, in capturing local instability over 50 independent model runs as well as a single model run.
## Ensemble Is The Best Mitigation Strategy
In our empirical analyses, we found that ensemble baseline is often the best performing mitigation strategy in terms of both model accuracy and LEm for both the benchmark datasets(Table 3).
## Tgtss Is Comparable To Ensembing At A Fraction Of Computation Cost
We found that TGTSS is able to recover about 91% of the performance of ensembling in the multirun experiments. TGTSS trains only one teacherstudent pair and drastically reduces the computational cost of ensembling. Hence, it is much more feasible to deploy TGTSS in production NLU systems. We also found that TGTSS is significantly better than model-centric local instability mitigation strategies such as SWA and L2 regularization.
However, as mentioned in Section 4.5, TGTSS
computes "soft" labels across multiple optimization steps which leads to multiple inference cycles.
In our experiments, we ran inference after each optimization step once the model is no longer in the transient training state. However, it may be possible to further reduce the number of inference cycles by running inference after every X optimization steps and this is left for future studies.
## Efficacy Of Single Run Label Entropy (Les**) As A** Local Instability Metric
In Table 3, we demonstrated how TGTSS is able to reduce local instability in terms of our proposed metric LEm over multiple independent runs of the model and recover 91% of the performance of ensembling. We proposed LEs as a more practical metric for local instability. We showed that TGTSS
is still able to recover more than 90% of the performance of ensembling for the Clinc150 and the Massive datasets (Table 4). For high LEm utterances in the control baseline, TGTSS was able to considerably reduce LEs (Appendix Table 6).
In figure 3, we can observe that TGTSS significantly reduces variation in prediction scores compared to the control baseline. In the top panels, we see utterances that are easy to learn and the classifier converged to the gold label within 2 epochs. In bottom panels, we see utterances that exhibit high variation in prediction scores through the training process, and consequently, high LEs. Post mitigation by TGTSS, the bottom right panel shows the significant reduction in prediction score variation and LEs. Figure 8 in Appendix shows more examples of reduction in LEs over the course of training.
## Global Label Smoothing Is Not As Effective
In our empirical analyses, we found that uniform label smoothing reduces local instability by 7-9%
compared to the control baseline but falls short of ensembling and TGTSS. Label smoothing involves computing a weighted mixture of hard targets with the uniform distribution, where as both ensembling and TGTSS uses the model's average predictions over multiple runs and multiple optimization steps, respectively. Tuning the smoothing factor(α) did not improve model stability in terms of label entropy.
## Importance Of Temperature Scaling For Tgtss
We conducted ablation studies to understand how temperature scaling affects the performance of TGTSS. Temperature scaling uses a parameter T < 1 for all the classes to scale the uniformly averaged predictions. We found that the proposed methodology reduces label entropy by 17.5% over the control baseline without temperature scaling for the Massive dataset on the validation set (31.5%
reduction with temperature scaling). This also indicates that temporal uniform averaging is independently able to significantly reduce label entropy.
6 Conclusion In this work, we studied the problem of model instability/churn in deep neural networks in the context of large scale NLU systems. Assigning different labels to the same training data over multiple training runs can be detrimental to many applications based on DNNs. We noticed that the instability of model predictions are non-uniform over the data, hence we call it local instability. We proposed a new metric, *label switching entropy*, that is able to quantify model instability over multi-runs as well as a single training run. We also introduced Temporal Guided Temperature Scaled Smoothing that reduces model
| Massive | Clinc150 | | | | | |
|------------------------|-------------|-----------|---------|-------------|-----------|---------|
| Methods | Accuracy(%) | ∆LEm(%) ↑ | % of Eb | Accuracy(%) | ∆LEm(%) ↑ | % of Eb |
| Control baseline | 90.6 ± 0.6 | - | - | 95.1 ± 0.8 | - | - |
| Ensemble baseline (Eb) | 91.3 ± 0.5 | 34.5 | - | 95.4 ± 0.6 | 31.1 | - |
| L2 Regularization | 90.3 ± 0.5 | -2.3 | -7 | 94.9 ± 0.7 | -0.6 | -2 |
| SWA | 91.0 ± 0.5 | 17.6 | 51 | 95.2 ± 0.7 | 7.3 | 23 |
| Label Smoothing | 90.8 ± 0.5 | 5.7 | 17 | 95.2 ± 0.8 | 6.1 | 20 |
| TGTSS (Ours) | 91.3 ± 0.6 | 31.4 | 91 | 95.3 ± 0.8 | 26.7 | 86 |
![7_image_0.png](7_image_0.png)
Predicted probabilities
| ∆LEs (%) ↑ | | |
|-------------------|---------|----------|
| Methods | Massive | Clinc150 |
| Label Smoothing | 37.9 | 40.5 |
| Ensemble baseline | 55.5 | 61.7 |
| TGTSS (Ours) | 53.4 | 55.9 |
churn by a considerable margin. We have shown in experiments that TGTSS is able to recover up to 91% of the performance of ensembling at a fraction of computational cost for training and storing, thereby providing a viable alternative to ensembling in large scale production systems. Future directions of research include expanding our analyses to multi-modal datasets and further dissecting the root causes behind local model instability.
## Limitations
Even though our proposed methodology, TGTSS,
was able to significantly reduce model instability, there is still a gap in performance with the gold standard ensembling techniques. More work needs to be done to bridge this gap. In our empirical analysis, we used two open source datasets, Massive and Clinc150. Both these datasets are small and may not represent the complexity in real world production datasets which may contain substantially large noise. In our proposed methodology, we train a pair of models successively, a teacher and a student, which is significantly better than ensembling in terms of computational cost. However, this setup may still be challenging in many sophisticated real world production NLU systems. More work needs to be done to reduce the computational complexity of training and inference for these systems.
## Ethics Statement
The authors foresee no ethical concerns with the research presented in this work.
## Acknowledgement
The authors would like to thank the anonymous reviewers and area chairs for their suggestions and comments.
## References
Dara Bahri and Heinrich Jiang. 2021. Locally adaptive label smoothing for predictive churn. arXiv preprint arXiv:2102.05140.
Srinadh Bhojanapalli, Kimberly Wilber, Andreas Veit, Ankit Singh Rawat, Seungyeon Kim, Aditya Menon, and Sanjiv Kumar. 2021. On the reproducibility of neural network predictions. *arXiv preprint arXiv:2102.03349*.
Andrew Cotter, Heinrich Jiang, Maya R Gupta, Serena Wang, Taman Narayan, Seungil You, and Karthik Sridharan. 2019.
Optimization with non-differentiable constraints with applications to fairness, recall, churn, and other goals. *J. Mach.*
Learn. Res., 20(172):1–59.
Alexander D'Amour, Katherine Heller, Dan Moldovan, Ben Adlam, Babak Alipanahi, Alex Beutel, Christina Chen, Jonathan Deaton, Jacob Eisenstein, Matthew D. Hoffman, Farhad Hormozdiari, Neil Houlsby, Shaobo Hou, Ghassen Jerfel, Alan Karthikesalingam, Mario Lucic, Yian Ma, Cory McLean, Diana Mincu, Akinori Mitani, Andrea Montanari, Zachary Nado, Vivek Natarajan, Christopher Nielson, Thomas F. Osborne, Rajiv Raman, Kim Ramasamy, Rory Sayres, Jessica Schrouff, Martin Seneviratne, Shannon Sequeira, Harini Suresh, Victor Veitch, Max Vladymyrov, Xuezhi Wang, Kellie Webster, Steve Yadlowsky, Taedong Yun, Xiaohua Zhai, and D. Sculley. 2022. Underspecification presents challenges for credibility in modern machine learning. *Journal of Machine Learning Research*,
23(226):1–61.
Jesse Dodge, Suchin Gururangan, Dallas Card, Roy Schwartz, and Noah A. Smith. 2019. Show your work: Improved reporting of experimental results. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2185–2194, Hong Kong, China. Association for Computational Linguistics.
Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah Smith. 2020. Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping. *arXiv preprint arXiv:2002.06305*.
Alexander D'Amour, Katherine Heller, Dan Moldovan, Ben Adlam, Babak Alipanahi, Alex Beutel, Christina Chen, Jonathan Deaton, Jacob Eisenstein, Matthew D Hoffman, et al. 2020a. Underspecification presents challenges for credibility in modern machine learning. *Journal of Machine Learning Research*.
Alexander D'Amour, Katherine Heller, Dan Moldovan, Ben Adlam, Babak Alipanahi, Alex Beutel, Christina Chen, Jonathan Deaton, Jacob Eisenstein, Matthew D Hoffman, et al. 2020b. Underspecification presents challenges for credibility in modern machine learning. *Journal of Machine Learning Research*.
Jack FitzGerald, Christopher Hench, Charith Peris, Scott Mackie, Kay Rottmann, Ana Sanchez, Aaron Nash, Liam Urbach, Vishesh Kakarala, Richa Singh, et al. 2022. Massive: A 1m-example multilingual natural language understanding dataset with 51 typologically-diverse languages.
arXiv preprint arXiv:2204.08582.
Stanislav Fort, Huiyi Hu, and Balaji Lakshminarayanan. 2019.
Deep ensembles: A loss landscape perspective. *arXiv* preprint arXiv:1912.02757.
Tommaso Furlanello, Zachary Lipton, Michael Tschannen, Laurent Itti, and Anima Anandkumar. 2018. Born again neural networks. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of *Proceedings of Machine Learning Research*, pages 1607–1616.
PMLR.
Christopher Hidey, Fei Liu, and Rahul Goel. 2022. Reducing model jitter: Stable re-training of semantic parsers in production environments. *arXiv preprint arXiv:2204.04735*.
Pavel Izmailov, Dmitrii Podoprikhin, T. Garipov, Dmitry P.
Vetrov, and Andrew Gordon Wilson. 2018. Averaging weights leads to wider optima and better generalization.
ArXiv, abs/1803.05407.
Pavel Izmailov, Sharad Vikram, Matthew D Hoffman, and Andrew Gordon Gordon Wilson. 2021. What are bayesian neural network posteriors really like? In International conference on machine learning, pages 4629–4640. PMLR.
Heinrich Jiang, Harikrishna Narasimhan, Dara Bahri, Andrew Cotter, and Afshin Rostamizadeh. 2021. Churn reduction via distillation. *arXiv preprint arXiv:2106.02654*.
Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. *International Conference on* Learning Representations.
Stefan Larson, Anish Mahendran, Joseph J. Peper, Christopher Clarke, Andrew Lee, Parker Hill, Jonathan K. Kummerfeld, Kevin Leach, Michael A. Laurenzano, Lingjia Tang, and Jason Mars. 2019. An evaluation dataset for intent classification and out-of-scope prediction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 1311–1316, Hong Kong, China. Association for Computational Linguistics.
Wesley J Maddox, Pavel Izmailov, Timur Garipov, Dmitry P
Vetrov, and Andrew Gordon Wilson. 2019. A simple baseline for bayesian uncertainty in deep learning. *Advances in* Neural Information Processing Systems, 32.
Shikib Mehri, Mihail Eric, and Dilek Z. Hakkani-Tür. 2020.
Dialoglue: A natural language understanding benchmark for task-oriented dialogue. *ArXiv*, abs/2009.13570.
Mahdi Milani Fard, Quentin Cormier, Kevin Canini, and Maya Gupta. 2016. Launch and iterate: Reducing prediction churn. *Advances in Neural Information Processing Systems*, 29.
Rafael Müller, Simon Kornblith, and Geoffrey E Hinton. 2019.
When does label smoothing help? Advances in neural information processing systems, 32.
Zachary Nado, Neil Band, Mark Collier, Josip Djolonga, Michael W Dusenberry, Sebastian Farquhar, Qixuan Feng, Angelos Filos, Marton Havasi, Rodolphe Jenatton, et al.
2021. Uncertainty baselines: Benchmarks for uncertainty & robustness in deep learning. *arXiv preprint* arXiv:2106.04015.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert:
smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
Swabha Swayamdipta, Roy Schwartz, Nicholas Lourie, Yizhong Wang, Hannaneh Hajishirzi, Noah A Smith, and Yejin Choi. 2020. Dataset cartography: Mapping and diagnosing datasets with training dynamics. arXiv preprint arXiv:2009.10795.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In *Proceedings* of the IEEE conference on computer vision and pattern recognition, pages 2818–2826.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multitask benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations.
Florian Wenzel, Kevin Roth, Bastiaan S Veeling, Jakub Swi ˛atkowski, Linh Tran, Stephan Mandt, Jasper Snoek, ´
Tim Salimans, Rodolphe Jenatton, and Sebastian Nowozin.
2020. How good is the bayes posterior in deep neural networks really? *arXiv preprint arXiv:2002.02405*.
## A Appendix A.1 Variance Confidence Plots
We have plotted the mean confidence and the variance of utterances in the validation dataset for both the Massive (Figure 4) and Clinc150 (Figure 5) datasets. From our analysis, we see that there are utterances that exhibit high variance and medium confidence (around 0.5) which often leads to predicted label flips or model churn over multiple training runs of the model. We also see that there are utterances that possess low confidence corresponding to the gold label and has very low variance. These utterances are probably annotation errors.
The bulk of the utterances have high confidence on average corresponding to the gold label and low confidence which signifies that the model predictions are mostly consistent on these utterances.
Figure 4: Plot of multi-run confidence(µm) and standard
![9_image_0.png](9_image_0.png)
![9_image_1.png](9_image_1.png)
deviations(σm) of prediction scores for Massive data
(validation dataset), from the domain classifier model Figure 5: Plot of multi-run confidence(µm) and standard deviations(σm) of prediction scores for Clinc150 data
(validation dataset), from the intent classifier model
## A.2 Relationship Between Les, Lem And Μm
As shown earlier in the massive dataset, there is a strong relationship between LEm and µm. We observe a similar trend in the Clinc150 dataset as well (Figure 7). We also observe a similar relationship between single run and multiple run label entropy (LE) for Clinc150 dataset (Figure 6). This finding supports our analysis that label entropy is a suitable proxy for model churn.
## A.3 Lem & Les **Reduction For High Entropy** Samples
We computed the percentage reduction in LEm and LEs post mitigation for utterances that have high LEm in the control baseline. In our empirical studies, we showed that TGTSS
was able to considerably reduce LEm and LEs across multirun and single-run experiments when compared to the gold standard ensembling (Appendix Tables 5, 6).
## A.4 Label Entropy Over Optimization Steps
We have used LEs as a suitable proxy for LEm. In Figure 8, we provide empirical evidence that our proposed methodology, TGTSS, was able to reduce label entropy as the model is trained over multiple optimization steps. We computed cumulative label entropy till optimization step T and observed that
| ∆LEm (%) ↑ | | |
|-------------------|---------|----------|
| Methods | Massive | Clinc150 |
| Control baseline | - | - |
| Ensemble baseline | 27.4 | 24.1 |
| L2 Reg. | 3.8 | 4.2 |
| SWA | 11.3 | 4.3 |
| Label Smoothing | 5.4 | 8.1 |
| TGTSS (Ours) | 26 | 22.4 |
![10_image_0.png](10_image_0.png)
![10_image_1.png](10_image_1.png)
Table 5: Empirical analyses highlights TGTSS reduces LEm for high LEm samples of the control baseline by a considerable margin in multi-run experiments. The column ∆LEm(%) ↑ is computed as percentage reduction between the sum of per-utterance LEm for each method and that of the control baseline. A higher value indicates greater reduction in LEm over control baseline.
as the model was being trained, the label entropy of some of the utterances dropped closer to 0.
| ∆LEs (%) ↑ | | |
|-------------------|---------|----------|
| Methods | Massive | Clinc150 |
| Label Smoothing | 14.9 | 20.7 |
| Ensemble baseline | 36.4 | 40.7 |
| TGTSS (Ours) | 31.5 | 33.6 |
![11_image_0.png](11_image_0.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
section 7 (Limitations)
✓ A2. Did you discuss any potential risks of your work?
section 7 (Limitations)
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract, Introduction (section 1)
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4, 5
✓ B1. Did you cite the creators of artifacts you used?
section 9 (References)
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
In section 5, we have cited the public datasets that were used for this research.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 5. Results section
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 5 Results
## C ✓ **Did You Run Computational Experiments?** Section 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 4 and 5 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4 and 5
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 5 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
zhu-etal-2023-knowledge | What Knowledge Is Needed? Towards Explainable Memory for k{NN}-{MT} Domain Adaptation | https://aclanthology.org/2023.findings-acl.177 | kNN-MT presents a new paradigm for domain adaptation by building an external datastore, which usually saves all target language token occurrences in the parallel corpus. As a result, the constructed datastore is usually large and possibly redundant. In this paper, we investigate the interpretability issue of this approach: what knowledge does the NMT model need? We propose the notion of local correctness (LAC) as a new angle, which describes the potential translation correctness for a single entry and for a given neighborhood. Empirical study shows that our investigation successfully finds the conditions where the NMT model could easily fail and need related knowledge. Experiments on six diverse target domains and two language-pairs show that pruning according to local correctness brings a light and more explainable memory for kNN-MT domain adaptation. |
## What Knowledge Is Needed? Towards Explainable Memory For K**Nn-Mt** Domain Adaptation
Wenhao Zhu1,2, Shujian Huang1,2, Yunzhe Lv1,2, Xin Zheng1,2**, Jiajun Chen**1,2 1 National Key Laboratory for Novel Software Technology, Nanjing University, China 2 Collaborative Innovation Center of Novel Software Technology and Industrialization
{zhuwh,lvyz,zhengxin}@smail.nju.edu.cn, {huangsj,chenjj}@nju.edu.cn
## Abstract
kNN-MT presents a new paradigm for domain adaptation by building an external datastore, which usually saves all target language token occurrences in the parallel corpus. As a result, the constructed datastore is usually large and possibly redundant. In this paper, we investigate the interpretability issue of this approach: what knowledge does the NMT model need? We propose the notion of local correctness (LAC) as a new angle, which describes the potential translation correctness for a single entry and for a given neighborhood. Empirical study shows that our investigation successfully finds the conditions where the NMT
model could easily fail and need related knowledge. Experiments on six diverse target domains and two language-pairs show that pruning according to local correctness brings a light and more explainable memory for kNN-MT
domain adaptation1.
## 1 Introduction
Domain adaptation in neural machine translation
(NMT) aims at adapting pre-trained NMT models to a target domain (Chu et al., 2017; Thompson et al., 2019; Hu et al., 2019; Zhao et al., 2020; Zheng et al., 2021). Fine-tuning (Luong and Manning, 2015) has been the de facto standard for adaptation. However, fine-tuning suffers from the catastrophic forgetting problem (McCloskey and Cohen, 1989; French, 1999).
Recently, Khandelwal et al. (2021) propose kNN-MT, showing a new paradigm for domain adaptation. kNN-MT first explicitly extracts translation knowledge in the target domain training data into a key-*value* datastore with a pre-trained NMT
model. For each datastore entry, the key is a continuous representation and the value is a symbolic token. The datastore is then used to assist the NMT
1Code will be released at https://github.com/NJUNLP/
knn-box model during translation. The kNN-MT framework circumvents the necessity to disturb the parameters of the pre-trained NMT model and enables quick adaptation by switching datastores.
kNN-MT incorporates the symbolic datastore to assist the neural model (Khandelwal et al., 2021; Zheng et al., 2021; Jiang et al., 2021). However, the datastore usually stores all the target tokens in the parallel data, without considering the capability of the neural model. As a result, the datastore is usually huge in size and possibly redundant.
To understand the relationship between the datastore and the NMT model, this paper conducts investigations on the interpretability issue: *what* knowledge does the NMT model need? Intuitively, the pre-trained NMT model only needs knowledge that remedies its weaknesses. Thus, we propose to explore this issue from the point of *local correctness* (Section 3). Our local correctness includes two aspects, the correctness of translating a given entry (*entry correctness*) and, more importantly, the correctness of performing translation in a given neighborhood in the representation space (*neighborhood correctness*).
For the entry correctness, we check whether the NMT could make correct translation for the entry itself and accordingly split the datastore entries into two categories, namely *known* and *unknown*. Based on entry correctness, we examine neighborhood correctness to more comprehensively evaluate the NMT model's underlying capability. Specifically, we propose a *knowledge margin* metric to evaluate the maximum size of the neighborhood where the NMT could make correct translation. Intuitively, the NMT model may fail when the knowledge margin is small.
To verify our interpretation, we devise a datastore pruning algorithm PLAC (Pruning with LocAl Correctness), which simply removes entries with a higher knowledge margin value (Section 4). These entries are less useful for adaptation, because the NMT model translates well in their neighborhood.
We conduct experiments on six diverse target domains in two language pairs (Section 6). Compared with existing pruning baselines (Martins et al.,
2022; Wang et al., 2022), PLAC prunes more entries (up to 45%) in four OPUS domains' datastore without hurting translation performance. Through ablation study, we reveal that simply relying on entry correctness is not enough, showing that the novel metric knowledge margin for the neighborhood correctness could be the key to build a light and more explainable memory for kNN-MT domain adaptation.
## 2 Background
For NMT domain adaptation, kNN-MT constructs a datastore D based on the given target domain bilingual corpus C and use it to provide helpful target domain translation knowledge for the pretrained NMT model M. In this section, we briefly introduce kNN-MT and its advanced variant, adaptive kNN-MT (Zheng et al., 2021).
## 2.1 Building A Domain Specific Datastore
Given target domain bilingual corpus C, all translation pairs in C are fed into the frozen pre-trained NMT model for decoding with teacher-forcing
(Williams and Zipser, 1989). At decoding time step t, the hidden state from the last decoder layer h(x, y<t) is taken as key and the t-th target token ytis taken as value, resulting in a key-value pair.
For the entire corpus, the datastore D is consisted of key-value pairs:
$${\mathcal{D}}=\{(h(\mathbf{x},\mathbf{y}_{<t}),y_{t})\mid\forall y_{t}\in\mathbf{y},(\mathbf{x},\mathbf{y})\in{\mathcal{C}}\},\tag{1}$$
where y<t denotes previous tokens in the sequence y. Each entry in the datastore explicitly memorizes the following translation knowledge: generating the value token at the decoder hidden state key.
And the datastore covers all target language token occurrences.
## 2.2 Translating With The Datastore
During inference, given a source language sentence x, kNN-MT simultaneously leverages M
and D to generate target language translation y={y1, y2, · · · , y|y|}. More specifically, at decoding time step t, kNN-MT queries the datastore with the decoder hidden state h(x, y<t) generated by M. The k nearest neighbors of the query Nk = {(h j, yj)}
k 1 are retrieved, which are k entries with keys closest to the query according to squared-L
2 distance, d. These retrieved knowledge are converted into a distribution over the vocabulary:
$$\begin{array}{c}{{p_{k\mathrm{NN}}(y_{t}|{\bf x},{\bf y}_{<t})\propto}}\\ {{\sum_{(h^{j},y^{j})\in{\mathcal{N}}_{k}}{\bf1}_{y_{t}=y^{j}}\exp(\frac{-d(h^{j},h({\bf x},{\bf y}_{<t}))}{T}),}}\end{array}$$
where T is the temperature. Then, kNN-MT interpolates pkNN with the pre-trained NMT model's output distribution as the final translation distribution:
$$\begin{array}{c}{{p(y_{t}|{\bf x},{\bf y}_{<t})=\lambda\;p_{k\mathrm{NN}}(y_{t}|{\bf x},{\bf y}_{<t})}}\\ {{\qquad\qquad+\;(1-\lambda)\;p_{\mathrm{NMT}}(y_{t}|{\bf x},{\bf y}_{<t})}}\end{array}\quad(3)$$
The complete translation y can be generated by beam search.
## 2.3 Adaptive K**Nn-Mt**
For vanilla kNN-MT, the selection of hyperparameters, such as k or λ, highly affect the final translation performance, which is less stable across languages or domains. Adaptive kNN-MT
uses a lightweight meta-k neural network to dynamically determine the usage of retrieved entries, which avoids the tuning of hyper-parameters and achieves a more stable performance (Zheng et al.,
2021).
## 3 **What Knowledge Does The Nmt Model** Need?
Although less accurate, the pre-trained NMT model could perform translation without the datastore.
This fact suggests that the NMT model knows some bilingual knowledge of the target domain. However, the construction of datastore dismisses this point and results in a huge amount of entries being stored.
Intuitively, the pre-trained NMT model only needs knowledge that remedies its weaknesses. To find out these weaknesses and build more explainable memory, we start from investigating entry correctness. Based on this basic concept, we further study neighborhood correctness and find that it precisely reflects the NMT model's strengths and weaknesses.
## 3.1 Known V.S. Unknown For Entry Correctness
The capability of the NMT model in target domain is difficult to describe directly. However, as the datastore consists of entries constructed on training set, it is easier to check whether the NMT model could make correct translation for them.
This can be efficiently accomplished by an extra evaluation during the teacher-forcing decoding.
More specifically, at each time step t of the teacherforcing process, we not only record the hidden states h(x, y<t) and the correct target token yt, but also evaluate the prediction of the NMT model y′t
,
which is the target token with the highest probability pNMT(y′t|x, y<t) . Then we call an entry as a *known* entry if the NMT model could predict it correctly; and *unknown*, otherwise (Equation 4).
$$(h(\mathbf{x},\mathbf{y}_{<t}),y_{t}){\mathrm{~is~}}\begin{cases}{k n o w n,}&{{\mathrm{~if~}}y_{t}^{\prime}=y_{t}}\\ {u n k n o w n,}&{{\mathrm{~o.w.}}}\end{cases}\quad(4)$$
Obviously, the *unknown* entries in the datastore are important, because these are the points where the NMT model tends to make a mistake.
## 3.2 The Knowledge Margin Metric For Neighborhood Correctness
However, entry correctness alone could not fully reveal the NMT model's weaknesses. Because for known entries, the NMT model may still fail during inference where the context could be similar but different. Considering that the contextualized representations of similar context stay close in the representation space (Peters et al., 2018), we propose to investigate the NMT model's translation performance in a neighborhood.
We propose a metric called knowledge margin, denoted as km, to measure the neighborhood correctness. Given an entry (*h, y*), its neighborhood is defined by its k nearest neighbors2in the datastore Nk(h) = {(h j, yj)}
k 1
. The knowledge margin of the entry, i.e. km(h), is defined as:
$$\arg\max_{k}\left(h^{j},y^{j}\right)\text{is known,}\forall(h^{j},y^{j})\in\mathcal{N}_{k}(h).\tag{5}$$
Intuitively, km is the maximum size of the neighborhood of the entry h where the NMT could make
correct translation. If considering at most ¯k nearest neighbors of h, its knowledge margin will be a number between 0 and ¯k.
Please note that the definition of knowledge margin applies for any point in the representation space, because for each point (e.g. an actual query q during inference), its neighborhood Nk(q) could be defined by querying the datastore. This extension allows the investigation of the NMT model at any given point in the representation space.
## 3.3 Empirical Analysis
We now present an empirical analysis of the relationship between the NMT model and the datastore, and reveal the NMT model's weaknesses.
Settings We follow Zheng et al. (2021) and consider four domains in German-English OPUS
dataset (Tiedemann, 2012) as target domains3. Table 1 lists statistics of four domains4. For pretrained NMT model, we use the winner model of WMT'19 German-English news translation task 5(Ng et al., 2019). The datastore for each domain is constructed on the corresponding training set with the pre-trained NMT model.
| OPUS- | OPUS- | OPUS- | OPUS | |
|---------|---------|---------|---------|--------|
| Medical | Law | IT | Koran | |
| Train | 248,099 | 467,309 | 222,927 | 17,982 |
| Dev | 2,000 | 2,000 | 2,000 | 2,000 |
| Test | 2,000 | 2,000 | 2,000 | 2,000 |
Table 1: Number of sentences of the OPUS dataset.
"Train", "Dev", "Test" denote training, development, test set, respectively.
Entry Correctness We collect statistics about the two categories of entries and report results in Table 2. The results show that 56%∼73% (averaging 66.7%) of datastore entries are *known* by the pre-trained NMT model. This high ratio strongly indicates that a large amount of datastore entries may be redundant.
![3_image_0.png](3_image_0.png)
| OPUS- | OPUS- | OPUS- | OPUS | |
|-------------|-----------|----------------------|-----------|---------|
| Medical | Law | IT | Koran | |
| known | 5,070,607 | 14,803,149 2,514,757 | 294,094 | |
| unknown | 1,844,966 | 4,287,906 | 1,093,974 | 230,677 |
| |D| | 6,915,573 | 19,091,055 3,608,731 | 524,771 | |
| known ratio | 73.32% | 66.74% | 69.69% | 56.04% |
2048). The distributions on four OPUS domains show the same trends. Most *unknown* entries has a very low knowledge margin, e.g., around 90%
of *unknown* entries have a margin value between 0 and 4. In contrast, the distribution for *known* entries is more diverse. The results indicate that the neighborhood correctness is consistent with the entry correctness, but may provide more information for known entries.
To verify the relation between knowledge margin and NMT model's translation ability, we conduct experiments on the development set for each domain, where translation context are unseen. For each token ytin the dev set, we perform teacherforcing until time step t−1 and query the datastore for the neighborhood at time step t. We evaluate the knowledge margin of the query and the prediction accuracy of the NMT model.
Figure 2 shows the results. For tokens with higher margins, e.g. km ≥ 32, the prediction accuracy of the NMT model is higher than 95%. In contrast, for tokens with lower margins, e.g. *km <* 4, the accuracy is lower than 50%. This is a strong evidence that the NMT model could easily fail when knowledge margin is small.
In Table 3, we also show a translation example for such a condition, where knowledge margin of the current query is 0 and the NMT model fails to
![3_image_1.png](3_image_1.png)
## 4 **Building Explainable Memory Based On** Local Correctness
Because local correctness are good indicators for translation failures. It could also be interpreted as the importance of datastore entries. To verify this interpretation, we propose a pruning algorithm, i.e.
Pruning with LocAl Correctness (**PLAC**), to cut off entries with a high knowledge margin (Algorithm 1).
There are two steps in the algorithm. In the first step, each entry (*h, y*) in the datastore D is checked for their local correctness. If knowledge margin of
(*h, y*) is greater than or equal to the threshold kp, the entry is collected as the pruning candidates 7.
In the second step, these pruning candidates are randomly selected and get removed from D until the required pruning ratio is reached. Since our method does not need to train any additional neural networks, it can be easily implemented. The pruned
| Source sentence (x): Wie ist Cy@@ an@@ ok@@ it anzu@@ wenden ? Previous translation (y<t): How to use Cy@@ an@@ ok@@ No. Type Retrieved Keys: source (x) Retrieved Keys: target (y<t) | Retrieved Values | | | |
|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------|----------------------------------------------------------------------------------------------------------|----------------------------------|----|
| 1 | unknown | Wie wird Cy@@ an@@ ok@@ it ange@@ wendet ? | How is Cy@@ an@@ ok@@ | it |
| 2 | unknown | Sie erhalten Cy@@ an@@ ok@@ it als In@@ fusion | You will have Cy@@ an@@ ok@@ | it |
| in eine V@@ ene . | | | | |
| 3 | unknown | Wo@@ für wird Cy@@ an@@ ok@@ it ange@@ | What is Cy@@ an@@ ok@@ | it |
| wendet ? | | | | |
| 4 | unknown | Wel@@ ches Risiko ist mit Cy@@ an@@ ok@@ it | What is the risk associated with | it |
| verbundenn ? | Cy@@ an@@ ok@@ | | | |
| 5 | unknown | Die folgenden Neben@@ wirkungen wurden in Verbindung mit der Anwendung von Cy@@ an@@ ok@@ it berichtet . | The following un@@ desi@@ rable | it |
| effects have been reported in association with Cy@@ an@@ ok@@ | | | | |
| 6 | unknown | Warum wurde Cy@@ an@@ ok@@ it zugelassen ? | Why has Cy@@ an@@ ok@@ | it |
| 7 | unknown | Beson@@ dere Vorsicht bei der Anwendung von | Take special care with Cy@@ an@@ | it |
| Cy@@ an@@ ok@@ it ist erforderlich | ok@@ | | | |
| 8 | unknown | Wie wirkt Cy@@ an@@ ok@@ it ? | How does Cy@@ an@@ ok@@ | it |
| NMT's prediction (y ′ t ): ite Correct target token (yt): it | | | | |
Table 3: An example where the NMT model fails (sentence are tokenized into subwords). At the current time step, all retrieved entries are *unknown* for the NMT model, so knowledge margin is 0. The prediction of NMT is highly likely to be wrong. With these retrieved entries, the kNN-MT could make a correct prediction.
datastore can be used in different kNN-MT models, such as adaptive kNN-MT.
Algorithm 1 Datastore Pruning by PLAC
Input: datastore D, the *knowledge margin* threshold kp, the pruning ratio r Output: pruned datastore D
1: candidates ← ∅ ▷ step 1: collect 2: for each entry (*h, y*) in D do 3: if *km(h*) ≥ kp **then**:
4: candidates ← candidates ∪ (*h, y*)
5: **end if**
6: **end for**
7: **repeat** ▷ step 2: drop 8: randomly select entry (*h, y*) from *candidates* 9: remove (*h, y*) from D
10: **until** pruning ratio r is satisfied 11: **return** D
## 5 Experiment Setup
![4_Image_0.Png](4_Image_0.Png)
This section introduces general experiment setup for evaluating pruning effect. More implementation details can be found in Appendix C.
## 5.1 Data And Processing
We conduct datastore pruning for 6 different domains from 2 language pairs. Specifically, we take 4 OPUS domains for De-En experiments and 2 UM
domains8for Zh-En experiments (Tian et al., 2014),
8We split the original training set into training, development, test set because there is no development set provided in the original dataset and there exists an overlap between original training and test sets. Detailed description of UM
domains can be found in Appendix A.
which are all benchmark dataset for NMT domain adaptation research.
| UM-Law | UM-Thesis | |
|----------|-------------|---------|
| Train | 216,000 | 296,000 |
| Dev | 2,000 | 2,000 |
| Test | 2,000 | 2,000 |
Table 4: Detailed statistics of UM dataset. We report the sentence number of each subset. "Train", "Dev", "Test" denote training, development, test set respectively.
For preprocessing, we use *moses*9toolkit to tokenize German and English corpus and *jieba*10 to tokenize Chinese corpus. Byte pair encoding11
(BPE) is applied for subword segmentation.
## 5.2 Pre-Trained Nmt Model
For De-En tasks, we use the winner model of WMT'19 De-En news translation task, which is based on the Transformer architecture (Vaswani et al., 2017). For Zh-En tasks, we train a base Transformer model from scratch on CWMT'17 ZhEn Dataset12 (9 million sentence pairs), since we do not find any publicly available Zh-En pre-trained NMT model on the website.
The pre-trained NMT model is the unadapted general domain model for each language pair, which is the starting point for domain adaptation.
9https://github.com/moses-smt/mosesdecoder 10https://github.com/fxsjy/jieba 11https://github.com/rsennrich/subword-nmt 12http://nlp.nju.edu.cn/cwmt-wmt
| OPUS-Medical | OPUS-Law | OPUS-IT | OPUS-Koran | | | | | | | | | |
|----------------|--------------|-----------|--------------|-------|--------------|---------|--------------|--------|---------|-----|--------|---------|
| Ratio | BLEU↑ COMET↑ | Ratio | BLEU↑ COMET↑ | Ratio | BLEU↑ COMET↑ | Ratio | BLEU↑ COMET↑ | | | | | |
| Base | - | 39.73 | 0.4665 | - | 45.68 | 0.5761 | - | 37.94 | 0.3862 | - | 16.37 | -0.0097 |
| Finetune | - | 58.09 | 0.5725 | - | 62.67 | 0.6849 | - | 49.08 | 0.6343 | - | 22.40 | 0.0551 |
| Adaptive kNN | 0% | 57.98 | 0.5801 | 0% | 63.53 | 0.7033 | 0% | 48.39 | 0.5694 | 0% | 20.67 | 0.0364 |
| Random | 45% | 54.08∗ | 0.5677 | 45% | 58.69∗ | 0.6690∗ | 40% | 45.54∗ | 0.5314∗ | 25% | 20.36 | 0.0434 |
| Cluster | 45% | 53.31∗ | 0.5689 | 45% | 58.68∗ | 0.6779∗ | 40% | 45.80∗ | 0.5788 | 25% | 20.04∗ | 0.0410 |
| Merge | 45% | 54.65∗ | 0.5523∗ | 45% | 60.60∗ | 0.6776∗ | 40% | 45.83∗ | 0.5334∗ | 25% | 20.25∗ | 0.0365 |
| Known | 45% | 56.44∗ | 0.5691 | 45% | 61.61∗ | 0.6885∗ | 40% | 45.93∗ | 0.5563 | 25% | 20.35∗ | 0.0338 |
| All Known | 73% | 42.73∗ | 0.4926∗ | 66% | 51.90∗ | 0.6200∗ | 69% | 40.93∗ | 0.4604∗ | 56% | 17.76∗ | 0.0008∗ |
| PLAC (ours) | 45% | 57.66 | 0.5773 | 45% | 63.22 | 0.6953∗ | 40% | 48.22 | 0.5560 | 25% | 20.96 | 0.0442 |
For kNN methods, it also serves as the base for building the datastore.
## 5.3 Systems For Comparison
We report the performance of the following systems for reference: the pre-trained NMT model (Base),
the pre-trained model finetuned on each target domain (Finetune) (Luong and Manning, 2015), adaptive kNN-MT with full datastores built for each target domain on their training set (Adaptive kNN)
(Zheng et al., 2021). Finetuning and Adaptive kNN
are two popular alternatives for adaptation.
The following pruning methods are applied to the datastore of Adaptive kNN for comparison:
randomly pruning (**Random**), cluster-based pruning (**Cluster**) (Wang et al., 2022), merging similar entries (**Merge**) (Martins et al., 2022), randomly pruning *known* entries (**Known**), pruning all *known* entries (**All Known**). Among them, Cluster and Merge are pruning methods based on the context similarity of different entries (Wang et al., 2022; Martins et al., 2022).
We report case-sensitive detokenized BLEU (Papineni et al., 2002) calculated by *sacrebleu*13 and COMET (Rei et al., 2020) calculated by publicly available *wmt20-comet-da*14 model. For the prunign methods, statistical significance test
(Koehn, 2004) against the full datastore (Adaptive kNN) are conducted as well.
## 6 Experiment Results And Analysis 6.1 Safely Pruning With Plac
Experiment results on OPUS domains are presented in Table 5. For the reference, the pre-trained NMT model usually does not translate well on target domains. Finetuning and Adaptive kNN have
## Comparable Performances.
We perform datastore pruning with PLAC for different domains and report the largest pruning ratio without significant performance degradation on the test set.
Compared with using full datastore (Adaptive kNN), our method (**PLAC**) cutting off 25%-45%15 entries of the datastore while achieving comparable performance. On the two largest domains, "OPUSMedical" and "OPUS-Law", our method successfully prunes 45% datastore (millions of key-value pairs). Excellent pruning performance validates our analysis concerning with local correctness.
Cluster and Merge lead to a larger degradation of translation performance16, showing that entries with identical target tokens indeed have different importance in assisting the NMT model. Simply pruning all *known* entries results in a significant drop of performance (All Known). Pruning *known* entries to the same ratio as PLAC also lead to degradation (Known), although it outperforms Cluster and Merge. These comparisons indicates that the entry correctness only partially reflects entry importance, demonstrating the necessity of the neighborhood correctness analysis with knowledge margin.
The results on UM domains are presented in Table 6. The datastore could be pruned by 30% for
"UM-Law" and 15% for "UM-Thesis" Datastore without any sacrifice in translation performance.
The other findings are similar with those in GermanEnglish experiments.
| Base | - | 30.36 | 0.3857 | - | 13.13 | -0.0442 |
|--------------|-----|---------|----------|-----|---------|-----------|
| Finetune | - | 58.82 | 0.6375 | - | 16.86 | -0.0295 |
| Adaptive kNN | 0% | 58.64 | 0.6017 | 0% | 17.49 | -0.0146 |
| Random | 30% | 53.78∗ | 0.5661∗ | 15% | 16.14∗ | -0.0280∗ |
| Cluster | 30% | 49.65∗ | 0.5274∗ | 15% | 15.73∗ | -0.0419∗ |
| Merge | 30% | 56.51∗ | 0.5873∗ | 15% | 17.00∗ | -0.0296∗ |
| Known | 30% | 56.92∗ | 0.5762∗ | 15% | 17.25 | -0.0143 |
| All Known | 63% | 46.45∗ | 0.4720∗ | 47% | 15.33∗ | -0.0525∗ |
| PLAC (ours) | 30% | 58.65 | 0.6056 | 15% | 17.52 | -0.0122 |
![6_image_0.png](6_image_0.png)
## 6.2 How Knowledge Margin Affects Pruning Performance?
In this section, we examine how knowledge margin affects pruning performance and provide more insight into our proposed method. Figure 3 plots BLEU scores of adaptive kNN-MT models with pruned datastore under different pruning ratios on development sets. We can observe that trends are mostly similar in different domains. Pruning by PLAC achieves the best performance over the other baselines and the performance is more stable even with a higher pruning ratio.
Note that Known is a case where neighborhood correctness is dismissed during entry pruning. Although it outperforms Random, Cluster and Merge in most scenarios, its performance is still unstable.
When tuning the hyperparameter kp among {4, 8, 16, 32}, we can see a trade-off between BLEU
score and the pruning ratio. Large kp leads to a small sacrifice of BLEU score but a lower pruning ratio. Small kp allows us to prune more entries but causes significant BLEU scores decline after a specific threshold ratio. For example, when kp = 4, it is allowed to prune 55% "OPUS-Medical" datastore, but translation performance declines drastically after the pruning ratio reaches 50%. Finally, we choose the top-right point17 in each subfigure as the best-performed setting for each domain, which are used in other experiments.
## 6.3 **Datastore Entries With Lower Knowledge** Margin Are Indeed Valuable
In this section, we want to verify that entries with low knowledge margin are truly important for NMT
adaptation. For this purpose, we remove entries 17Hyper-parameter values of these points are reported in Appendix C.
from datastore with a reversed strategy, i.e. the knowledge margin of (*h, y*) is less than kp.
Table 7 shows pruning effect. We can see that pruning entries with reverse strategy suffers significant performance decline at even a small pruning ratio, demonstrating the importance of these entries for domain adaptation. We also show some cases for each domain in Table 8. We can see that target tokens of these valuable entries are more domainspecific, e.g. "dose" and "Executive".
Table 7: Translation performance difference (BLEU)
compared with Adaptive kNN using full datastore under different pruning ratios.
| OPUS-Law | 10% | 20% | 30% | 40% | 45% |
|-----------------|-------|-------|-------|-------|--------|
| reverse pruning | -1.91 | -4.00 | -6.19 | -8.71 | -10.38 |
| PLAC (ours) | +0.00 | -0.19 | +0.18 | -0.21 | -0.31 |
## 6.4 Plac Is Applicable To Different K**Nn-Mt** Variants
For more comprehensive evaluation, we plug our pruned datastore into different kNN-MT variants, i.e. vanilla kNN (Khandelwal et al., 2021), KSTER
(Jiang et al., 2021) and adaptive kNN. Experiment results on OPUS-Law domain show that our pruned datastore does almost no harm to the translation performance of different variants, demonstrating the effectiveness of PLAC.
| Domain | Source Sentence (x) | Target Sentence (y) | | | |
|------------------------------------|-----------------------------------------------|--------------------------------------------------------------------|-----------|----------------------------------|-----|
| OPUS-Medical | Die Höchst@@ do@@ sis sollte 30 mg - Tag nich | The maximum dose should not exce@@ ed 30 mg / day . | | | |
| überschrei@@ ten . | | | | | |
| OPUS-Law | Das Direkt@@ ori@@ um entscheidet über die | The Executive Board shall decide on the organisation of its | | | |
| Organisation seiner Sitz@@ ungen . | meetings . | | | | |
| OPUS-IT | Sie haben eventuell einen Programm@@ fehler | You may have encounter@@ ed a bu@@ g in the program . | | | |
| entdeckt . | | | | | |
| OPUS-Koran | Das ist eine schmerz@@ hafte P@@ ein . | Target sentence: That would be a grie@@ v@@ ous aff@@ li@@ ction . | | | |
| UM-Law | 保险公司 依法 接受 监督 检查 。 | Any | insurance | company shall accept supervision | and |
| inspection according to law . | | | | | |
| UM-Thesis | 中国 能源需求 及其 风险管理 研究 | The Research on Energy Demand and Its Risk Management in China | | | |
Table 8: Case study for remaining knowledge in different domain's pruned datastore. The underlined parts are target tokens of entries with small margin values.
| OPUS-Medical | OPUS-Law | OPUS-IT | OPUS-Koran | UM-Law | UM-Thesis | | | | | | | |
|----------------|------------|-----------|--------------|----------|-------------|-------|----|-------|-----|-------|-----|-----|
| Space | ∆ | Space | ∆ | Space | ∆ | Space | ∆ | Space | ∆ | Space | ∆ | |
| Full Datastore | 492 | - | 1,328 | - | 265 | - | 54 | - | 680 | - | 810 | - |
| PLAC (ours) | 279 | 43% | 739 | 44% | 166 | 37% | 45 | 17% | 479 | 30% | 690 | 15% |
Table 9: Memory Space (MB) comparsion between pruned datastore and full datastore. "Space" denotes the memory space taken by the index file and "∆" denotes the percentage of space saved by our method.
Table 10: Translation performance (BLEU) of different kNN-MT variants with full and pruned datastore on OPUS-Law domain's test set.
| kNN-MT Variants | Full | Pruned |
|---------------------------------------|--------|----------|
| Vanilla kNN (Khandelwal et al., 2021) | 61.34 | 61.24 |
| KSTER (Jiang et al., 2021) | 62.45 | 62.30 |
| Adaptive kNN (Zheng et al., 2021) | 63.53 | 63.22 |
## 6.5 Pruned Datastore Occupies Less Memory Space
In practice, the datastore must be loaded to CPU
and GPU memory during inference. So its size affects the efficiency. Since Faiss index is used to index and represent the datastore, we compare the size of index file before and after pruning (Table 9). For all the domains, our pruning method PLAC significantly reduces the memory occupation. The ratio of saved memory space is roughly identical with the PLAC pruning ratio. For the largest datastore, "OPUS-Law", the memory space can be reduced by 44%.
## 7 Related Work
Less attention have been paid to the research of interpretability of kNN-MT. To the best of our knowledge, we are the first to systematically study the relationship between the NMT model and the datastore. As for datastore pruning, Wang et al. (2022)
and Martins et al. (2022) prune the datastore based on the hypothesise that entries with similar translation are redundant. Actually, entries with similar translations may have different importance to the translation. Our analysis suggests one way to understand these differences.
## 8 Conclusion
It is interesting to explore how a neural model and a symbolic model works together. In this paper, we propose to analyze the local correctness of the neural model's predictions to identify the conditions where the neural model may fail. By introducing a knowledge margin metric to measure the local correctness, we find that the NMT model often fails when the knowledge margin is small. These results provide support for building a more explainable machine translation system.
Based on analyses, we can safely prune the datastore with the proposed PLAC method. Empirically, the datastore could be successfully pruned up to 45% while retaining translation performance. This results validate our earlier findings about the local correctness and translation failures.
Our method is general to different kNN-MT
variants and easy to implement. Future directions maybe using local correctness to explore more interpretability issue of NMT domain adaptation, e.g.
catastrophic forgetting.
## 9 Limitation
During inference, kNN-MT have to query the datastore at each decoding step, which is timeconsuming. Although up to 45% datastore entries can be safely pruned by our method, deploying a high-quality kNN-MT system with fast inference speed is still an open challenge.
## 10 Ethical Considerations
In kNN-MT works, the symbolic datastore helps adaptation but also introduce privacy concerns. Since kNN-MT explicitly saves all target language tokens in the datastore, there is a risk of privacy leakage. In the future, more efforts may be put into addressing this issue.
## Acknowledgement
We would like to thank the anonymous reviewers for their insightful comments. Shujian Huang is the corresponding author. This work is supported by National Science Foundation of China
(No. 62176120), the Liaoning Provincial Research Foundation for Basic Research (No. 2022-KF-2602).
## References
Roee Aharoni and Yoav Goldberg. 2020. Unsupervised domain clusters in pretrained language models. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL).
Chenhui Chu, Raj Dabre, and Sadao Kurohashi. 2017.
An empirical comparison of domain adaptation methods for neural machine translation. In *Annual Meeting of the Association for Computational Linguistics*
(ACL).
Robert M French. 1999. Catastrophic forgetting in connectionist networks. *Trends in cognitive sciences*.
Junjie Hu, Mengzhou Xia, Graham Neubig, and Jaime G
Carbonell. 2019. Domain adaptation of neural machine translation by lexicon induction. In Annual Meeting of the Association for Computational Linguistics (ACL).
Qingnan Jiang, Mingxuan Wang, Jun Cao, Shanbo Cheng, Shujian Huang, and Lei Li. 2021. Learning kernel-smoothed machine translation with retrieved examples. In Proceedings of the Conference on Empirical Methods in Natural Language Processing
(EMNLP).
Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019.
Billion-scale similarity search with gpus. *IEEE*
Transactions on Big Data.
Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2021. Nearest neighbor machine translation. In International Conference on Learning Representations (ICLR).
Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (ACL).
Minh-Thang Luong and Christopher D Manning. 2015.
Stanford neural machine translation systems for spoken language domains. In International Workshop on Spoken Language Translation (IWSLT).
Pedro Martins, Zita Marinho, and Andre Martins. 2022.
Efficient machine translation domain adaptation. In Proceedings of the Workshop on Semiparametric Methods in NLP: Decoupling Logic from Knowledge.
Michael McCloskey and Neal J Cohen. 1989. Catastrophic interference in connectionist networks: The sequential learning problem. In *Psychology of learning and motivation*. Elsevier.
Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. 2019. Facebook FAIR's WMT19 news translation task submission.
In *Proceedings of the Conference on Machine Translation (WMT)*.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT).
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* Annual Meeting of the Association for Computational Linguistics (ACL).
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT).
Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT
evaluation. In *Proceedings of the Conference on* Empirical Methods in Natural Language Processing
(EMNLP), pages 2685–2702.
Brian Thompson, Jeremy Gwinnup, Huda Khayrallah, Kevin Duh, and Philipp Koehn. 2019. Overcoming catastrophic forgetting during domain adaptation of neural machine translation. In *Proceedings of the* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT).
Liang Tian, Derek F Wong, Lidia S Chao, Paulo Quaresma, Francisco Oliveira, Yi Lu, Shuo Li, Yiming Wang, and Longyue Wang. 2014. Um-corpus:
A large english-chinese parallel corpus for statistical machine translation. In *Proceedings of international* conference on language resources and evaluation
(LREC).
Jörg Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In *Proceedings of the Eighth International Conference on Language Resources and* Evaluation (LREC).
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems (NeurIPS)*.
Dexin Wang, Kai Fan, Boxing Chen, and Deyi Xiong.
2022. Efficient cluster-based k-nearest-neighbor machine translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics
(ACL).
Ronald J Williams and David Zipser. 1989. A learning algorithm for continually running fully recurrent neural networks. *Neural Computation*.
Yang Zhao, Jiajun Zhang, Yu Zhou, and Chengqing Zong. 2020. Knowledge graphs enhanced neural machine translation. In *International Joint Conference* on Artificial Intelligence (IJCAI).
Xin Zheng, Zhirui Zhang, Junliang Guo, Shujian Huang, Boxing Chen, Weihua Luo, and Jiajun Chen. 2021.
Adaptive nearest neighbor machine translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL).
## A Detailed Descriptions Of Target Domains
"OPUS-Medical" domain is made out of PDF documents from the European Medicines Agency.
"OPUS-Law" domain is a collection of the legislative text of the European Union. "OPUS-IT"
domain is constructed from localization files and documents of GNOME, KDE, PHP, Ubuntu, and OpenOffice. "OPUS-Koran" domain is a collection of Quran translations complied by the Tanzil project. "UM-Law" domain contains law statements from mainland China, Hong Kong, and Macau. "UM-Thesis" domain is composed of journal topics in the research area, including electronics, agriculture, biology, economy, etc.
## B Involved Scientific Artifacts
In this section, we list the artifact used in our project:
Moses (LGPL-2.1-License): It is a statistical machine translation system that allows you to automatically train translation models for any language pair.
Jieba (MIT-License): it is a library for chinese word segmentation.
Subword-nmt (MIT-License): Subword-nmt is a package containing preprocessing scripts to segment text into subword units.
Fairseq (MIT-license): It is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling and other text generation tasks.
Faiss (MIT-license): It is a library for efficient similarity search and clustering of dense vectors.
For the sake of ethic, our use of these artifacts is consistent with their intended use.
## C Implementation Details
We implement adaptive kNN-MT with Zheng et al. (2021)'s released code and script18 based on *fairseq*19 (Ott et al., 2019). Due to the large space of hyper-parameters, we follow Zheng et al.
(2021) to set the number of retrieved entries (ka)
as 8 when training adaptive kNN-MT models for 18https://github.com/zhengxxn/adaptive-knn-mt 19https://github.com/pytorch/fairseq most experiments, and report pruning performance under different ka in Appendix D. During inference, we set beam size as 5 and length penalty as 1.0.
For implementing PLAC, the hyper-parameter kp in Algorithm 1 implicitly determines the maximum number of entries that are allowed to be pruned. So we tune kp among the subset of {4, 8, 16, 32} when given different pruning ratio r.
After buiding the datastore, we follow previous kNN-MT works (Khandelwal et al., 2021; Zheng et al., 2021) and use Faiss20 index (Johnson et al.,
2019) to represent the datastore and accelerate nearest neighbors search.
In Table 11, we report hyperparameters to reproduce our main results in Table 5 and 6. In our experiments, it takes at most 1.5 GPU hours to train adaptive kNN-MT models on a single NVIDIA Titan RTX.
| Target Domain | kp | r | ka | T |
|-----------------|------|-----|------|-----|
| OPUS-Medical | 8 | 45% | 8 | 10 |
| OPUS-Law | 16 | 45% | 8 | 10 |
| OPUS-IT | 4 | 40% | 8 | 10 |
| OPUS-Koran | 4 | 25% | 8 | 100 |
| UM-Law | 4 | 30% | 8 | 100 |
| UM-Thesis | 4 | 15% | 8 | 100 |
Table 11: Hyperparameters for pruning datastore and training adaptive kNN-MT models.
## D Pruning Effect Is Insensitive To Hyperparameter Ka
To demonstrate the reliability of our pruned datastore, after pruning datastore, we train adaptive kNN-MT models with different hyperparameter ka and evaluate their translation performance (BLEU)
on "OPUS-Law" domain's test set (Table 12). Results show that our pruning method enjoys consistent performance under different ka.
| OPUS-Law | ka = 4 | ka = 8 | ka = 16 | ka = 32 |
|--------------|----------|----------|-----------|-----------|
| Adaptive kNN | 63.31 | 63.53 | 63.56 | 63.33 |
| PLAC (ours) | 62.93 | 63.22 | 63.18 | 63.22 |
Table 12: Pruning performance under different ka.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
section 9
✓ A2. Did you discuss any potential risks of your work?
section 10
✓ A3. Do the abstract and introduction summarize the paper's main claims?
section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Appendix B
✓ B1. Did you cite the creators of artifacts you used?
section 5
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
appendix B
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? appendix B
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C ✓ **Did You Run Computational Experiments?** Left Blank.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
appendix C
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? section 5
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
section 5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)? section 5 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
xu-etal-2023-measuring | Measuring Your {ASTE} Models in The Wild: A Diversified Multi-domain Dataset For Aspect Sentiment Triplet Extraction | https://aclanthology.org/2023.findings-acl.178 | Aspect Sentiment Triplet Extraction (ASTE) is widely used in various applications. However, existing ASTE datasets are limited in their ability to represent real-world scenarios, hindering the advancement of research in this area. In this paper, we introduce a new dataset, named DMASTE, which is manually annotated to better fit real-world scenarios by providing more diverse and realistic reviews for the task. The dataset includes various lengths, diverse expressions, more aspect types, and more domains than existing datasets. We conduct extensive experiments on DMASTE in multiple settings to evaluate previous ASTE approaches. Empirical results demonstrate that DMASTE is a more challenging ASTE dataset. Further analyses of in-domain and cross-domain settings provide some promising directions for future research. | # Measuring Your Aste Models In The Wild: A Diversified Multi-Domain Dataset For Aspect Sentiment Triplet Extraction Electronics
Ting Xu♠ , Huiyun Yang♣, Zhen Wu♠ , Jiaze Chen♣, Fei Zhao♠, Xinyu Dai♠
♠National Key Laboratory for Novel Software Technology, Nanjing University
♣ByteDance
{xut, zhaof}@smail.nju.edu.cn, {wuz, daixinyu}@nju.edu.cn
{yanghuiyun.11, chenjiaze}@bytedance.com I hate the curly cord . Other than that , they re great . Very light weight , perfect for recording and mixing . [([ia], great, POS), (curly cord, hate, NEG), (mixing, perfect, POS), (recording, perfect, POS), (weight, Very light, POS)]
## Abstract
Aspect Sentiment Triplet Extraction (ASTE) is widely used in various applications. However, existing ASTE datasets are limited in their ability to represent real-world scenarios, hindering the advancement of research in this area. In this paper, we introduce a new dataset, named DMASTE, which is manually annotated to better fit real-world scenarios by providing more diverse and realistic reviews for the task. The dataset includes various lengths, diverse expressions, more aspect types, and more domains than existing datasets. We conduct extensive experiments on DMASTE in multiple settings to evaluate previous ASTE approaches. Empirical results demonstrate that DMASTE is a more challenging ASTE dataset. Further analyses of in-domain and cross-domain settings provide promising directions for future research. Our code and dataset are available at https://github.com/NJUNLP/DMASTE.
## 1 Introduction
Aspect sentiment triplet extraction (ASTE; Peng et al., 2020), a fine-grained task in sentiment analysis (Hussein, 2018), has attracted considerable interest recently (Peng et al., 2020; Xu et al., 2020).
The objective of this task is to extract the sentiment triplet, comprising of an aspect term, an opinion term, and a sentiment polarity, from a given review.
As depicted in Figure 1, an example of the sentiment triplet is ("curly cord", "hate", NEG), representing a *negative* sentiment toward the aspect term
"curly cord" using the opinion term *"hate"*. The ASTE task requires a deep understanding of linguistic forms and structures (e.g., aspect terms are usually nouns or verbs used as subjects or objects in a sentence), as well as the ability to identify the relationships between the various linguistic components (e.g., how to pair the aspect terms and opinion terms) in a given text.
Prior ASTE methods (Yan et al., 2021; Xu et al.,
2021) have achieved promising results on exist2837
POS POS POS I hate the curly cord . Other than that , they' re great . Very light weight , perfect for recording and mixing .
NEG
POS
Triplets: (**curly cord**, *hate*, NEG); (**NULL**, *great*, POS);
(**weight**, *Very light*, POS); (**mixing**, *perfect*, POS); (**recording**, *perfect*, POS).
Figure 1: An example of the ASTE task. The terms highlighted in blue are aspect terms. The terms in orange are opinion terms. The words in green are sentiment polarities. "NULL" denotes the implicit aspect.
ing academic datasets (Peng et al., 2020; Xu et al.,
2020; Wu et al., 2020), greatly promoting the development and application of ASTE. However, the datasets employed in these studies remain comparatively uncomplicated, leading to disparities between these datasets and real-world settings in terms of various factors, such as length, expression diversity, domain distribution, etc. For instance, most reviews in existing datasets are of short length, with an average of 16 words per review, while reviews in real-world scenarios are longer (an average of more than 50 words). Additionally, expressions used in these datasets are typically simple and straightforward, with limited diversity in lexicality and syntactic. Furthermore, existing datasets typically contain two domains, i.e., restaurant and laptop, with very limited domain distributions. In a nutshell, these gaps hide the complexity of real-world scenarios, and therefore, impede the exploration to fully understand and address the challenges presented in real-world ASTE tasks.
In order to bridge the gap and better simulate real-world scenarios, we create a new dataset, named Diversified Multi-domain ASTE
(DMASTE), which is manually annotated to provide a more diverse and realistic set of reviews for the task. As Table 1 and Figure 2 shows, the key
![1_image_0.png](1_image_0.png)
Dataset #D #R #W/#R #T #IA #T/#I POS #n-gram DP #n-gram #vocab
DP 2 6037 16.49 9309 0 1.54 14.25/16.05/16.52 13.51/14.24/14.43 6512 DW 2 6009 16.46 10390 0 1.73 14.22/16.02/16.48 13.48/14.21/14.40 6484
DX 2 5989 16.43 10252 0 1.71 14.20/16.00/16.46 13.46/14.19/14.38 6467
DMASTE 8 7524 59.68 28233 11945 3.75 36.67/52.11/57.92 35.08/40.20/41.93 19226
characteristics of DMASTE can be summarized as follows: (1) **Various lengths**: DMASTE covers reviews of various lengths, ranging from 1 to 250 words, with an average of 59 words per review. (2)
Diverse expressions: more part-of-speech and dependency n-grams show a wide variety of lexical bundles and syntactic in DMASTE, which can better represent the complexity and diversity of realworld scenarios. (3) **More aspect types**: DMASTE
includes triplets annotated with both implicit and explicit aspect terms, providing a more comprehensive understanding of the target being discussed. (4)
More domains: DMASTE covers eight domains, enabling more comprehensive research in ASTE
like single-source and multi-source domain adaptation. To summarize, these characteristics make DMASTE a better benchmark to verify the ability of ASTE approaches in real-world scenarios.
To thoroughly investigate the challenges introduced by DMASTE and explore promising directions for future ASTE research, we implement several representative methods and empirically evaluate DMASTE under multiple settings:
- In-domain results show that the performance of current models declines significantly on DMASTE. And analysis reveals that long reviews, complex sentences, and implicit aspect terms make DMASTE *a challenging dataset*.
- In the single-source domain adaptation setting, we observe a positive correlation between transfer performance and domain similarity.
But simply learning domain-invariant features may lead to the loss of task-specific knowledge, which suggests that reducing domain discrepancy while keeping the task-specific knowledge can be a future direction.
- We observe the *negative transfer* (Rosenstein et al., 2005) in the multi-source domain adaptation setting, and find the negative transfer occurs mainly in dissimilar source-target pairs.
This indicates that *the domain similarity may* be a useful guideline for domain selection in future multi-source cross-domain ASTE research.
Overall, the results of DMASTE on multiple settings provide a deeper understanding of the challenges and future directions for ASTE research.
We believe this work will bring a valuable research resource and benchmark for the community.
## 2 Related Work 2.1 Aste Datasets
Current ASTE datasets (Peng et al., 2020; Xu et al., 2020; Wu et al., 2020) share a common origin and are constructed through similar processes. Specifically, they originate from SemEval Challenge datasets (Pontiki et al., 2014, 2015, 2016), which provide aspect terms and corresponding sentiments for reviews in the domains of restaurant and laptop.
Based on the datasets, Fan et al. (2019) annotate the aspect-opinion pairs. To provide more detailed information about the review text, researchers resort to extracting sentiment triplets from the review, i.e.,
aspect term, opinion term, and sentiment polarity.
Peng et al. (2020) and Wu et al. (2020) construct the ASTE datasets by aligning the aspect terms between the datasets of Fan et al. (2019) and original SemEval Challenge datasets. As noted by Xu et al. (2020), the dataset of Peng et al. (2020) does not contain cases where one opinion term is associated with multiple aspect terms. Xu et al. (2020)
subsequently refine the dataset and release a new version.
However, all of these datasets contains reviews of limited diversity from only two domains. Additionally, they all require aspect terms to align the aspect-sentiment pair and aspect-opinion pair, thus they do not include implicit aspect terms (Poria et al., 2014). Our dataset, DMASTE, addresses these limitations by providing a more diverse set of reviews covering more domains and annotate triplets with both implicit and explicit aspect terms, making it better suited for real-world scenarios 1.
## 2.2 Aste Methods
Corresponding solutions for ASTE can be divided into three categories: tagging-based (Li et al., 2019; Peng et al., 2020; Xu et al., 2020; Zhang et al.,
2020; Wu et al., 2020; Xu et al., 2021), MRCbased (Chen et al., 2021; Mao et al., 2021) and generation-based (Zhang et al., 2021b; Yan et al., 2021; Fei et al., 2021; Mukherjee et al., 2021).
The tagging-based method employs a sequence or grid tagging framework to extract the aspect and opinion terms, then combines them to predict the sentiment. The MRC-based method constructs a specific query for each factor in the triplet and extracts them through the answer to the query. The generation-based method transforms the ASTE task into a sequence generation problem and employs sequence-to-sequence (seq2seq) models. Then it decodes the triplets through a specifically designed algorithm. In this paper, we employ some representative methods in three categories and explore
## 3 Dataset
To construct a dataset that is more representative of real-world scenarios, we manually annotate a new dataset, named Diversified Multi-domain ASTE
(DMASTE). In this section, we first present a detailed description of the data collection and annotation process. Then we demonstrate the superiority of DMASTE, through a comparison with previous datasets in terms of key statistics and characteristics.
## 3.1 Collection
The data collection process of DMASTE is carried out in three stages: (1) We select the Amazon dataset (Ni et al., 2019) as our source of data due to its large volume of reviews from various regions around the world, which aligns with the goal of creating a dataset that is more representative of real-world scenarios. (2) We select four of the most popular domains from the Amazon dataset (Appendix A), and randomly sample a portion of the data for annotation. (3) To further enable comprehensive exploration of ASTE like domain adaptation settings, we additionally sample four additional domains and annotate a smaller portion of the data for testing in domain adaptation settings.
## 3.2 Annotation
Simplified Annotation Guidelines 2. Following Peng et al. (2020); Xu et al. (2020); Wu et al.
(2020), we annotate *(aspect term, opinion term,*
sentiment polarity) triplets for each review, which may include multiple sentences. An *aspect term* refers to a part or an attribute of products or services. Sometimes, the aspect term may not appear explicitly in the instance, i.e., implicit aspect terms
(Poria et al., 2014). We keep the triplets with implicit aspect terms. An *opinion term* is a word or phrase that expresses opinions or attitudes toward the aspect term. A *Sentiment polarity* is the sentiment type of the opinion term, which is divided into three categories: positive, negative, and neutral. It is worth noting that one aspect term can be associated with multiple opinion terms and vice versa.
Annotation Process. In order to ensure the quality of the annotation, we employed 14 workers (an2Due to the space limitation, we present detailed labeling guidelines in Appendix A.2.
| Domain | Electronics | Fashion | Beauty | Home | Book | Pet | Toy | Grocery |
|----------|---------------|-----------|----------|--------|--------|-------|-------|-----------|
| Train | 1395 | 851 | 535 | 1050 | 0 | 0 | 0 | 0 |
| Dev | 200 | 121 | 77 | 152 | 159 | 167 | 173 | 173 |
| Test | 399 | 245 | 154 | 301 | 325 | 340 | 354 | 353 |
| Term | Aspect terms | Opinion terms | Triplets |
|--------|----------------|-----------------|------------|
| IAA | 0.666 | 0.664 | 0.593 |
notators) to perform the annotation and 4 workers
(verifiers) to perform the quality-assurance sampling. Both groups are compensated based on the number of annotations. The annotation process is carried out using a train-trial-annotate-check procedure. (1) *Train*: All workers are trained on the task of annotation. (2) *Trial*: All workers try to annotate a small portion of the data to familiarize themselves with the task and to receive feedback on their annotations. Workers are required to reach 95% accuracy in labeling before proceeding to the next phase. Following previous work in data annotation for ABSA (Barnes et al., 2018), we evaluate the inter-annotator agreement (IAA) using the AvgAgr metric (Esuli et al., 2008). The IAA scores of annotated aspect terms, opinion terms, and sentiment triplets at the trial stage are shown in Table 3. These scores are slightly lower than previous ABSA datasets (Barnes et al., 2018), which can be attributed to the inherent complexity of DMASTE.
(3) *Annotate*: The annotators are responsible for annotating the data on a daily basis. (4) *Check*: The verifiers sampled 20% of the annotations each day to ensure accuracy. If the accuracy of an annotator is found to be below 95%, the data annotated by that worker for that day would be re-done until meeting the accuracy requirement. Otherwise, the annotations are accepted. To avoid false positives by the verifiers, we introduce an appeals mechanism. Please refer to the appendix A.2 for detailed information.
## 3.3 Statistics And Characteristics
To provide a deeper understanding of our proposed dataset, we present a series of statistics and characteristics in this section.
Various Lengths. Figure 2a illustrates the comparison of review length between Xu et al. (2020)
and DMASTE. We find that DMASTE contains reviews of more varied lengths. And Table 1 demonstrates that the average length of DMASTE is 3.6 times that of Xu et al. (2020).
Diverse Expressions. We quantify the expression diversity by counting the vocabulary size, ngrams in part-of-speech (POS) sequences and dependency parsing (DP) trees 3. Figure 2b shows the comparison of POS 2-grams between Xu et al.
(2020) and DMASTE. The results indicate that DMASTE contains reviews with diverse expressions. And Table 1 shows that the number of ngrams of POS and DP trees in DMASTE is significantly higher than those of other datasets, with the number of 4-grams being about 3 times that of other datasets. Besides, the vocabulary size of DMASTE
is about 3 times that of other datasets. These statistics indicate that the reviews in DMASTE are more diverse and complex.
More Aspect Types. Table 1 shows that DMASTE not only contains more explicit aspect terms than existing datasets but also includes annotations of implicit aspect terms, which constitute a large proportion. This provides a more comprehensive understanding of the target being discussed.
More Domains. Table 1 illustrates that our dataset comprises four times the number of domains compared to existing datasets. Furthermore, the first four domains presented in Table 2, which are characterized by a larger amount of annotated data, represent leading fields in e-commerce platforms (Appdenix A), providing a more realistic representation of popular topics. Additionally, we include an additional four domains with less annotated data to enable a more comprehensive analysis of the domain adaptation setting.
In summary, DMASTE is a more realistic and diversified dataset, providing a suitable testbed to verify the ability of ASTE methods in real-world scenarios.
## 4 Experiment Settings
To thoroughly understand DMASTE and provide some promising directions for future ASTE research, we conduct experiments in multiple settings. This section will first provide an overview of the different experimental setups, then introduce the evaluation metric, and finally, describe the models employed in the experiments.
## 4.1 Setups
We conduct a series of experiments under comprehensive training and testing setups:
- *In-domain*: we train and test the models with data from the same domain.
- *Single-source Cross-domain*: we train the models with data from a single-source domain and test them on a different target domain.
- *Multi-source Cross-domain*: we train the models with data from multiple source domains and test them on a different target domain.
In the cross-domain setting, we regard Electronics, Fashion, Beauty, Home as the source domain, and Book, Pet, Toy, Grocery as the target domain. More training details are shown in Appendix B.1.
## 4.2 Evaluation Metric
Following Xu et al. (2021), we employ the F1 score to measure the performance of different approaches.
All the experimental results are reported using the average of 5 random runs.
## 4.3 Models
This section presents the various approaches we evaluate on the DMASTE. We first introduce representative models for ASTE and employ them as our baseline models. Then for the single-source cross-domain setting, we utilize current methods for ASTE and integrate some of them with the adversarial training (Ganin et al., 2016), which is a widely-used technique in domain adaptation.
Baseline Models For ASTE. We implement several representative models from various frameworks, including tagging-based, MRC-based, and generation-based frameworks.
- Span-ASTE (Xu et al., 2021): a tagging-based method. It explicitly considers the span interaction between the aspect and opinion terms.
- BMRC (Chen et al., 2021): a MRC-based method. It extracts aspect-oriented triplets and opinion-oriented triplets. Then it obtains the final results by merging the two directions.
- BART-ABSA (Yan et al., 2021): a generationbased method. It employs a pointer network and generates indices of the aspect term, opinion term, and sentiment polarity sequentially.
- GAS (Zhang et al., 2021b): a generationbased method. It transforms the ASTE tasks into a text generation problem.
Baseline Models For Single-source Crossdomain Setting. We incorporate Span-ASTE and BMRC with adversarial training (AT), a common strategy in domain adaptation. Specifically, we apply a domain discriminator on different features for each method:
- BMRC+AT: We apply a domain discriminator on the token and [CLS] features. In this way, we can learn discriminative features by classifiers of the ASTE task and domain-invariant features by the domain discriminator induced by adversarial training.
- Span-ASTE+AT: As the extraction in this method is based on the prediction of the span and pair representation, which are derived from token representation, we apply adversarial learning to the token representations, similar to the BMRC+AT model.
## 5 Results
This section presents thorough analyses of the challenges of the DMASTE dataset and ASTE task, with the aim of better understanding these challenges and suggesting promising directions for future research. For each setting, we first show experimental results. Then we perform comprehensive analyses to investigate the challenges of the dataset and task, highlighting the limitations of current approaches. Finally, we provide promising directions for future research in this area.
## 5.1 In-Domain
The overall in-domain experimental results are shown in Table 4. We first explore the limitations of baseline models to better understand challenges introduced by DMASTE. Then we compare two representative models to find promising directions for future research.
| Method | Electronics | Beauty | Fashion | Home | Average |
|-----------|---------------|------------|------------|------------|-----------|
| BMRC | 41.95±0.34 | 38.57±0.97 | 44.87±0.69 | 41.18±0.66 | 41.64 |
| BART-ABSA | 43.38±1.37 | 41.13±1.14 | 43.89±0.82 | 40.56±1.04 | 42.24 |
| GAS | 47.10±0.64 | 44.32±0.52 | 47.80±1.07 | 47.22±1.13 | 46.61 |
| Span-ASTE | 47.86±0.74 | 46.46±0.66 | 50.38±0.68 | 49.14±0.41 | 48.46 |
Table 4: F1 scores of in-domain ASTE on DMASTE, and the best results are highlighted in bold font. Span-ASTE
is significantly better than other methods with p < 0.05.
![5_image_0.png](5_image_0.png)
![5_image_1.png](5_image_1.png)
F1 lengths.
Challenges of DM**ASTE**. Results of the indomain experiments, reported in Table 4, show a performance drop compared with results on existing datasets (Span-ASTE gets a 59.38 score for the Laptop domain in Xu et al. (2020) and 47.86 for the Electronics domain in DMASTE). To further investigate the challenges of DMASTE, we further analyze the performance under different review lengths, sentence complexity, and aspect types.
- **Length.** Figure 3a illustrates the relationship between the review length and the model performance on DMASTE. Results show that the performance of the models decreases as the length of the review increases. This indicates long reviews present a significant challenge for current models.
- **Sentence Complexity.** We quantify the sentence complexity by calculating the number of 2-grams in the part-of-speech (POS) sequence of the review text. Then we analyze the relationship between the extraction performance and sentence complexity. As shown in Figure 3b, we observe a decline in performance with an increase in sentence complexity. This highlights the challenges posed by the diversified expression presented in DMASTE.
- **Aspect Types.** In Figure 3c, we analyze the performance of models on triplets with implicit and explicit aspect terms. Results
demonstrate that the extraction of implicit aspect terms is more challenging than that of explicit aspect terms. The inclusion of both implicit and explicit aspect terms in DMASTE makes it more challenging for ASTE.
We can conclude that long reviews, complex sentences and implicit aspect terms make DMASTE a challenging dataset.
Model Comparison. Prior works have demonstrated that GAS (Zhang et al., 2021b) outperforms Span-ASTE (Xu et al., 2021) on the dataset of Xu et al. (2020). However, as shown in Table 4, Span-ASTE outperforms GAS on DMASTE. We conduct further analysis and discover that the reversal in performance can be attributed to the long instance lengths and complex sentence patterns presented in DMASTE. As illustrated in Figure 3, GAS achieves comparable results when the reviews are short and simple, but its performance declines more sharply when encountering long and complex reviews in comparison to Span-ASTE. This can be attributed to the generation-based nature of GAS, which is prone to forgetting the original text and misspellings or generating phrases that do not appear in the review when dealing with long reviews. In contrast, the tagging-based nature of Span-ASTE only identifies the start and end tokens for the aspect and opinion terms, which is less affected by the complexity and length of the
| Domain | BMRC | BART-ABSA | GAS | Span-ASTE | BMRC+AT | Span-ASTE+AT |
|----------------------|------------|-------------|------------|-------------|------------|----------------|
| Electronics→ Book | 33.74±0.64 | 35.43±0.61 | 35.57±0.76 | 40.36±1.59 | 32.56±1.57 | 39.58±0.70 |
| Beauty→ Book | 30.01±1.29 | 30.24±0.83 | 30.96±0.99 | 38.58±1.58 | 28.56±0.88 | 38.79±0.89 |
| Fashion→ Book | 31.71±1.37 | 32.45±1.77 | 36.26±0.64 | 39.88±0.74 | 30.80±1.59 | 39.60±0.69 |
| Home→ Book | 31.93±1.79 | 33.48±1.15 | 35.91±0.75 | 39.45±0.73 | 31.31±0.45 | 39.35±0.93 |
| Electronics→ Grocery | 39.68±0.70 | 40.39±1.28 | 39.16±1.25 | 45.36±1.12 | 40.15±0.55 | 43.90±0.47 |
| Beauty→ Grocery | 34.46±0.85 | 34.22±0.59 | 36.22±0.50 | 40.32±0.81 | 34.91±1.00 | 40.30±1.64 |
| Fashion→ Grocery | 37.18±0.50 | 37.27±1.08 | 40.13±0.38 | 43.41±0.83 | 37.56±0.29 | 42.29±0.79 |
| Home→ Grocery | 39.03±1.78 | 38.56±1.14 | 42.51±0.37 | 43.74±1.02 | 38.83±0.75 | 41.78±1.79 |
| Electronics→ Toy | 42.39±0.88 | 42.49±1.13 | 43.55±0.52 | 47.23±0.38 | 41.38±1.20 | 47.43±0.67 |
| Beauty→ Toy | 35.66±0.94 | 33.87±1.05 | 37.95±0.63 | 41.19±0.60 | 35.98±1.14 | 40.86±0.74 |
| Fashion→ Toy | 41.13±1.12 | 40.08±1.15 | 42.78±0.36 | 46.83±0.94 | 40.74±1.20 | 46.09±0.87 |
| Home→ Toy | 40.26±1.22 | 40.81±1.27 | 44.16±0.73 | 47.60±1.09 | 40.42±0.51 | 46.47±0.75 |
| Electronics→ Pet | 37.39±0.60 | 36.88±1.07 | 38.17±0.89 | 41.04±0.48 | 36.70±0.43 | 40.09±0.77 |
| Beauty→ Pet | 32.80±1.07 | 32.07±0.76 | 32.55±0.57 | 36.41±0.40 | 32.76±0.54 | 35.38±0.83 |
| Fashion→ Pet | 35.97±0.84 | 34.92±0.85 | 36.13±0.74 | 40.57±0.57 | 36.07±0.81 | 38.78±0.81 |
| Home→ Pet | 37.64±1.03 | 37.26±1.59 | 39.38±0.70 | 41.42±0.51 | 37.24±0.97 | 40.64±1.05 |
| Average | 36.31 | 36.28 | 38.21 | 42.09 | 36.00 | 41.33 |
reviews. This analysis provides insights for future generation-based methods: implementing a comparison algorithm between the generated output and the original input text and making modifications if they are mismatched.
## 5.2 Single-Source Cross-Domain
We present the overall single-source cross-domain experimental results in Table 5 (additional training details of adversarial training are shown in Appendix B.2). We then conduct a correlation analysis to investigate the factors that impact the transfer performance. Additionally, we analyze current domain adaptation strategies and provide insights for future research in this area.
Model Performance v.s. Domain Similarity.
Table 5 reveals that performance varies significantly when transferring from different source domains. For instance, the F1 score of Span-ASTE
on Home→ Toy is 47.60, which is 6.41 higher than that on Beauty→Toy. To gain insights into the factors that impact the transfer performance, we conduct the Pearson Correlation Analysis (Benesty et al., 2009) on the relationship between the model performance and domain similarity based on SpanASTE. Specifically, we fix the number of training data for each source domain to alleviate the impact of data volume. Following Liu et al. (2021),
we measure the domain similarity by computing vocabulary overlaps, using the top 1k most frequent words in each domain excluding stopwords.
Results in Figure 4 show that there is a positive correlation (with a 0.52 Pearson correlation coefficient) between model performance and domain similarity. This indicates that large domain discrepancy is a huge challenge for the single-source cross-domain ASTE task. Therefore, *reducing the* domain discrepancy is a promising way to improve the transfer performance in cross-domain ASTE.
With AT v.s. Without AT. We compare the performance of BMRC and BMRC+AT, as well as Span-ASTE and Span-ASTE+AT in Table 5. Results indicate that adversarial training has a negative impact on the performance of crossdomain ASTE. To investigate the cause of performance degradation, we visualize the representations learned from the models when transferring from the Electronics to the Pet domain in Figure 5. Compared with Span-ASTE, the features of different categories in Span-ASTE+AT are less discriminable, especially on the x-axis. We attribute the performance drop to feature collapse (Tang and Jia, 2020) induced by adversarial training. This occurs when the model focuses on learning domaininvariant features while ignoring the discriminability of each category. This issue is particularly pronounced in the ASTE task, as it requires finegrained discrimination for three factors. Future research in cross-domain ASTE could focus on developing methods that can learn domain-invariant features while maintaining the discriminability of each category.
![7_image_0.png](7_image_0.png)
![7_image_1.png](7_image_1.png)
## 5.3 Multi-Source Cross-Domain
In this section, we conduct multi-source crossdomain experiments with the number of source domains varying from 2 to 4. The results are shown in Table 6. Our results reveal instances of negative transfer. For example, F + H → Book > E + F + H → Book.
Negative transfer indicates that transferring from some domains could harm the learning of the target domain (Guo et al., 2018). To further investigate the phenomenon of negative transfer, we conduct an analysis on the relationship between domain similarity and transfer performance. We further compare the results with the domain similarity in Figure 4b. The results indicate that half of the negative transfers are observed when transferring from the source domains with the least similarity to the target domain (e.g., F + H → Toy > B + F + H →
Toy. Adding the Beauty domain leads to negative transfer and it is the least similar domain with Toy).
This suggests that domain similarity can serve as a useful guideline for selecting source domains in future multi-source cross-domain research.
## 6 Conclusion And Future Directions
We propose DMASTE, a manually-annotated ASTE dataset collected from Amazon. Compared with existing ASTE datasets, DMASTE contains reviews of various lengths, diverse expressions, more aspect types and covers more domains, which indicates DMASTE is a suitable testbed to verify the ability of ASTE methods in real-world scenarios. We explore the dataset in multiple scenarios, i.e., in-domain, single-source cross-domain, and multi-source cross-domain and provide some promising directions for future research. For the in-domain setting, we compare the results between existing datasets and DMASTE and find that the long reviews, complex sentences, and implicit aspect terms make DMASTE a challenging dataset.
For the single-source cross-domain setting, we observe that domain similarity and cross-domain performance are positively correlated. Furthermore, analysis of adversarial training shows that simply learning domain-invariant features may lead to feature collapse and result in the loss of task-specific
| Domain | Book | Grocery | Toy | Pet | Average |
|-----------|------------|------------|------------|------------|-----------|
| B + E | 40.94±1.14 | 45.94±0.56 | 48.07±0.75 | 41.70±0.79 | 44.16 |
| E + F | 40.59±0.86 | 46.14±0.98 | 50.10±0.83 | 41.90±0.79 | 44.68 |
| E + H | 41.08±1.50 | 45.34±0.50 | 48.97±0.69 | 42.70±0.41 | 44.52 |
| B + F | 41.13±0.51 | 44.60±0.73 | 46.81±0.91 | 41.03±0.59 | 43.39 |
| B + H | 41.31±1.18 | 44.31±0.84 | 47.12±0.87 | 41.67±0.93 | 43.60 |
| F + H | 42.40±0.31 | 45.97±0.56 | 49.11±0.74 | 43.12±0.70 | 45.15 |
| B + E + F | 41.34±0.73 | 46.28±0.78 | 49.44±1.08 | 41.12±0.58 | 44.80 |
| B + E + H | 41.21±0.48 | 45.39±0.49 | 49.33±0.32 | 43.48±1.12 | 44.85 |
| E + F + H | 41.44±0.71 | 45.39±0.80 | 50.24±0.57 | 43.59±1.06 | 45.17 |
| B + F + H | 41.73±0.38 | 45.40±0.63 | 48.55±0.71 | 43.46±0.19 | 44.79 |
| ALL | 41.83±0.57 | 46.07±0.47 | 50.16±0.49 | 43.62±1.32 | 45.42 |
knowledge. Therefore, it is important to design appropriate methods to reduce the domain discrepancy while preserving fine-grained task features for ASTE tasks. In multi-source domain adaptation, we find that most of the negative transfer comes from dissimilar source-target pairs, pointing out that domain similarity can be a domain selection guideline for future research. In conclusion, we hope that our dataset DMASTE and analyses will contribute to the promotion of ASTE research.
## Limitations
We analyze the limitations of this study from the following perspectives:
- The ASTE task extracts the sentiment triplets from a review, while the Aspect Sentiment Quad Prediction (ASQP) task adds an aspect category based on the triplets and provides more comprehensive information. Defining the aspect category for each domain is also hard work. Future work can take the aspect category into consideration.
- All the models are evaluated by F1 score, in which only exact matching can be considered correct. This metric can not differentiate between partially matching and completely mismatching and is not the best choice for a challenging dataset like DMASTE. Future work can include some partially matching metrics for this task.
- There is no specifically designed method for cross-domain ASTE. But we analyze the challenges of this task in detail. We are planning to design a new method for cross-domain ASTE
based on the analysis results.
## Acknowledgments
This work is done during Ting Xu's internship at ByteDance. We would like to thank the anonymous reviewers for their insightful comments. Zhen Wu is the corresponding author. Ting Xu would like to thank Siyu Long for his constructive suggestions.
This work is supported by National Natural Science Foundation of China (No. 62206126, 61936012, and 61976114).
## References
Jeremy Barnes, Toni Badia, and Patrik Lambert. 2018.
MultiBooked: A corpus of Basque and Catalan hotel reviews annotated for aspect-level sentiment classification. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation
(LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
Jacob Benesty, Jingdong Chen, Yiteng Huang, and Israel Cohen. 2009. Pearson correlation coefficient.
In *Noise reduction in speech processing*, pages 1–4.
Springer.
Hongjie Cai, Rui Xia, and Jianfei Yu. 2021. Aspectcategory-opinion-sentiment quadruple extraction with implicit aspects and opinions. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 340–350, Online.
Association for Computational Linguistics.
Shaowei Chen, Yu Wang, Jie Liu, and Yuelin Wang.
2021. Bidirectional machine reading comprehension for aspect sentiment triplet extraction. In *Proceedings Of The AAAI Conference On Artificial Intelligence*, volume 35, pages 12666–12674.
Andrea Esuli, Fabrizio Sebastiani, and Ilaria Urciuoli.
2008. Annotating expressions of opinion and emotion in the Italian content annotation bank. In *Proceedings of the Sixth International Conference on*
Language Resources and Evaluation (LREC'08),
Marrakech, Morocco. European Language Resources Association (ELRA).
Zhifang Fan, Zhen Wu, Xinyu Dai, Shujian Huang, and Jiajun Chen. 2019. Target-oriented opinion words extraction with target-fused neural sequence labeling.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2509–2518.
Hao Fei, Yafeng Ren, Yue Zhang, and Donghong Ji.
2021. Nonautoregressive encoder-decoder neural framework for end-to-end aspect-based sentiment triplet extraction. *IEEE Transactions on Neural Networks and Learning Systems*.
Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. 2016.
Domain-adversarial training of neural networks. The journal of machine learning research, 17(1):2096–
2030.
Jiang Guo, Darsh Shah, and Regina Barzilay. 2018.
Multi-source domain adaptation with mixture of experts. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4694–4703, Brussels, Belgium. Association for Computational Linguistics.
Doaa Mohey El-Din Mohamed Hussein. 2018. A survey on sentiment analysis challenges. *Journal of King* Saud University-Engineering Sciences, 30(4):330–
338.
Xin Li, Lidong Bing, Piji Li, and Wai Lam. 2019. A unified model for opinion target extraction and target sentiment prediction. In *Proceedings of the AAAI conference on artificial intelligence*, volume 33, pages 6714–6721.
Zihan Liu, Yan Xu, Tiezheng Yu, Wenliang Dai, Ziwei Ji, Samuel Cahyawijaya, Andrea Madotto, and Pascale Fung. 2021. Crossner: Evaluating crossdomain named entity recognition. In *Proceedings* of the AAAI Conference on Artificial Intelligence, volume 35, pages 13452–13460.
Yue Mao, Yi Shen, Chao Yu, and Longjun Cai. 2021. A
joint training dual-mrc framework for aspect based sentiment analysis. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pages 13543–13551.
Rajdeep Mukherjee, Tapas Nayak, Yash Butala, Sourangshu Bhattacharya, and Pawan Goyal. 2021. PASTE: A tagging-free decoding framework using pointer networks for aspect sentiment triplet extraction. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 9279–9291, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Jianmo Ni, Jiacheng Li, and Julian McAuley. 2019.
Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 188–197, Hong Kong, China. Association for Computational Linguistics.
Haiyun Peng, Lu Xu, Lidong Bing, Fei Huang, Wei Lu, and Luo Si. 2020. Knowing what, how and why: A
near complete solution for aspect-based sentiment analysis. In *Proceedings of the AAAI Conference on* Artificial Intelligence, volume 34, pages 8600–8607.
Maria Pontiki, Dimitrios Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Mohammad Al-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orphée De Clercq, et al. 2016. Semeval2016 task 5: Aspect based sentiment analysis. In *International workshop on semantic evaluation*, pages 19–30.
Maria Pontiki, Dimitrios Galanis, Harris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015.
Semeval-2015 task 12: Aspect based sentiment analysis. In Proceedings of the 9th international workshop on semantic evaluation (SemEval 2015), pages 486–
495.
Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. SemEval-2014 task 4: Aspect based sentiment analysis. In *Proceedings of the 8th* International Workshop on Semantic Evaluation (SemEval 2014), pages 27–35, Dublin, Ireland. Association for Computational Linguistics.
Soujanya Poria, Erik Cambria, Lun-Wei Ku, Chen Gui, and Alexander Gelbukh. 2014. A rule-based approach to aspect extraction from product reviews.
In Proceedings of the second workshop on natural language processing for social media (SocialNLP),
pages 28–37.
Michael T Rosenstein, Zvika Marx, Leslie Pack Kaelbling, and Thomas G Dietterich. 2005. To transfer or not to transfer. *NIPS*.
Hui Tang and Kui Jia. 2020. Discriminative adversarial domain adaptation. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pages 5940–5947.
Zhen Wu, Chengcan Ying, Fei Zhao, Zhifang Fan, Xinyu Dai, and Rui Xia. 2020. Grid tagging scheme for aspect-oriented fine-grained opinion extraction.
In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 2576–2585, Online.
Association for Computational Linguistics.
Lu Xu, Yew Ken Chia, and Lidong Bing. 2021. Learning span-level interactions for aspect sentiment triplet extraction. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 4755–4766, Online. Association for Computational Linguistics.
Lu Xu, Hao Li, Wei Lu, and Lidong Bing. 2020.
Position-aware tagging for aspect sentiment triplet extraction. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing
(EMNLP), pages 2339–2349, Online. Association for Computational Linguistics.
Hang Yan, Junqi Dai, Tuo Ji, Xipeng Qiu, and Zheng Zhang. 2021. A unified generative framework for aspect-based sentiment analysis. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 2416–2429, Online.
Association for Computational Linguistics.
Chen Zhang, Qiuchi Li, Dawei Song, and Benyou Wang.
2020. A multi-task learning framework for opinion triplet extraction. In *Findings of the Association* for Computational Linguistics: EMNLP 2020, pages 819–828, Online. Association for Computational Linguistics.
Wenxuan Zhang, Yang Deng, Xin Li, Yifei Yuan, Lidong Bing, and Wai Lam. 2021a. Aspect sentiment quad prediction as paraphrase generation. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 9209–
9219, Online and Punta Cana, Dominican Republic.
Association for Computational Linguistics.
Wenxuan Zhang, Xin Li, Yang Deng, Lidong Bing, and Wai Lam. 2021b. Towards generative aspect-based sentiment analysis. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2:
Short Papers), pages 504–510.
Yue Zhang, Quan Guo, and Parisa Kordjamshidi. 2021c.
Towards navigation by reasoning over spatial configurations. In Proceedings of Second International Combined Workshop on Spatial Language Understanding and Grounded Communication for Robotics, pages 42–52, Online. Association for Computational Linguistics.
## A Dataset A.1 Dataset Details
Domain Selection Process. First, we select the most popular four domains based on the reports of Top selling products on Amazon and the Internet.
For these four domains, we annotate more data to enable research on ASTE with more realistic and representative reviews. Then, to enable more comprehensive research like the cross-domain setting, we randomly select another four domains and annotate fewer data for testing in the cross-domain setting.
Sampled Reviews. We sample two reviews for each domain and demonstrate them in Table 7. The first column displays the review text. And the second column shows the extracted triplets (aspect terms, opinion terms, sentiment polarity).
Annotator Compensation. As described in Section 3.2, we hire 18 workers in the annotation and follow strict quality control to ensure the quality of the annotation. We ensure the privacy right of workers is respected in the annotation process. All workers have been paid above the local minimum wage and agree to use the dataset for research purposes.
License. DMASTE will be publicly avaliable under the terms of CC BY-NC-SA 4.0 License.
The dataset is for academic use, which is consistent with its origination Amazon (Ni et al., 2019)
dataset.
## A.2 Annotation Guidelines
The purpose of this annotation guideline is to provide a consistent and accurate method for annotating reviews. The task of the annotator is to identify triplets comprising of the following types:
- **Aspect Term.** An Aspect term refers to a part or an attribute of products or services.
Sometimes, the aspect term may not appear explicitly in the instance, i.e., implicit aspect
(Poria et al., 2014). We keep the triplets with implicit aspects.
- **Opinion Term.** An opinion term is a word or phrase that expresses opinions or attitudes towards aspects.
- **Sentiment Polarity.** Sentiment polarity is the sentiment type of the opinion term, which
is divided into three categories: positive, negative, and neutral.
Annotation Guidelines for Aspect Terms. Aspect terms can be divided into three types: (1) a part of the product, (2) an attribute of the product or store, (3) an attribute of a part of the product.
Note, if the review expresses opinions toward some targets which is not explicitly mentioned, we annotate them as implicit aspect (NULL). If we have two aspect terms split with "and", ",", etc, label them separately. Examples of aspect terms: battery, price, outlook, taste, customer support.
## Annotation Guidelines For Opinion Terms.
Opinion terms usually expressed opinions or attitudes. (1) Label as much information as possible about "opinion of the product, user experience, an emotion about buying the product", etc. (2) The information selected should not change the original semantics. (3) Opinion terms are usually adjectives/adverbs, and they can be a phrase or a single word. (4) "definitely" or "very" alone are not enough to be labeled as an opinion term. (5) If we have two opinion terms split with "and", ",", etc, label them separately. Examples of opinion terms:
so good, expensive, love, unfriendly.
Annotation Guidelines for Sentiment Polarities.
Sentiment polarities can be divided into three types:
(1) positive: the review expresses a positive attitude toward a specific aspect, (2) negative: the review expresses a negative attitude toward a specific aspect, (3) neutral: the review expresses no obvious positive or negative attitudes but it expresses an attitude. Examples of sentiment polarities: (price, expensive, negative), (battery life, long, positive),
(outlook, just okay, neutral).
Abandon Cases. If the review is just some meaningless word, e.g. hhhh, ahhhh, oooooo, then abandon it.
Annotations Steps. When annotating the reviews, please follow the steps below:
1. Start annotating the reviews from left to right one by one using appropriate order from aspect terms, opinion terms to sentiment polarities.
2. Please make sure to select the right text boundary for aspect terms and opinion terms before clicking the "Mark this" button.
3. Please make sure to select a suitable category for sentiment polarities.
4. After double-checking the current triplet, click the "complete" button to proceed to the next triplet.
5. If the review contains more than one triplet, go through the same steps to annotate them and click "Submit" to proceed to the next review.
6. If the review satisfies the condition of abandon case, click "abandon" and proceed to the next review.
1. If the verifier agrees with the annotation of the annotator, the annotation is added to the dataset. Otherwise, the verifier will annotate the data and send it to the annotator as feedback. If the annotator accepts the feedback, the annotation from the verifier is added to the dataset.
2. If the annotator disagrees with the feedback of the verifier, the annotator will appeal. The corresponding data will be discussed by all verifiers and annotators until they reach an agreement. After that, the new annotation is added to the dataset.
## B.2 **Detailed Results For Adversarial Training** B Experiments B.1 Training Details
{1, 10, 30, 50, 100, 500, 700, 800, 1000, 1500} for Span-ASTE + AT and BMRC + AT, respectively.
For each value, we conduct experiments with 5 random seeds and set α by the F1 score on the development set. We set α = 10 for Span-ASTE+AT
and α = 800 for BMRC+AT. The parameter search costs about 1000 GPU hours.
We search the hyper-parameter α for adversarial learning on the development set. Detailed experiment results are shown in Table 8 and Table 9. We can observe that adversarial training is parametersensitive.
Appeal Process. During the check phase, the verifiers are not always right. Specifically, we introduce an Appeal process to ensure the high quality of dataset in the check phase. The detailed process is as follows:
From the above process, verifiers sometimes play a role in annotators and they work together to ensure the high quality of dataset. We will add the above details in the revision.
We utilize the pretrained model provided by HuaggingFace and run all the experiments on NVIDIA
A100 GPU with pytorch. For the hyper-parameters of the baseline models, we follow the original settings in their paper (Chen et al., 2021; Yan et al., 2021; Zhang et al., 2021c; Xu et al., 2021). For adversarial training, we follow the implementation in Ganin et al. (2016). One hyper-parameter in this method is α, the ratio of training the generator to the discriminator. We search α in {1, 3, 5, 7, 10, 15, 20, 30, 50, 100} and
| Electronics Good case . It has minimum padding that makes the phone feel secure but too much to be burdensome . The card slots are convenient . | (card slots, convenient, POS); (case, Good, POS); (padding, minimum, POS). |
|----------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------|
| They work very well . The connections are nice and tight and | (connections, nice, POS); (connections, tight, POS); (length |
| I loose no quality over the length of the cord . | of the cord, loose no quality over, POS); (work, very well, POS). |
| Beauty This smells so good exactly like the body lotion and spray mist . It is a soft femine fragrance not too overpowering . I would highly recommend ! | (NULL, would highly recommend, POS); (fragrance, not too overpowering, POS); (fragrance, soft femine, POS); (smells, so good, POS). |
| Revision products are awesome . I switch away from this in the winter months since it s not as moisturizing as an emollient based cleanser . | (Revision, awesome, POS); (NULL, not as moisturizing, NEG). |
| Home I think i just started a new hobby , this Victorinox is pretty | (Victorinox, pretty awesome, POS). |
| awesome . I might have to get a few more designs . Too big and cumbersome to be useful . If design were downsized a couple of inches , they would be perfect . Much too big for my salad bowls . | (NULL, Much too big, NEG); (NULL, Too big, NEG); (NULL, cumbersome, NEG). |
| Fashion the fabric is very , very thin . It should be underwear , not a | (NULL, see through, NEG); (fabric, very thin, NEG); (fits, |
| regular shirt . fits too snug and is see through | too snug, NEG). |
| What can I say that you don t already know ? Classic Chucks . One of the best , most versatile , and most durable sneakers out there . | (Chucks, Classic, POS); (sneakers, One of the best, POS); (sneakers, most durable, POS); (sneakers, most versatile, POS). |
| Book It was a good book , haven t read it before bow . | Very |
| interesting , with good suspense and humor mixed in ! 10 / 10 | (NULL, Very interesting, POS); (book, good, POS); (humor, good, POS); (suspense, good, POS). |
| Challenging to read but worth it . Persevere to the end of the | (NULL, Challenging to read, NEU); (NULL, worth, POS). |
| book . Ask God to open your mind and heart . Grocery These are the best Slim Jims every , I have tried them , but | (Slim Jims, best, POS); (taste, Greatest, POS); (texture, Greatest, POS). |
| the taste , the texture of the Honey BBQ is the Greatest . A good almond extract , and I like that it s organic . I took | (almond extract, good, POS); (organic, like, POS); (price, |
| off one star for high price . Should not cost that much . | high, NEG). |
| Pet This is a good prefilter . The clear plastic adapters are a little | (clear plastic adapters, a little brittle, NEG); (prefilter, good, |
| brittle so be careful when attaching to your filter input . | POS). |
| Bought these as cat treats . Have to break them up , but my | (NULL, goes crazy for, POS); (tub, lasting forever, POS). |
| cat goes crazy for them , and this tub is lasting forever . Toy Super simple dynamics , and a great game for friends , family | (NULL, Easy to learn, POS); (dynamics, Super, POS); (dynamics, simple, POS); (game, great, POS); (variety of play, |
| , kids , etc . ! Easy to learn , and variety of play is good . | good, POS). |
| My 1 year old loved this for his birthday . Such a fun , easy | (NULL, loved, POS); (toy, easy, POS); (toy, fun, POS). |
| toy . I would buy this again and again . Table 7: Sampled reviews for each domain in DMASTE. | |
α 1 3 5 7 10 15 20 30 50 100
E→K 33.79 37.46 38.36 38.74 38.74 39.46 35.68 38.99 35.17 35.70
B→K 31.55 35.20 37.95 36.63 37.83 36.77 35.37 34.99 36.47 28.41
F→K 32.15 39.83 39.74 39.93 39.95 39.38 38.88 39.86 40.13 38.93
H→K 35.15 35.64 40.12 38.12 39.96 38.10 38.45 37.40 32.99 38.49
E→G 39.86 44.71 44.86 45.11 44.39 45.66 44.23 44.59 44.46 45.48
B→G 18.92 40.90 40.69 42.16 41.01 40.39 40.49 41.41 40.51 41.49
F→G 38.23 43.98 43.36 43.88 44.89 42.65 44.40 44.23 43.97 43.78
H→G 27.58 44.18 45.00 43.74 44.38 44.46 44.25 44.15 43.52 44.12
E→T 33.21 47.82 48.65 48.48 47.23 47.95 47.57 48.26 48.64 47.33
B→T 30.05 41.54 41.12 43.13 42.65 42.03 41.76 41.35 41.52 42.45
F→T 33.63 46.47 47.12 47.65 47.16 47.73 48.00 47.31 46.53 47.08
H→T 42.09 48.27 48.24 48.30 48.44 48.93 46.19 47.01 48.41 48.91
E→P 33.51 41.18 41.95 41.19 41.43 41.64 41.81 42.01 42.36 42.06
B→P 18.03 35.96 37.05 35.71 36.92 36.08 36.26 36.72 37.15 36.10
F→P 35.44 39.30 39.87 39.98 40.00 40.07 40.24 39.61 40.41 40.40
H→P 35.41 41.28 41.55 42.24 41.35 42.51 41.41 37.80 41.25 41.48
AVE 32.41 41.48 42.23 42.19 **42.27** 42.11 41.56 41.61 41.47 41.39
α 1 10 30 50 100 500 700 800 1000 1500
E→K 0.75 2.22 6.18 16.90 6.61 28.08 31.86 31.67 30.64 30.23
B→K 0.30 1.04 9.57 17.53 26.56 27.93 28.62 27.62 29.46 29.88
F→K 0.22 7.66 8.05 14.82 27.14 30.34 32.09 30.72 31.21 31.09
H→K 0.37 9.81 11.22 18.96 22.74 26.66 27.92 31.23 28.52 28.93
E→G 2.43 12.42 24.97 19.98 35.43 38.71 39.14 39.48 39.05 38.13
B→G 2.13 20.84 33.99 32.90 32.63 33.92 33.84 32.88 32.89 32.78
F→G 2.42 37.73 27.40 34.71 38.03 38.98 37.99 37.56 38.62 38.20
H→G 4.55 30.18 32.95 37.50 37.72 37.62 37.86 38.48 37.65 37.85
E→T 4.57 31.09 35.18 38.12 42.73 43.35 42.74 42.86 43.01 42.59
B→T 1.65 7.45 23.93 33.70 35.53 36.41 35.39 35.46 35.90 35.73
F→T 4.10 37.50 38.43 40.38 41.23 42.66 41.65 42.26 42.21 42.58
H→T 5.11 35.05 41.77 43.45 43.22 44.54 44.43 44.27 43.79 44.72
E→P 10.47 29.64 37.53 36.99 38.14 37.62 37.78 38.59 38.03 37.65
B→P 3.15 26.35 30.14 32.04 31.64 31.55 32.29 32.44 33.25 32.20
F→P 8.24 33.79 36.40 37.27 36.35 36.72 37.20 36.79 36.82 37.19
H→P 17.51 32.01 39.06 37.24 37.80 38.30 38.33 38.87 39.24 39.29
AVE 4.25 22.17 27.30 30.78 33.34 35.84 36.20 **36.32** 36.27 36.19
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 7 for Limitations.
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section Abstract for Abstract and Section 1 for Introduction.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 For Dataset.
✓ B1. Did you cite the creators of artifacts you used?
Section 3 for Dataset and Appendix A for Dataset.
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Appendix A for Dataset.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Appendix A for Dataset.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 3 for Dataset and Appendix A for Dataset.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 3 for Dataset and Appendix A for Dataset.
## C ✓ **Did You Run Computational Experiments?**
Section 4 for Experiment Settings, Section 5 for Results, and Appendix B for Experiments.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix B for Experiments.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4 for Experiment Settings and Appendix B for Experiments.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4 for Experiment Settings, Section 5 for Results, and Appendix B for Experiments.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix B for Experiments.
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Section 3 For Dataset And Appendix A For Dataset.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Appendix A for Dataset.
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Appendix A for Dataset.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
omarov-kondrak-2023-grounding | Grounding the Lexical Substitution Task in Entailment | https://aclanthology.org/2023.findings-acl.179 | Existing definitions of lexical substitutes are often vague or inconsistent with the gold annotations. We propose a new definition which is grounded in the relation of entailment; namely, that the sentence that results from the substitution should be in the relation of mutual entailment with the original sentence. We argue that the new definition is well-founded and supported by previous work on lexical entailment. We empirically validate our definition by verifying that it covers the majority of gold substitutes in existing datasets. Based on this definition, we create a new dataset from existing semantic resources. Finally, we propose a novel context augmentation method motivated by the definition, which relates the substitutes to the sense of the target word by incorporating glosses and synonyms directly into the context. Experimental results demonstrate that our augmentation approach improves the performance of lexical substitution systems on the existing benchmarks. | # Grounding The Lexical Substitution Task In Entailment
Talgat Omarov and **Grzegorz Kondrak**
Alberta Machine Intelligence Institute Department of Computing Science University of Alberta, Edmonton, Canada
{omarov,gkondrak}@ualberta.ca
## Abstract
Existing definitions of lexical substitutes are often vague or inconsistent with the gold annotations. We propose a new definition which is grounded in the relation of entailment; namely, that the sentence that results from the substitution should be in the relation of mutual entailment with the original sentence. We argue that the new definition is well-founded and supported by previous work on lexical entailment. We empirically validate our definition by verifying that it covers the majority of gold substitutes in existing datasets. Based on this definition, we create a new dataset from existing semantic resources. Finally, we propose a novel context augmentation method motivated by the definition, which relates the substitutes to the sense of the target word by incorporating glosses and synonyms directly into the context.
Experimental results demonstrate that our augmentation approach improves the performance of lexical substitution systems on the existing benchmarks.
## 1 Introduction
Lexical substitution is the task of finding appropriate replacements for a target word in a given context sentence. This task was first introduced as an application-oriented alternative to word sense disambiguation (WSD) that does not depend on a predefined sense inventory (McCarthy, 2002).
Lexical substitution has been applied in various tasks, such as word sense induction (Amrami and Goldberg, 2018), lexical relation extraction (Schick and Schütze, 2020), and text simplification (AlThanyyan and Azmi, 2021).
Lexical substitution continues to be an important area of research in NLP. For instance, it can be used to probe the ability of NLP models to capture contextual meaning, as substitutes can vary depending on the sense of the word. Furthermore, professional writers often need good substitutes in a specific context, which cannot be found by simply looking them up in a thesaurus.
Many definitions used in the literature to describe lexical substitution are either vague or inconsistent with the evaluation datasets. For example, Hassan et al. (2007) and Roller and Erk
(2016) leave the criteria for lexical substitution to the discretion of human annotators. Studies such as Sinha and Mihalcea (2009, 2014) and Hintz and Biemann (2016) require substitutes to be synonyms, which creates a discrepancy with established lexical substitution benchmarks that allow annotators to provide slightly more general terms (hypernyms)
(McCarthy, 2002; Kremer et al., 2014). For example, while the two words are not synonyms, *vehicle* can be considered as a valid substitute for car if the context clearly refers to a car. Most prior work requires substitutes to preserve the meaning of the original sentence (McCarthy and Navigli, 2007; Giuliano et al., 2007; Szarvas et al., 2013a,b; Kremer et al., 2014; Melamud et al., 2015; Garí Soler et al., 2019; Zhou et al., 2019; Lacerra et al., 2021; Michalopoulos et al., 2022; Seneviratne et al., 2022; Wada et al., 2022). However, as we show in this work, not all gold substitutes necessarily preserve the meaning of the sentence taken in isolation.
We propose a definition of lexical substitution that is more precise and well-founded. Our aim is not only to address the inconsistency in the literature but also to align the task definition with established evaluation datasets. We draw on insights from natural language inference (NLI), which provides a framework for understanding the semantic relationship between sentences and words. According to our definition, the sentence that results from a lexical substitution must be in the relation of mutual entailment with the original sentence. For example, *position* is a suitable substitute for *post* in the sentence "I occupied a *post* in the treasury" because the two sentences entail each other. The entailment criterion takes into account the implicit 2854 background knowledge (Dagan et al., 2005), which allows lexical substitution to generalize over simple synonym replacement, encompassing a wider range of semantic relations, such as hypernymy and meronymy (Geffet and Dagan, 2005).
The classification of the entailment relation between two sentences requires the identification of the target word's sense. For example, *position* is a proper substitute for *post* only if it is used in the sense corresponding to "job in an organization".
Based on this observation, we develop an augmentation method that helps to ground the substitutes by incorporating glosses and synonyms of the target word's sense directly into the context. Since the word sense is latent, the method leverages a WSD system to account for the probabilities of each candidate sense.
We show the effectiveness of the proposed definition and our augmentation method through experiments on existing lexical substitution datasets.
Our analysis indicates that the proposed definition encompasses gold substitutes that could not previously be explained by existing definitions. Furthermore, our empirical evaluation shows that our augmentation method improves the performance on the lexical substitution benchmarks by up to 4.9 F1 points, surpassing the previous state-of-the-art models in certain settings.
The main contributions of this paper are as follows.
1. We propose a task formulation for lexical substitution that is grounded in entailment and show its suitability for existing datasets.
2. We construct a new dataset for lexical substitution, which demonstrates the applicability of our theoretical definition.
3. By facilitating the identification of the latent word senses, our method improves results on existing lexical substitution benchmarks.
## 2 Related Work On Lexical Substitution
In this section, we review the available datasets and provide a brief overview of the prior work.
## 2.1 Datasets
The first English lexical substitution dataset was created by McCarthy and Navigli (2007) for SemEval-2007 Task 10. The dataset, which we refer to as SE07, consists of 2003 context sentences with one target word per sentence. The authors instructed the annotators to provide substitutes that preserve the original meaning of the sentence.
Biemann (2012) constructed Turk Bootstrap Word Sense Inventory (TWSI), which encompasses a sense inventory induced by lexical substitutes for 1,012 common English nouns. It was created by annotating 25,851 sentences with lexical substitutes using Amazon Mechanical Turk.
Kremer et al. (2014) introduced CoInCo, an "allword" lexical substitution dataset, in which all content words in a corpus are annotated with substitutions. According to the authors, the all-word setting provides a more realistic distribution of target words and their senses. It is important to note that both McCarthy and Navigli (2007) and Kremer et al. (2014) explicitly allowed annotators to provide phrases or more general words when they could not think of a good substitute.
The SWORDS dataset (Lee et al., 2021) is based on the CoInCo dataset but uses a slightly different annotation approach. Instead of relying on annotators to come up with substitutes from their memory, they were provided with a list of candidate substitutes from a thesaurus and CoInCo for a given target word. The dataset contains 1,250 context sentences, each with a single target word.
The task of lexical substitution is not limited to the English language, and datasets have also been created for other languages, including Italian
(Toral, 2009), and German (Cholakov et al., 2014);
the latter dataset includes sense annotations (Miller et al., 2016). In addition, a cross-lingual dataset from SemEval-2010 Task 2 (Mihalcea et al., 2010)
combines English target words and sentences with Spanish gold substitutes. While multilingual and cross-lingual tasks are beyond the scope of this paper, our proposed grounding of lexical substitution in entailment is also applicable in those settings.
## 2.2 Methods
Numerous methods have been proposed for lexical substitution. Early methods retrieve candidate substitutes from lexical resources such as WordNet
(Miller, 1995). Approaches that rank candidate substitutes are based on web queries (Zhao et al., 2007; Martinez et al., 2007; Hassan et al., 2007), ngram models (Giuliano et al., 2007; Yuret, 2007; Dahl et al., 2007; Hawker, 2007; Hassan et al., 2007),
latent semantic analysis (Giuliano et al., 2007; Hassan et al., 2007), delexicalized features (Szarvas et al., 2013a), and word embeddings (Melamud et al., 2015, 2016; Roller and Erk, 2016).
Pre-trained neural language models (NLMs) and their contextualized embedding representations have greatly advanced the state of the art in lexical substitution. Garí Soler et al. (2019) use contextual embeddings from ELMo (Peters et al., 2018)
to calculate the similarity between the target and candidate substitutes. To fix the bias toward the target word, Zhou et al. (2019) apply a dropout embedding policy that partially masks the target word's BERT embedding. Arefyev et al. (2020)
propose combining a masked language model probability score with a contextual embedding-based proximity score. Lacerra et al. (2021) propose training a supervised sequence-to-sequence model that takes a context sentence containing a target word as input, and outputs a comma-separated list of substitutes. Wada et al. (2022) employ contextualized and decontextualized embeddings (the average contextual representation of a word in multiple contexts). Yang et al. (2022) inject information about the target word into context and use BERT to generate initial candidates. Furthermore, they train RoBERTa on the Multi-Genre Natural Language Inference corpus (Williams et al., 2018) to further refine the ranking by semantic similarity scores.
Similar to our method, two recent proposals leverage knowledge from WordNet to improve the quality of substitutes retrieved from pretrained neural language models. Michalopoulos et al. (2022)
inject synonyms by linearly interpolating their contextual embeddings, while we insert synonyms and glosses directly into the context. Seneviratne et al.
(2022) and the other approach of Michalopoulos et al. (2022) use knowledge from WordNet only at the ranking stage after candidates had been generated from an NLM. In contrast, our approach injects WordNet information into the NLM's input from the beginning, which may produce more relevant candidates initially.
## 3 Entailment-Based Lexical Substitution
In this section, we provide background information about entailment, present the theoretical formulation of the proposed definition, and demonstrate its suitability through empirical validation.
## 3.1 Entailment
A premise (P) *entails* a hypothesis (H) if a human reader of P would infer that H is most likely true (Dagan et al., 2005). Entailment is denoted as P |= H. For example, the premise "the water is boiling" entails the hypothesis "the water is hot". This definition of entailment assumes a common human understanding of language, as well as common background knowledge. Entailment is a directional relation, which means that P |= H does not imply H |= P. For example, "I own a car" entails "I own a vehicle" but not the other way around.
However, if P |= H and H |= P then H and P
are semantically equivalent: P ≡ H (MacCartney, 2009).
Lexical entailment is a subset of textual entailment that specifically examines the relationship between a premise and a hypothesis where the two differ by a single word or phrase (Kroeger, 2018). It has previously been established that words in context often entail their synonyms, hypernyms, and, in some cases, holonyms (Geffet and Dagan, 2005).
## 3.2 Lexical Substitution Definition
We anchor our definition of lexical substitution in textual entailment. Let Ct be a context sentence that contains a target word t, and let Cw be the same context sentence where t is replaced with a word or phrase w. We define w as a lexical substitute for t in Ctif and only if Ct and Cw entail each other:
$$\operatorname{LexSub}(C_{t},w)\Leftrightarrow C_{t}\vDash C_{w}\wedge C_{w}\vDash C_{t}$$
This binary definition can be adapted to the task of substitute generation by considering a finite set of all words and short phrases. Specifically, the output of the generation task would consist of all candidate substitutions that satisfy the above condition.
While entailment is recognized as an important substitutability criterion within the NLI community
(Geffet and Dagan, 2004; Zhitomirsky-Geffet and Dagan, 2009), it has been largely overlooked in lexical substitution. A notable exception is Giuliano et al. (2007), who recognize the significance of the relationship between lexical substitution and entailment. Although their mutual textual entailment criterion is similar to ours, we disagree with their conclusion that the mutual equivalence requirement restricts substitutes to synonyms only.
Next, we show that this criterion not only extends beyond word synonymy, but also naturally allows for the integration of common-sense reasoning and knowledge about the world.
## 3.3 Semantic Equivalence
In this section, we explicitly spell out our assumptions about the relationship between lexical substitution and the criterion of meaning preservation.
The first proposition states that all contextual synonyms are good substitutes.
Proposition 1. If t and w express the same concept in C then w *is a lexical substitute for* t in C.
Proof. When we replace a target word with another word that expresses the same concept in a given context, the truth conditions of the sentence do not change. This is because the truth conditions are determined by the relationships between concepts that are expressed in the sentence. Therefore, the mutual entailment between Cw and Ct must hold, which by our definition implies that w is a lexical substitute for t in the context C.
If words express the same concept in some context, they must belong to the same wordnet synset
(Hauer and Kondrak, 2020). A wordnet is a lexical ontology in which words are grouped into sets of synonyms (synsets), each representing a distinct concept (Miller, 1995). The suitability of contextual synonyms with lexical substitution provides a theoretical basis for the use of wordnets to generate substitutes (McCarthy and Navigli, 2007).
The implication in Proposition 1 is unidirectional; that is, not all substitutes must be synonyms.
Proposition 2. If w *is a lexical substitute for* t in C then t and w do not necessarily represent the same concept in C .
As evidence that the reverse implication does not hold, we provide a counter-example. Consider the following sentence from the SWORDS dataset:
"Those hospitals were not for us. They were for an expected *invasion of Japan."* where the word planned is among the gold substitutes for the target word *expected*. While the verbs *expect* and *plan* are not synonyms, this particular substitution is correct considering the broader historical context of World War II, which has been provided in previous sentences. From the point of view of the US military, the invasion was both planned and expected. Thus, although the two words do not express the same concept, the corresponding sentences entail each other.
Taken together, these two propositions imply that synonymy within a narrow context is a sufficient but not a necessary condition for mutual entailment between the sentences. Thus, mutual
![3_image_0.png](3_image_0.png)
entailment provides a more flexible criterion for substitution than contextual synonymy. The mutual entailment criterion captures the nuances of lexical substitution better than the definitions based on strict meaning preservation because it takes into account both context and background knowledge.
This is essential to identify a wider range of substitutions in scenarios such as the ones described above. Furthermore, this definition may facilitate the job of annotators by breaking down lexical substitution into two concrete entailment conditions, which are easier to reason about.
## 3.4 Empirical Validation
To validate our proposed definition, we perform a manual analysis of a random sample of 50 gold substitutes from the SWORDS dataset which are labeled "acceptable" (i.e., high quality). Our objective is to assess whether these substitutes are adequately covered by our definition. We provide a detailed description of our manual analysis procedure and examples in Appendix A.
The summary of our manual analysis is presented in Table 1. It shows that our definition successfully covers 41 (82%) of gold substitutes. All 9 substitutes that are not covered by our definition are also not covered by the existing definition of meaning preservation. This finding matches our Proposition 1, which implies that a word that is not a lexical substitute (i.e., mutual entailment does not hold), cannot express the same concept (i.e. there is a difference in meaning). We conclude that those 9 instances represent annotation errors (rows 1-9 in Table 5).
We also observe that the 6 substitutes that are not covered by the existing definition of meaning preservation are covered by our definition (rows 10-15 in Table 5). For example, consider the context "Energy Secretary Bill Richardson went to Baghdad in 1995 while a representative for New Mexico," where *elected official* is a gold substitute for *representative*. The new sentence induced by the substitution does not preserve the original meaning because not every elected official is a congress representative. However, the sentence provides enough historical context to validate the substitution. This observation matches our Proposition 2, which states that lexical substitutes need not represent the same concept.
## 3.5 Dataset Induced By Entailment
Based on Proposition 1, we use synonyms from existing semantic resources to construct a new lexical substitution dataset, which we refer to as WNSub.1 This is because replacing target words with synonyms is guaranteed to generate sentences that satisfy the mutual entailment criterion.
To generate the WNSub dataset, we use SemCor
(Miller et al., 1994), the largest corpus manually annotated with WordNet senses. The sense annotations are crucial for our dataset, as contextual synonyms are defined in relation to word senses rather than word lemmas. For example, for the sentence "can your insurance company aid *you in* reducing administrative costs?" we retrieve substitutes *help* and *assist* from the WordNet synset that corresponds to the annotated sense of the target word aid. In total, we obtain 146,303 sentences with 376,486 substitutes.
Although contextual synonyms do not necessarily capture all aspects of lexical substitution, WNSub can be used for pre-training supervised systems, in combination with other datasets. We verify this claim experimentally in Section 5.3.
## 4 Sense-Based Augmentation Method
In this section, we describe our sense-based augmentation method for lexical substitution. Our approach is based on the observation that knowing the sense of the target word is key to deciding whether a substitution induces an entailment relation between the two sentences. For example, *position* is a proper substitute for *post* in some context only if the latter is used in the sense corresponding to
"job in an organization". We posit that inserting sense glosses directly into the context will help lexical substitution systems identify substitutes that are mutually entailed by the original context. Our hypothesis is supported by prior findings that this technique works well for semantic tasks such as WSD (Huang et al., 2019) and idiomaticity detection (Hauer et al., 2022).
1Dataset and code available at https://github.com/
talgatomarov/wnsub Our method is based on two stand-alone modules: a WSD system and a lexical substitution generation system. The method is sufficiently flexible to incorporate new systems as the state of the art on those two tasks continues to improve. The only requirement is that these systems output probabilities for each candidate sense or substitute.
The formula below is used to combine the probabilities from the two systems. Figure 1 shows an example of soft constraint augmentation. Let Ct be a context sentence containing the target word t, w be a candidate substitute, and s ∈ *senses*(t) be a candidate sense for t in Ct. Under the assumption that the substitutes depend on the sense of the target word, the conditional probability P(w|Ct) can be derived by marginalizing the senses out:
$$P(w|C_{t})=\sum_{s\in s e n s(t)}P(w|C_{t},s)\times P(s|C_{t})$$
In the equation above, we model P(s|Ct) using a WSD system, and obtain P(w|Ct, s) from a lexical substitution system that operates on the context augmented with sense information.
Motivated by the work of Luan et al. (2020),
we experiment with two types of constraint: hard and soft. In the hard-constraint approach, a WSD
system is used to identify the most likely sense of the target word, which is effectively assigned the probability of 1.0. Next, the glosses and synonyms corresponding to this sense are retrieved from a lexical resource and inserted in parentheses after the target word. This augmented context is then passed to a lexical substitution system, which generates substitutes along with their substitute probabilities.
In the soft-constraint approach, for each possible sense of the target word, a WSD system first computes its probability, the context is augmented with glosses and synonyms of that sense, and finally a lexical substitution system generates and assigns final probabilities to candidate substitutes using the formula above.
Soft constraint allows grounding of lexical substitutes in the target word senses, while taking into account the probability of each candidate sense. We posit that considering all candidate senses and their probabilities should work better than committing to a single most likely sense, by improving robustness against WSD errors. In addition, in some cases, the context itself may not provide enough information to reliably disambiguate the sense of the target word. We verify this hypothesis experimentally in the next section.
![5_image_0.png](5_image_0.png)
## 5 Experiments
In this section, we investigate the effectiveness of our dataset and augmentation method in improving the performance of lexical substitution systems.
The experiments were conducted on a machine with two NVIDIA GeForce RTX 3090 video cards.
## 5.1 Evaluation Datasets And Metrics
We evaluate our methods using test splits from two benchmarks: the SemEval 2007 Task 10 (SE07)
(McCarthy and Navigli, 2007) and SWORDS (Lee et al., 2021). Each benchmark has its own set of evaluation metrics, which we outline here.
The SE07 benchmark uses *best* and oot metrics, which measure the quality of the system's top-1 and top-10 predictions, respectively. These metrics assign weights to gold substitutes based on how frequently annotators selected them. The benchmarks also use *mode* variations of *best* and oot, which evaluate performance against a single gold substitute chosen by the majority of annotators, provided that such a majority exists. We consider the *mode* metrics theoretically problematic because they disregard instances without an annotation majority, and because many instances could involve multiple equally valid substitutes, The SWORDS benchmark uses F
10 scores, the harmonic mean of precision and recall, calculated with respect to the system's *top 10 predictions* and acceptable (F
10 a
) or *conceivable* (F
10 c) gold substitutes. A candidate is labeled as *conceivable* if it was selected by at least one annotator and *acceptable* if selected by at least half of the annotators.
Furthermore, the benchmark includes two evaluation settings: lenient and strict. In the lenient setting, any system-generated substitutes that are not in SWORDS are removed. In the strict setting, all system-generated substitutions are considered.
The lenient settings were originally proposed to compare against "oracle" baselines whose predictions are guaranteed to be in SWORDS. We posit that the lenient setting provides an unreliable basis for measuring lexical substitution performance in real-world scenarios because systems are not provided with a predefined vocabulary of possible words that can occur during testing.
All existing evaluation metrics require a ranking mechanism to select top-k system predictions, which is problematic for two reasons. First, there is a lack of clarity on objective criteria for ranking substitute words. For example, in the sentence
"the FBI said *that explicit conversations about the* scheme had been recorded", it is debatable whether disclosed is a better substitute for *said* than *declared*. Second, the existing metrics reward systems for generating a specific number of candidates, regardless of how many substitutes actually exist.
This may result in an inaccurate evaluation of the system's ability to generate correct substitutes.
Despite these limitations, our method builds upon existing systems that have been optimized using these metrics, and therefore we use them for the evaluation. However, we posit that it would be beneficial for future lexical substitution systems to consider metrics that do not depend on substitution ranking, such as the standard F1 score calculated with respect to all predicted substitutes.
## 5.2 Comparison Systems
On the SE07 dataset, we compare against KU
(Yuret, 2007), supervised learning (Szarvas et al.,
2013a), BERT for lexical substitution (Zhou et al., 2019), GeneSis (Lacerra et al., 2021), LexSubCon
(Michalopoulos et al., 2022), and CILex (Seneviratne et al., 2022). The reported results are from the last two papers.
On the SWORDS dataset, we compare against GPT-3 with "in-context" learning (Brown et al., 2020), a commercial lexical substitution system Word-Tune2, and a BERT baseline which produces substitutes according to the masked language modeling head (Devlin et al., 2019). The results of these models are reported by Lee et al. (2021). We also include the results of Yang et al. (2022).
## 5.3 Wnsub Experiments
The objective of the experiments with WNSub (Section 3.5) is to determine whether the dataset could enhance the performance of supervised sequenceto-sequence lexical substitution models when used as a pre-training dataset.
The first model is our own implementation of a simple supervised sequence-to-sequence (seq2seq)
model. It takes a context where the target word is tagged with two brace tokens, and generates a substitute word or phrase as a prediction. We use beam search to generate multiple likely substitutes. Our underlying seq2seq model is *bart-large* (Lewis et al., 2020). We utilize the same set of hyperparameters for both pre-training and fine-tuning.
Specifically, we train our model for 19,000 steps with a batch size of 64 and a learning rate of 4e-5.
The second model is GeneSis (Lacerra et al.,
2021), also a sequence-to-sequence model. Unlike our model, GeneSis filters out words that are not in WordNet, and it incorporates a fallback strategy in the oot setting. When the model generates fewer than 10 substitutes, additional words are retrieved from WordNet, and ranked using NLM embeddings. To assess the model's performance based solely on annotated data, we disable both lexicon filtering and fallback strategy. We use their default settings for both pre-training and fine-tuning.
In order to evaluate the contribution of the WNSub dataset, we compare a baseline approach with a WNSub pre-training approach. In the baseline approach, we train the systems on existing datasets, specifically the CoInCo and TWSI datasets, following the methodology of Lacerra et al. (2021). In the pre-training approach (+ WNSub), we first pretrain the systems on WNSub, and then fine-tune on the union of the CoInCo and TWSI datasets. Our evaluation is on the SE07 test set only, as SWORDS
includes instances from CoInCo.
The results in Table 2 indicate that pre-training on the WNSub dataset improves the results of both supervised models. The only exception is GeneSis in the oot setting, in which there is no penalty for attempting to fill all 10 candidate substitutes, even if some of them are incorrect. However, when evaluated using the standard F1 score that considers all predictions, pre-training does improve GeneSis' performance from 26.8 to 27.7 points. This suggests that the F1 metric may better reflect the quality of the systems when they are not forced to produce a fixed number of substitutes.
| Models | best | oot |
|-------------------------------------------|--------|-------|
| Yuret (2007) | 12.9 | 46.2 |
| Szarvas et al. (2013a) | 15.9 | 48.8 |
| Zhou et al. (2019) | 20.3 | 55.4 |
| GeneSis (2021) | 21.6 | 52.4 |
| Michalopoulos et al. (2022) | 21.1 | 51.3 |
| Seneviratne et al. (2022) | 23.3 | 56.3 |
| WNSub experiments seq2seq baseline | 9.7 | 44.0 |
| + WNSub | 10.7 | 44.8 |
| GeneSis* | 19.2 | 34.3 |
| + WNSub | 19.6 | 34.1 |
| Augmentation experiments LexSubGen (2020) | 21.7 | 55.1 |
| + soft constraint | 21.9 | 57.9 |
| Wada et al. (2022) | 21.8 | 58.0 |
| + soft constraint | 22.0 | 58.4 |
Table 2: Results on the SE07 test set. *With disabled vocabulary filtering and fallback strategy.
## 5.4 Augmentation Experiments
We evaluate the effectiveness of our sense-based augmentation method (Section 4) on both SE07 and SWORDS test sets, using two different lexical substitution systems. We retrieve synonyms and glosses for the target word from WordNet 3.0 via NLTK (Bird et al., 2009).
As our base WSD system, we use ConSec3
(Barba et al., 2021). The model jointly encodes the context containing the target word and all possible sense definitions, and extracts the span of the definition that best fits the target word. ConSec also leverages the senses assigned to nearby words to improve performance. Since the original implementation outputs only predicted senses, we changed the source code to capture the probability scores for all candidate senses.
As our primary base lexical substitution system, we use LexSubGen4(Arefyev et al., 2020). Their best-performing model injects the target word information by combining the substitute probability from XLNet (Yang et al., 2019) with the contextual embedding similarity of the substitute to the target word.
To test the generalizability of our approach, we also apply our augmentation method to the model of Wada et al. (2022). Their model is based on the similarity of contextualized and decontextualized 3https://github.com/SapienzaNLP/consec 4https://github.com/Samsung/LexSubGen
| 10 | | |
|--------------------|------|------|
| Models | F | |
| GPT-3 | 22.7 | 36.3 |
| WordTune | 22.8 | 33.6 |
| BERT | 19.2 | 30.3 |
| Yang et al. (2022) | 18.3 | 28.7 |
| LexSubGen (2020) | 19.4 | 29.9 |
| + soft constraint | 21.5 | 34.8 |
| Wada et al. (2022) | 24.5 | 39.9 |
| + soft constraint | 24.7 | 42.5 |
embeddings, which represent the average contextual representation of a word in multiple contexts.
The results on SE07 in Table 2 show that our approach leads to improvements over both base models. In the oot setting, the result of 57.9 represents a 5% relative gain, while the result of 58.4 is higher than any reported in prior work.
Similarly, the results on SWORDS in Table 3 demonstrate consistent improvements over both base systems in the strict evaluation settings. The results in the last row represent the new state of the art on the SWORDS dataset.
## 5.5 Ablation And Analysis
Table 4 presents the results of an ablation study on the SWORDS dataset, which we conducted to assess the impact of various components of our augmentation method. Removal of both synonyms and glosses simultaneously is equivalent to the LexSubGen baseline shown in the first row. Our principal model, soft constraint, is in the row 3. The results in rows 2 and 3 show that hard constraint is less effective than soft constraint. This is because the former relies on a single most likely sense, which makes it less robust to WSD errors. The results in rows 4 and 5 indicate that glosses provide more information than synonyms. Overall, the ablation study provides further evidence that augmentation improves lexical substitution systems.
We also performed a manual error analysis on a randomly selected sample of 20 instances from SWORDS. We did not find any instances where the augmentation results in missed substitutes, as compared to the base model. On the other hand, we found one instance where the augmentation helps to identify two gold substitutes, *overlook* and *neglect*,
as substitutes for *miss*. We note that these three verbs share a WordNet synset which is glossed as
"leave undone or leave out."
Models F
10
a F
10
c
LexSubGen 19.4 29.9
+ hard constraint 21.2 34.2
+ soft constraint **21.5 34.8**
- gloss 20.6 32.7 - synonyms 21.1 33.6
## 6 Conclusion
We consider the new entailment-based definition and formalization of lexical substitution as the principal contribution of this paper. The new WNSub dataset and the context augmentation method are inspired by our theoretical analysis. The experiments demonstrate that both innovations lead to performance improvements on the standard lexical substitution benchmarks, which we interpret as empirical validation of the theoretical approach.
In the future, we plan to explore the generalizability of our approach to other languages, as well as cross-lingual lexical substitution.
## 7 Limitations
Our augmentation approach is model-agnostic, meaning that it can be applied to any lexical substitution model. However, this also means that it inherits any limitations of the underlying model.
For example, in the case of LexSubGen, it can only produce single-token words as substitutes which might prevent it from generating valid longer words or phrases as substitutes that are present in the gold annotations. Additionally, the substitutes are also limited by the vocabulary of the pre-trained language model that LexSubGen uses.
Another limitation of our method is that it relies on the presence of target words in a lexical resource, such as WordNet, together with their synonyms and glosses. If this sense-specific information is missing from the lexical resource, it cannot be used to improve the performance of a lexical substitution system.
Our entailment criterion for lexical substitution is defined for the binary classification task, rather than for generation or ranking tasks. However, if a probabilistic model is used to determine the probability of mutual entailment between sentences, this score can be utilized to rank substitutes if necessary.
As explained in Section 3.2, the binary definition can also be adapted to the generation task by iterating over candidate substitutes.
## 8 Ethics Statement
It is important to acknowledge that our approach utilizes a large language model trained on data from the internet, which may contain inherent biases.
Therefore, it is crucial to exercise caution when applying this model in applications such as writing assistance, where it may have a direct impact on individuals or groups.
We also have considered ethical considerations in the construction and use of our evaluation dataset.
The dataset we used was automatically constructed from publicly available datasets and lexical resources. To the best of our knowledge, the original datasets do not contain offensive content. The names included in the datasets are from texts that are already publicly available. We did not use the help of third-party annotators to produce any additional data. The datasets we used did not include any license agreements or terms of use. The only requirement was to cite the dataset papers, which we have done in Section 3.5. Additionally, we intend to release our dataset publicly to encourage further research and development in the field of lexical substitution.
## Acknowledgements
This research was supported by the Natural Sciences and Engineering Research Council of Canada
(NSERC), and the Alberta Machine Intelligence Institute (Amii).
## References
Suha S. Al-Thanyyan and Aqil M. Azmi. 2021. Automated text simplification: A survey. ACM Comput.
Surv., 54(2).
Asaf Amrami and Yoav Goldberg. 2018. Word sense induction with neural biLM and symmetric patterns.
In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 4860–4867, Brussels, Belgium. Association for Computational Linguistics.
Nikolay Arefyev, Boris Sheludko, Alexander Podolskiy, and Alexander Panchenko. 2020. Always keep your target in mind: Studying semantics and improving performance of neural lexical substitution. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1242–1255, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Edoardo Barba, Luigi Procopio, and Roberto Navigli.
2021. ConSeC: Word sense disambiguation as continuous sense comprehension. In *Proceedings of the*
2021 Conference on Empirical Methods in Natural Language Processing, pages 1492–1503, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Chris Biemann. 2012. Turk bootstrap word sense inventory 2.0: A large-scale resource for lexical substitution. In *Proceedings of the Eighth International* Conference on Language Resources and Evaluation
(LREC'12), pages 4038–4042, Istanbul, Turkey. European Language Resources Association (ELRA).
Steven Bird, Ewan Klein, and Edward Loper. 2009. *Natural Language Processing with Python*, 1st edition.
O'Reilly Media, Inc.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. In *Advances in Neural Information Processing Systems*,
volume 33, pages 1877–1901. Curran Associates, Inc.
Kostadin Cholakov, Chris Biemann, Judith EckleKohler, and Iryna Gurevych. 2014. Lexical substitution dataset for German. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 1406–1411, Reykjavik, Iceland. European Language Resources Association (ELRA).
Ido Dagan, Oren Glickman, and Bernardo Magnini.
2005. The pascal recognising textual entailment challenge. In *Proceedings of the First International Conference on Machine Learning Challenges: Evaluating Predictive Uncertainty Visual Object Classification, and Recognizing Textual Entailment*, MLCW'05, page 177–190, Berlin, Heidelberg.
Springer-Verlag.
George Dahl, Anne-Marie Frassica, and Richard Wicentowski. 2007. SW-AG: Local context matching for English lexical substitution. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 304–307, Prague, Czech Republic. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Aina Garí Soler, Anne Cocos, Marianna Apidianaki, and Chris Callison-Burch. 2019. A comparison of context-sensitive models for lexical substitution. In Proceedings of the 13th International Conference on Computational Semantics - Long Papers, pages 271–282, Gothenburg, Sweden. Association for Computational Linguistics.
Maayan Geffet and Ido Dagan. 2004. Feature vector quality and distributional similarity. In *COLING*
2004: Proceedings of the 20th International Conference on Computational Linguistics, pages 247–253, Geneva, Switzerland. COLING.
Maayan Geffet and Ido Dagan. 2005. The distributional inclusion hypotheses and lexical entailment.
In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05),
pages 107–114, Ann Arbor, Michigan. Association for Computational Linguistics.
Claudio Giuliano, Alfio Gliozzo, and Carlo Strapparava.
2007. FBK-irst: Lexical substitution task exploiting domain and syntagmatic coherence. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 145–148, Prague, Czech Republic. Association for Computational Linguistics.
Samer Hassan, Andras Csomai, Carmen Banea, Ravi Sinha, and Rada Mihalcea. 2007. UNT: SubFinder:
Combining knowledge sources for automatic lexical substitution. In *Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval2007)*, pages 410–413, Prague, Czech Republic. Association for Computational Linguistics.
Bradley Hauer, Seeratpal Jaura, Talgat Omarov, and Grzegorz Kondrak. 2022. UAlberta at SemEval 2022 task 2: Leveraging glosses and translations for multilingual idiomaticity detection. In *Proceedings of* the 16th International Workshop on Semantic Evaluation (SemEval-2022), pages 145–150, Seattle, United States. Association for Computational Linguistics.
Bradley Hauer and Grzegorz Kondrak. 2020. Synonymy = translational equivalence. *arXiv preprint* arXiv:2004.13886.
Tobias Hawker. 2007. USYD: WSD and lexical substitution using the Web1T corpus. In *Proceedings* of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 446–453, Prague, Czech Republic. Association for Computational Linguistics.
Gerold Hintz and Chris Biemann. 2016. Language transfer learning for supervised lexical substitution. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 118–129, Berlin, Germany. Association for Computational Linguistics.
Luyao Huang, Chi Sun, Xipeng Qiu, and Xuanjing Huang. 2019. GlossBERT: BERT for word sense disambiguation with gloss knowledge. In *Proceedings*
of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 3509–3514, Hong Kong, China. Association for Computational Linguistics.
Gerhard Kremer, Katrin Erk, Sebastian Padó, and Stefan Thater. 2014. What substitutes tell us - analysis of an
"all-words" lexical substitution corpus. In *Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics*,
pages 540–549, Gothenburg, Sweden. Association for Computational Linguistics.
Paul Kroeger. 2018. *Analyzing meaning: An introduction to semantics and pragmatics*. Language Science Press.
Caterina Lacerra, Rocco Tripodi, and Roberto Navigli.
2021. GeneSis: A Generative Approach to Substitutes in Context. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10810–10823, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Mina Lee, Chris Donahue, Robin Jia, Alexander Iyabor, and Percy Liang. 2021. Swords: A benchmark for lexical substitution with improved data coverage and quality. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4362–4379, Online. Association for Computational Linguistics.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Yixing Luan, Bradley Hauer, Lili Mou, and Grzegorz Kondrak. 2020. Improving word sense disambiguation with translations. In *Proceedings of the 2020* Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4055–4065, Online. Association for Computational Linguistics.
Bill MacCartney. 2009. *Natural language inference*.
Ph.D. thesis, Stanford University.
David Martinez, Su Nam Kim, and Timothy Baldwin.
2007. MELB-MKB: Lexical substitution system based on relatives in context. In *Proceedings of the* Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 237–240, Prague, Czech Republic. Association for Computational Linguistics.
Diana McCarthy. 2002. Lexical substitution as a task for WSD evaluation. In *Proceedings of the ACL-02* Workshop on Word Sense Disambiguation: Recent Successes and Future Directions, pages 089–115. Association for Computational Linguistics.
Diana McCarthy and Roberto Navigli. 2007. SemEval2007 task 10: English lexical substitution task. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 48–53, Prague, Czech Republic. Association for Computational Linguistics.
Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016.
context2vec: Learning generic context embedding with bidirectional LSTM. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pages 51–61, Berlin, Germany. Association for Computational Linguistics.
Oren Melamud, Omer Levy, and Ido Dagan. 2015. A
simple word embedding model for lexical substitution. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 1–7, Denver, Colorado. Association for Computational Linguistics.
George Michalopoulos, Ian McKillop, Alexander Wong, and Helen Chen. 2022. LexSubCon: Integrating knowledge from lexical resources into contextual embeddings for lexical substitution. In *Proceedings* of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 1226–1236, Dublin, Ireland. Association for Computational Linguistics.
Rada Mihalcea, Ravi Sinha, and Diana McCarthy. 2010.
SemEval-2010 task 2: Cross-lingual lexical substitution. In *Proceedings of the 5th International Workshop on Semantic Evaluation*, pages 9–14, Uppsala, Sweden. Association for Computational Linguistics.
George A. Miller. 1995. Wordnet: A lexical database for english. *Commun. ACM*, 38(11):39–41.
George A. Miller, Martin Chodorow, Shari Landes, Claudia Leacock, and Robert G. Thomas. 1994. Using a semantic concordance for sense identification.
In Human Language Technology: Proceedings of a Workshop held at Plainsboro, New Jersey, March 8-11, 1994.
Tristan Miller, Mohamed Khemakhem, Richard Eckart de Castilho, and Iryna Gurevych. 2016. Senseannotating a lexical substitution data set with ubyline.
In *Proceedings of the Tenth International Conference* on Language Resources and Evaluation (LREC'16),
pages 828–835, Portorož, Slovenia. European Language Resources Association (ELRA).
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In *Proceedings of the 2018 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237,
New Orleans, Louisiana. Association for Computational Linguistics.
Stephen Roller and Katrin Erk. 2016. PIC a different word: A simple model for lexical substitution in context. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1121–1126, San Diego, California.
Association for Computational Linguistics.
Timo Schick and Hinrich Schütze. 2020. Rare words:
A major problem for contextualized embeddings and how to fix it by attentive mimicking. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):8766–8774.
Sandaru Seneviratne, Elena Daskalaki, Artem Lenskiy, and Hanna Suominen. 2022. CILex: An investigation of context information for lexical substitution methods. In *Proceedings of the 29th International Conference on Computational Linguistics*, pages 4124–
4135, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Ravi Sinha and Rada Mihalcea. 2009. Combining lexical resources for contextual synonym expansion. In Proceedings of the International Conference RANLP2009, pages 404–410, Borovets, Bulgaria. Association for Computational Linguistics.
Ravi Sinha and Rada Mihalcea. 2014. Explorations in lexical sample and all-words lexical substitution.
Natural Language Engineering, 20(1):99–129.
György Szarvas, Chris Biemann, and Iryna Gurevych.
2013a. Supervised all-words lexical substitution using delexicalized features. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1131–1141, Atlanta, Georgia. Association for Computational Linguistics.
György Szarvas, Róbert Busa-Fekete, and Eyke Hüllermeier. 2013b. Learning to rank lexical substitutions.
In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*, pages 1926–1932, Seattle, Washington, USA. Association for Computational Linguistics.
Antonio Toral. 2009. The lexical substitution task at evalita 2009. In *Proceedings of EVALITA Workshop,*
11th Congress of Italian Association for Artificial Intelligence, Reggio Emilia, Italy.
Takashi Wada, Timothy Baldwin, Yuji Matsumoto, and Jey Han Lau. 2022. Unsupervised lexical substitution with decontextualised embeddings. In *Proceedings of the 29th International Conference on Computational Linguistics*, pages 4172–4185, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Adina Williams, Nikita Nangia, and Samuel Bowman.
2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.
Xi Yang, Jie Zhang, Kejiang Chen, Weiming Zhang, Zehua Ma, Feng Wang, and Nenghai Yu. 2022. Tracing text provenance via context-aware lexical substitution. *Proceedings of the AAAI Conference on* Artificial Intelligence, 36(10):11613–11621.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019.
Xlnet: Generalized autoregressive pretraining for language understanding. In *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc.
Deniz Yuret. 2007. KU: Word sense disambiguation by substitution. In *Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval2007)*, pages 207–214, Prague, Czech Republic. Association for Computational Linguistics.
Shiqi Zhao, Lin Zhao, Yu Zhang, Ting Liu, and Sheng Li. 2007. HIT: Web based scoring method for English lexical substitution. In *Proceedings of the* Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 173–176, Prague, Czech Republic. Association for Computational Linguistics.
Maayan Zhitomirsky-Geffet and Ido Dagan. 2009. Bootstrapping distributional feature vector quality. *Computational Linguistics*, 35(3):435–461.
Wangchunshu Zhou, Tao Ge, Ke Xu, Furu Wei, and Ming Zhou. 2019. BERT-based lexical substitution.
In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 3368–
3373, Florence, Italy. Association for Computational Linguistics.
## A Manual Dataset Analysis
In this section, we describe our manual analysis procedure. It consists of the following steps.
1. We randomly select 50 gold substitutes along with their corresponding contexts and target words.
2. For each sampled gold substitute, we generate a new sentence by replacing the original target with the gold substitute.
3. For each generated sentence pair, we check the following criteria:
(a) Whether the original sentence entails the new sentence.
(b) Whether the new sentence entails the original sentence.
(c) Whether the new sentence fully preserves the meaning of the original sentence.
To identify textual entailment, we follow the definition outlined in Section 3.1. We verify the meaning preservation criterion by assessing whether the target word and its substitute candidate represent the same concept within the given context.
This analysis, which is summarized in Section 3.4, allows us to compare our definition, which is based on mutual entailment, with the existing definition of meaning preservation. The results of our analysis are presented in Table 5.
| Context Ct | Substitute w | Cw |= Ct | Ct |= Cw | Meaning preserved |
|--------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------|------------|------------|---------------------|
| I am glad to be out of the favor-trading scene for half a minute | moment | No | No | No |
| It didn't seem like we had a lot of holes to fill. It's good, it | award | No | Yes | No |
| gives us something we didn't have and we didn't lose much. Walking out of the church , a little gust of cold air caught me | icy | No | Yes | No |
| by surprise. "It's a long way to anywhere worth going," he said. | declare | No | Yes | No |
| Taste , hearing and touch became a single blur , and I do not | uncovered | No | No | No |
| know if my eyes were open. My favorite thing about her is her straightforward honesty | uncomplicated | No | No | No |
| and that her favorite food is butter. I had almost forgotten the body lying with broken neck on the | damage | Yes | No | No |
| cathedral's hard tiles A black hallway opened into a space like a cathedral. The vault rose into obscurity above me, and a massive window stood ahead of me. | large church | Yes | No | No |
| A black hallway opened into a space like a cathedral. The vault rose into obscurity above me, and a massive window stood ahead of me. | house of god | Yes | No | No |
| "Excuse me," I said, ignoring Nepthys' warning look, | mention | Yes | Yes | No |
| Please, walk this way. | proceed | Yes | Yes | No |
| They were for (an expected invasion of Japan) | planned | Yes* | Yes | No |
| Energy Secretary Bill Richardson went to Baghdad in 1995 | elected | offi | | |
| cial | Yes | Yes* | No | |
| while a representative for New Mexico. Then I felt a tug on the back of my shirt and noticed that Amy | see | Yes | Yes | No |
| was following me. This story might be interesting. Does it have anything to do | scalp | Yes | Yes | No |
| with why your head is shaved? I swear. They all thought I was Steve Martin . | vow | Yes | Yes | Yes |
| ...many clinical psychologists already receive inadequate training | insufficient | Yes | Yes | Yes |
| Now, will you tell me how you know my family? | have | knowl | | |
| edge of | Yes | Yes | Yes | |
| It's okay, you can trust him. | alright | Yes | Yes | Yes |
| ...you know some way to locate the undead, don't you ? | have | Yes | Yes | Yes |
| But in some areas, the seabass are being overfished. | location | Yes | Yes | Yes |
| The Persian Gulf War destroyed much of the country's medical | devastate | Yes | Yes | Yes |
| infrastructure That was very kind of her. | exceedingly | Yes | Yes | Yes |
| ...considers prescriptive authority a logical extension of psychologists' role as health-care providers | rational | Yes | Yes | Yes |
| ...we simply want to discover whether this individual is in fact, | find | Yes | Yes | Yes |
| a vampire. But they liked the way (Jose) has played and they're giving | enjoy | Yes | Yes | Yes |
| him a chance. Karnes had his own Jeep, and went to the beach | head | Yes | Yes | Yes |
| Ochoa has played in the majors for five different teams starting | commence | Yes | Yes | Yes |
| in 1995 The new plant is part of IBM 's push to gain a strong lead in | formidable | Yes | Yes | Yes |
| chip-making. He ran down a hallway and slipped behind one of the doors | doorway | Yes | Yes | Yes |
| "What would convince you to part with it?" She considered | think over | Yes | Yes | Yes |
| this , looking him over. One expert, whose job is so politically sensitive that he spoke | cite | Yes | Yes | Yes |
| on condition that he wouldn't be named or quoted, said . . . We've had genies , indentured sorcerers , even golems and the | intermittent | Yes | Yes | Yes |
| occasional elf. RxP opponents charge the APA with pushing its prescriptionprivileges agenda without adequately assessing support for it in the field. | sufficiently | Yes | Yes | Yes |
| Comey said Tokhtakhounov had three residences in Italy | state | Yes | Yes | Yes |
| It pulled back around his fingertips, which bore things that | object | Yes | Yes | Yes |
| might have been nails or claws. Hall is to return to Washington on April 22 | arrive back | Yes | Yes | Yes |
| Moreover , he said , technology now exists for stealing corporate secrets. | in addition | Yes | Yes | Yes |
| 2866 | | | | |
| 35 thin fingers waved lazily like seaweed. | narrow | Yes | Yes | Yes |
|----------------------------------------------------------------------------------------------------------------|---------------|-------|-------|-------|
| The door took us to the bottom of a flight of wooden stairs. | bring | Yes | Yes | Yes |
| It's exhausting to talk to those people. | folk | Yes | Yes | Yes |
| I bet my friend can tell you everything you need to know. | feel the necessity for | Yes | Yes | Yes |
| That's a question you learn not to ask here. | in this place | Yes | Yes | Yes |
| If he got your girl, she's probably dead! | most likely | Yes | Yes | Yes |
| Rep. Tony Hall, D-Ohio, urges the UN to allow a freer flow of | transmission | Yes | Yes | Yes |
| food and medicine into Iraq. "I have made it a policy of mine never to serve seabass," said | market | Yes | Yes | Yes |
| Hahn. "I refuse to sell it." "You idiots! You woke it up?" | blockhead | Yes | Yes | Yes |
| She will have reunions of sorts with her famous kitchen in the | forthcoming | Yes | Yes | Yes |
| next few weeks. Still unresolved is Sony's effort to hire producers J. Peters and | give job to | Yes | Yes | Yes |
| P. Guber to run the studio. Ochoa will join the club today in Anaheim before tonight's | enter | Yes | Yes | Yes |
| game against the Yankees. | | | | |
| Table 5: The table contains a random sample of 50 substitutes from the SWORDS dataset. The target words are in | | | | |
Table 5: The table contains a random sample of 50 substitutes from the SWORDS dataset. The target words are in bold. * denotes that the specified entailment holds if we assume relevant background knowledge.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 7
✓ A2. Did you discuss any potential risks of your work?
Section 8
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✓ A4. Have you used AI writing assistants when working on this paper?
We used ChatGPT for the assistance purely with the language of the paper (paraphrasing and polishing our original ideas). We thoroughly checked that the generated output does not contain any new ideas.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
Section 3.5 describes the dataset we constructed. Section 5 describes the datasets and models we used.
✓ B1. Did you cite the creators of artifacts you used?
Sections 3.5 and 5
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section 8
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 8
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Section 8
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 3.5
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 5.1
## C ✓ **Did You Run Computational Experiments?** Section 5
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
No response.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Sections 5.3 and 5.4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Sections 5.3 and 5.4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 5.3 and 5.4 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
xin-etal-2023-operator | Operator Selection and Ordering in a Pipeline Approach to Efficiency Optimizations for Transformers | https://aclanthology.org/2023.findings-acl.180 | There exists a wide variety of efficiency methods for natural language processing (NLP) tasks, such as pruning, distillation, dynamic inference, quantization, etc. From a different perspective, we can consider an efficiency method as an operator applied on a model. Naturally, we may construct a pipeline of operators, i.e., to apply multiple efficiency methods on the model sequentially. In this paper, we study the plausibility of this idea, and more importantly, the commutativity and cumulativeness of efficiency operators. We make two interesting observations from our experiments: (1) The operators are commutative{---}the order of efficiency methods within the pipeline has little impact on the final results; (2) The operators are also cumulative{---}the final results of combining several efficiency methods can be estimated by combining the results of individual methods. These observations deepen our understanding of efficiency operators and provide useful guidelines for building them in real-world applications. | # Operator Selection And Ordering In A Pipeline Approach To Efficiency Optimizations For Transformers
Ji Xin, Raphael Tang, Zhiying Jiang, Yaoliang Yu, and **Jimmy Lin**
David R. Cheriton School of Computer Science University of Waterloo
{ji.xin,r33tang,zhiying.jiang,yaoliang.yu,jimmylin}@uwaterloo.ca
## Abstract
There exists a wide variety of efficiency methods for natural language processing (NLP)
tasks, such as pruning, distillation, dynamic inference, quantization, etc. From a different perspective, we can consider an efficiency method as an *operator* applied on a model.
Naturally, we may construct a pipeline of operators, i.e., to apply multiple efficiency methods on the model sequentially. In this paper, we study the plausibility of this idea, and more importantly, the *commutativity* and *cumulativeness* of efficiency operators. We make two interesting observations from our experiments:
(1) The operators are commutative—the order of efficiency methods within the pipeline has little impact on the final results; (2) The operators are also cumulative—the final results of combining several efficiency methods can be estimated by combining the results of individual methods. These observations deepen our understanding of efficiency operators and provide useful guidelines for building them in real-world applications.
## 1 Introduction
Natural language processing (NLP) tasks nowadays heavily rely on complex neural models, especially large-scale pre-trained language models based on the transformer architecture (Vaswani et al., 2017), such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), and GPT (Radford et al., 2019). Despite being more accurate than previous models, transformer-based models are typically slow to execute, making it a non-trivial challenge to apply them in real-world applications. For example, it takes a BERT-base model about 200 ms per query to perform a simple sequence classification task on a commercial CPU, which can be too slow in many scenarios. Therefore, model efficiency has become an increasingly important research direction in the transformer era.
A wide variety of efficiency methods have been individually studied for transformers, like pruning (McCarley et al., 2019), distillation (Sanh et al.,
2019), dynamic inference (Xin et al., 2020; Kim and Cho, 2021), and quantization (Shen et al.,
2020), just to name a few. There has also been work on applying multiple efficiency methods together as a pipeline (Kim and Hassan, 2020; Lin et al.,
2021; Cui et al., 2021), but the construction of such pipelines has not been methodically studied. For a desired accuracy–efficiency tradeoff, it remains unclear how to choose components for the pipeline among numerous possibilities. Furthermore, even with a chosen set of efficiency methods, it is unclear whether we need to exhaustively examine all possible orders to find the best one.
In this paper, we study how to effectively construct a pipeline of efficiency methods, and we do this by exploring their properties. Conceptually, we consider each efficiency method as an *operator* applied on a model. We conduct experiments with the RoBERTa model (Liu et al., 2019) and the following components in our efficiency pipelines: distillation, structured pruning, quantization, early exiting, and dynamic length inference. We empirically study two important properties of efficiency operators: (1) Commutativity: does swapping the order of operators affect the final accuracy–efficiency tradeoff of the model? (2) Cumulativeness: how do the two core metrics of efficiency methods, time savings and accuracy drops, compound across multiple operators? Under the condition of our experiments, we show that, for commutativity, the difference between various orderings of the same set of components is usually small and negligible in practice. For cumulativeness, we show that time 2870
![1_image_0.png](1_image_0.png)
saving and accuracy drop are both cumulative to the extent that we can estimate the performance of a new pipeline by combining the results of individual components. The observation of these properties provides the foundation for us to build new pipelines and estimate their performance without having to carry out time-consuming experiments.
Our main contributions in this paper include: (1)
In Section 3, we propose a conceptual framework to treat efficiency methods as operators and efficiency optimization processes as pipelines; (2) In Sections 4 to 5, we demonstrate the properties of operators by experiments. Finally we will conclude the paper and discuss its limitations.
## 2 Related Work And Background
In this section, we first introduce related work, background, and modeling choices for individual efficiency methods chosen for our experiments. We then discuss related work for applying multiple efficiency methods.
Applying transformers for NLP tasks typically involves three stages: pre-training, fine-tuning, and inference (Radford et al., 2019; Devlin et al., 2019).
In this paper, we assume the availability of a pretrained RoBERTa model and study different ways of fine-tuning it to achieve better tradeoffs between inference accuracy and efficiency. *Training* henceforth refers to fine-tuning in this paper.
## 2.1 Knowledge Distillation
Knowledge distillation (Hinton et al., 2015) improves efficiency by distilling knowledge from a large and costly *teacher* model to a small and efficient *student* model. The teacher model's output is used as the supervision signal for the student model's training. In the case of transformers, there are two types of distillation, namely task-agnostic and task-specific, depending on whether the student model is trained for a specific task. These two types correspond to the pre-training stage and the fine-tuning stage.
Previously, Tang et al. (2019) perform taskspecific distillation from a fine-tuned BERT model into non-transformer architectures such as LSTMs, aligning predicted logits of the teacher and the student. Patient knowledge distillation (Sun et al.,
2019) performs task-specific distillation, where the students are transformer models with smaller depth and width; furthermore, they align not only predicted logits but also intermediate states of both models. DistilBERT (Sanh et al., 2019) and TinyBERT (Jiao et al., 2020) perform both task-agnostic and task-specific distillation: first the student model learns from a pre-trained teacher; then it can either be directly fine-tuned like a pre-trained model or learn from another fine-tuned teacher as a student.
In this paper, we focus on *task-specific distillation*, which corresponds to fine-tuning (Figure 1a). We initialize the student model with a DistilRoBERTaBASE (Sanh et al., 2019) backbone that comes from task-agnostic distillation. The student model has the same width as the teacher RoBERTa but only half the number of layers. In addition to the most common loss function (teacher supervising student), which is a soft cross-entropy between output logits of the teacher and the student, we introduce two other parts for the loss function: (1) mean squared error (MSE) between the teacher's and the student's embedding layers' outputs; (2) MSE between the teacher's and the student's final transformer layers' outputs. It has been shown in related work that adding objectives to align intermediate states of the teacher and the student helps with distillation (Sun et al., 2019; Sanh et al., 2019). We simply use a ratio of 1 : 1 : 1 for these three parts of the loss function.
## 2.2 Structured Pruning
Pruning removes unimportant parts of the model and increases the sparsity level of the model. A specific category of pruning, *structured pruning* (Han et al., 2015; Anwar et al., 2017; Gordon et al.,
2020), removes high-level units of the model, such as a layer, an attention head, or an entire row/column in the weight matrix of a feed-forward network (FFN). Model sparsity induced by structured pruning can directly translate to faster execution, and therefore we focus on structured pruning in the paper.
Previously, Michel et al. (2019) show that reducing attention heads *after* training/fine-tuning does not significantly degrade the model's effectiveness and argue that in a lot of cases, the number of attention heads can be reduced. MobileBERT (Sun et al., 2020) reduces the intermediate dimension of a transformer layer's FFN by using a funnel-like structure to first shrink the intermediate layer size and then recover it at the end of the layer. McCarley et al. (2019) improves BERT efficiency for question answering by reducing both attention heads and intermediate dimensions.
In this paper, we follow the work by McCarley et al. (2019); Kim and Hassan (2020) and choose two aspects of the model and prune them separately:
the number of attention heads and the intermediate dimension of the fully connected layer within a transformer layer (Figure 1b). We calculate the importance of attention heads and intermediate dimensions with a first-order method: run inference for the entire dev set and accumulate the first-order gradients for each attention head and intermediate dimension. We then remove the least important attention heads and intermediate dimensions, according to the desired sparsity level, and then rewire the model connections so it becomes a smaller but complete model. After pruning, we perform another round of knowledge distillation from the original model to the pruned model as described in the previous subsection, which further improves the pruned model's accuracy without sacrificing efficiency.
## 2.3 Dynamic Inference
Dynamic inference (Teerapittayanon et al., 2016; Graves, 2016; Dehghani et al., 2019) accelerates inference by reducing the amount of computation adaptively, depending on the nature of the input example. We discuss two types of dynamic inference in this section.
2.3.1 Dynamic Depth: Early Exiting For dynamic depth, early exiting (Xin et al.,
2020; Liu et al., 2020) converts the original finetuned model into a multi-output one, and dynamically chooses the number of layers used for the inference of each example, based on model confidence (Schwartz et al., 2020), model patience (Zhou et al., 2020), or the prediction of an external controlling module (Xin et al., 2021).
Early exiting training We first modify a finetuned model by adding extra classifiers to intermediate transformer layers (Figure 1c). In order to use these extra classifiers, we further train the model before inference. The additional training is done by minimizing the sum of loss functions of all classifiers, and the loss function has the same form for each classifier: the cross entropy between ground truth labels and the classifier's prediction logits. A
special case to notice here is that the training of distillation and pruning needs to be adjusted *after* adding early exiting.
- Distillation after early exiting. When we initialize the student model (e.g., from TinyBERT), we also add early exiting classifiers to it. For training, the i th layer of the student model uses the prediction from the 2i th layer of the teacher model as supervision.
- Pruning after early exiting. When we prune the transformer layers, we do not change the classifiers. For the additional round of distillation, each layer of the student model uses the prediction from its corresponding layer of the teacher model as supervision.
Early exiting inference The early exiting model produces an output probability distribution at each layer's classifier. If the confidence of a certain layer's output exceeds a preset *early exiting threshold*, the model immediately returns the current output; otherwise, inference continues at the next layer, and so forth until the final layer. In this way, when the model is confident enough at an early layer, we no longer need to execute the remaining layers, thereby saving inference computation.
## 2.3.2 Dynamic Sequence Length
Pre-trained language models come with a fixed input sequence length (e.g., 512 for RoBERTa)
that aligns with the design of positional embeddings (Devlin et al., 2019). Inputs longer than the fixed length are truncated and shorter inputs are padded with zero vectors. This fixed length, while being useful for tasks with long inputs, is often unnecessarily large for most downstream applications and leads to a waste of computation.
Previously, PoWER-BERT (Goyal et al., 2020)
shrinks the sequence length gradually as inference progresses into deep layers, eventually reducing the sequence length to 1 at the final layer for sequence-level prediction. Length-Adaptive Transformer (Kim and Cho, 2021) extends the idea to token-level prediction by first reducing the sequence length and then recovering missing tokens' outputs.
In this paper, we use a simple method for length reduction: for each batch, we dynamically set the input sequence length to the maximum length of inputs within the batch. This reduces the number of zero paddings in input sequences and reduces unnecessary computation. Different from previous methods, dynamic sequence length does not affect the model's accuracy.
## 2.4 Quantization
Quantization (Lin et al., 2016; Shen et al., 2020) improves model efficiency by using fewer bits to store and process data. The idea itself is straightforward, but implementation can be highly hardware dependent. Since we run inference on CPUs, we first export the trained model to ONNX1and then run it with 8-bit quantization, following Fastformers (Kim and Hassan, 2020).
## 2.5 Applying Multiple Efficiency Methods
With all the individual efficiency methods available, there has been work on applying multiple ones together. For example, Cui et al. (2021); Aghli and Ribeiro (2021); Park and No (2022) combine pruning and distillation for model compression and acceleration. Phuong and Lampert (2019) explore using distillation to improve the training of early exiting models. Lin et al. (2021) propose a bag of tricks to accelerate the inference stage of neural machine translation models. Fastformers (Kim and Hassan, 2020) propose a pipeline consisting of several components which together provide more than 100×
acceleration. Despite the success of combining efficiency methods, it remains underexplored how to build an efficiency pipeline in order to achieve the best accuracy–efficiency tradeoffs. We aim to tackle this problem in our paper.
## 3 Experimental Design
In this section, we introduce the detailed design and setups for our experiments. Since the experiments are exploratory rather than SOTA-chasing, we focus on providing a fair comparison.
## 3.1 Conceptual Framework
In our experiments, we work with *pipelines* consisting of multiple efficiency *operators* that are applied to the model sequentially. We represent a pipeline with a string of bold capital letters, where each letter represents an efficiency operator and the order of these letters represents their order.
The operators include: Distillation, Structured Pruning, Early Exiting, Dynamic Length, and Quantization. For example, the string "**DEPLQ**"
represents a pipeline of sequentially applying the following operators to a fine-tuned model: (1) distill it into a student model; (2) add early exiting classifiers to it and train; (3) apply structured pruning to make each layer "thinner" and distill from the unpruned model; and (4) use dynamic length and quantization for the final inference. Additionally, we use O to represent an "empty" pipeline, i.e.,
directly applying the Original fine-tuned model.
Not all combinations of operators constitute a meaningful pipeline. Among the operators discussed in this paper, D, P, and E require additional training steps, while Q and L are directly applicable right before inference. Therefore, D, P, and E (Group I) should always appear before Q and L (Group II) in the pipeline. Moreover, applying
![4_image_0.png](4_image_0.png)
D after P does not make sense, since D initializes a small student, and the efficiency brought by the pruning step cannot be passed over to the student.
With these constraints, the number of meaningful pipelines is significantly reduced.
## 3.2 Datasets And Implementation
We conduct experiments with the RoBERTa-base model (Liu et al., 2019) on four sequence classification tasks: MRPC (Dolan and Brockett, 2005),
SST-2 (Socher et al., 2013), QNLI (Rajpurkar et al., 2016; Wang et al., 2018), and QQP (Sharma et al.,
2019). Our implementation of efficiency methods are adopted from Transformers (Wolf et al., 2020),
Fastformers (Kim and Hassan, 2020), and DeeBERT (Xin et al., 2020). We train all the models with an NVIDIA Tesla T4 GPU. We evaluate them with an AMD Ryzen 5800X CPU, which provides more stable measurements for inference latency.
Wall-clock runtime is used as the efficiency metric.
## 3.3 Settings For Pipelines And Operators
Before experimenting with pipelines, we explore the optimal setting (e.g., learning rate, batch size)
for each individual operator and use the same setting in the pipelines. This is a realistic approach since it is impractical to search for the optimal setting for every component in every new pipeline. For training the RoBERTa model, including original fine-tuning, distillation, and training with early exiting, we use the same hyperparameters as in the Transformers library (Wolf et al., 2020): learning rate is set to 10−5; batch size is set to 8; all training procedures consist of 10 epochs with no early stopping. For pruning, we prune the number of attention heads from 12 to 8 and the intermediate dimension from 3072 to 1536. The reason for this is that in our preliminary experiments, the above
## 4 Operator Commutativity And Order
Given a set of operators, we naturally wonder about the best order to apply them. Although this question seems formidable due to the exponentially large number of possible orderings, we show that the question is actually simpler than expected: on the one hand, we have eliminated a number of invalid orderings as described in Section 3; on the other, we show that operators are commutative in the remaining ordering candidates.
## 4.1 Commutative Properties Of Operators
In this subsection, we discuss operator commutativity separately for the two groups.
Group I We show the results of swapping the order of operators from Group I in Figure 2. Since early exiting is involved, which means the model can achieve different tradeoffs between accuracy and inference time, we present each ordering as a *tradeoff curve*, where points are drawn by varying the early exiting threshold of confidence. We can see that when we use the same set of operators (same color), different orderings have similar tradeoff curves, in most cases.
Exceptions exist, however, in the E+P combination on the MRPC dataset. We hypothesize that this is due to training randomness, since MRPC
has smallest size of all. In order to study randomness, we repeat the experiment with additional random seeds and show in Figure 3 the results on MRPC. We can see that (1) the gap between the mean curves is smaller than the gap between curves corresponding to using a single seed; (2) the mean curve of each ordering lies within the 95% confidence interval (95% CI) of other orderings. This
![5_image_0.png](5_image_0.png)
![5_image_1.png](5_image_1.png)
shows that the differences between tradeoff curves of different orderings can, at least partly, be attributed to training randomness.
To further quantify the degree of dissimilarity between different orderings, we define and calculate the *distance* between tradeoff curves. The distance between two tradeoff curves is defined as the maximum accuracy (y-axis) difference at the same inference time (x-axis) point. We compare distances between tradeoff curves (1) generated by the same operator order but with different random seeds; and (2) generated by different operator orders. We show the results in Table 1. We can see that while tradeoff curves generated by the same operator order tend to have a smaller average distance, the difference between same/different orders is typically small and the one-standard-deviation (1-SD)
intervals of both sides always overlap. Although we are unable to find a suitable significance test since the distances are not independent, the above analysis shows that the difference of distances between curves from same/different orders is likely not significant. More importantly, as shown in Figure 3, the gap between mean curves of different orderings is smaller than the deviation caused by different random seeds. Therefore, in practice, we can regard the operators as commutative.
Group II The two operators, Q and L, are independent of each other, and therefore their order can be arbitrarily swapped (i.e., they are strictly commutative by definition). We show the results of applying Q and/or L at the end of different pipelines in Table 2. We do not report the accuracy of +L
since using dynamic length does not change the model's accuracy. Based on the above discussion, when we have a set of components to apply, it suffices to simply pick a
| Dataset | Order | D+E | P+E | D+P+E |
|-----------|-------------|-------------|-------------|-------------|
| MRPC | Same | 1.57 ± 0.69 | 2.31 ± 0.85 | 1.53 ± 0.62 |
| Diff. | 1.74 ± 0.40 | 4.12 ± 1.10 | 2.59 ± 0.97 | |
| SST-2 | Same | 1.30 ± 0.39 | 1.64 ± 0.46 | 1.49 ± 0.53 |
| Diff. | 1.48 ± 0.46 | 1.84 ± 0.62 | 1.98 ± 0.82 | |
| QNLI | Same | 2.24 ± 1.20 | 4.40 ± 2.49 | 3.41 ± 2.51 |
| Diff. | 3.93 ± 0.82 | 4.58 ± 2.39 | 4.91 ± 2.44 | |
| QQP | Same | 2.38 ± 1.36 | 2.11 ± 0.86 | 2.30 ± 1.05 |
| Diff. | 3.64 ± 1.14 | 3.30 ± 1.27 | 4.56 ± 1.71 | |
reasonable order from the candidate space, rather than extensively searching for the optimal setting.
## 5 Operator Cumulativeness And Predictability Of Pipelines
In order to choose components for an efficiency pipeline, an important question is whether time savings and accuracy drops of individual operators are cumulative. In this subsection, we show that they are indeed cumulative to the degree that accuracy–efficiency tradeoffs of a new pipeline can be estimated, simply by combining the results of individual operators.
We first discuss operators from Group I. In Figure 4, we show how we can estimate the tradeoff curve of a new pipeline based on the results of its constituents, using the two larger and more stable datasets, QQP and QNLI. For example, in the top-right subfigure, we show the estimation for the tradeoff curves of pipelines comprising E, D, and P, based on the results of individually applying each of these operators.
| Dataset | Pipeline | Accuracy (%) | Time (ms per example) | | | | | |
|-----------|---------------------|----------------|-------------------------|-------|------|------------|------|------|
| Raw | +Q (relative diff.) | Raw | +Q | +L | +QL | +QL (est.) | | |
| O | 92.7 | 92.5 (−0.2%) | 170.7 | −50% | −83% | −94% | −92% | |
| D | 89.2 | 88.8 (−0.4%) | 85.5 | −49% | −82% | −94% | −91% | |
| P | 91.0 | 89.0 (−2.2%) | 122.4 | −64% | −86% | −94% | −95% | |
| DP | 88.9 | 87.9 (−1.1%) | 59.3 | −62% | −84% | −94% | −94% | |
| MRPC | O | 93.7 | 93.5 (−0.2%) | 170.8 | −50% | −86% | −97% | −93% |
| D | 92.3 | 92.3 (−0.0%) | 85.5 | −49% | −86% | −97% | −93% | |
| P | 92.4 | 91.7 (−0.8%) | 126.7 | −66% | −89% | −97% | −96% | |
| DP | 92.0 | 90.9 (−1.2%) | 62.9 | −65% | −88% | −97% | −96% | |
| SST-2 | O | 92.3 | 92.1 (−0.2%) | 174.2 | −51% | −83% | −95% | −92% |
| D | 91.3 | 90.7 (−0.7%) | 86.9 | −50% | −82% | −95% | −91% | |
| P | 91.5 | 91.4 (−0.1%) | 121.5 | −64% | −86% | −95% | −95% | |
| DP | 89.8 | 89.6 (−0.2%) | 62.6 | −65% | −85% | −95% | −95% | |
| QNLI | O | 88.6 | 88.3 (−0.3%) | 172.3 | −51% | −86% | −96% | −93% |
| D | 87.9 | 87.7 (−0.2%) | 88.2 | −51% | −85% | −97% | −93% | |
| P | 88.5 | 88.5 (−0.0%) | 118.3 | −63% | −87% | −97% | −95% | |
| DP | 87.6 | 87.6 (−0.0%) | 58.8 | −62% | −86% | −97% | −95% | |
| QQP | | | | | | | | |
The idea for estimating accuracy drops is based on the following *cumulativeness assumption*. Suppose R is a pipeline and A* is the accuracy for a pipeline *, the assumption is:
$$\begin{array}{r c l}{{A_{\mathbf{R+D}}}}&{{=}}&{{\frac{A_{\mathbf{D}}}{A_{\mathbf{0}}}\times A_{\mathbf{R}},}}\\ {{A_{\mathbf{R+P}}}}&{{=}}&{{\frac{A_{\mathbf{P}}}{A_{\mathbf{0}}}\times A_{\mathbf{R}}.}}\end{array}$$
In other words, our assumption is that adding D or P to any pipeline should result in similar relative accuracy drops. We can therefore estimate the accuracy of ED, EP, and EDP (and other orders of the same set of operators) as follows: (1) calculate accuracy drops of D and P relative to O; (2) multiply the relative accuracy drops to points on E's tradeoff curve.
The idea for estimating time savings is also similar, but additional modifications are necessary:
- When we add P to E, since they work on reducing different dimensions of the model
(width and depth), the time savings are independent and directly cumulative:
$$T_{\mathbf{E+P}}={\frac{T_{\mathbf{P}}}{T_{\mathbf{0}}}}\times T_{\mathbf{E}},$$
- The other is $\sigma_{\rm max}=\sigma_{\rm max}$.
where similarly, T* is the inference time for a pipeline *.
(1) $\binom{2}{2}$ .
- When we add D to E, we need to consider the fact that both D and E reduce the number of layers. Therefore, our estimation is based on interpolating the following two extreme cases.
When the early exiting threshold is extremely large and the model uses all layers for inference, the relative time saving will be close to TD/TO; when the early exiting threshold is extremely small and the model exits after the first layer, adding D provides no extra time saving. The final time saving estimation for E+D is therefore the following interpolation:
$$T_{\mathbf{E+D}}=t_{\mathbf{E}}+(T_{\mathbf{E}}-t_{\mathbf{E}})\times{\frac{T_{\mathbf{D}}}{T_{\mathbf{O}}}},\quad\quad(4)$$
where tE is the minimum value of time in the tradeoff curve of E (i.e., the point where we early exit after only one layer).
- When we add both P and D to E, we combine the above two estimations:
$$T_{\mathbf{E+Dp}}=\left(t_{\mathbf{E}}+(T_{\mathbf{E}}-t_{\mathbf{E}})\times{\frac{T_{\mathbf{D}}}{T_{\mathbf{O}}}}\right)\times{\frac{T_{\mathbf{P}}}{T_{\mathbf{O}}}}.\,\,\,(5)$$
We use the above ideas to estimate tradeoff curves of new pipelines and show the results in Figure 4. From the figure, we can see that the estimation curves (orange) align well with the measured
![7_image_0.png](7_image_0.png)
curves (green), across different datasets and operator sets. This shows that individual components from Group I are cumulative with each other under these settings.
For operators from Group II, we refer to Table 2.
We see that on the same dataset, Q leads to similar accuracy drops when added to any pipeline, especially on the larger and more stable datasets, QNLI and QQP. Time savings, on the other hand, are trickier:
- L provides consistent time savings for all pipelines, showing that it is cumulative with any operator from Group I.
- L and Q are also cumulative with each other, as evidenced by the fact that the measured time savings of +QL align well with the estimation of +QL, which is simply multiplying the respective savings of Q and L.
- Q, however, is cumulative only with D and E,
but not P—it saves more time for pipelines with P. This is because quantization's acceleration is different for different types of operations, and pruning changes the proportion of each type of operations within a transformer layer, while distillation or early exiting does not. When we estimate the tradeoff of a pipeline containing both Q and P, PQ needs
to be treated as a compound operator, and it is cumulative with others. This also applies to other operators that change the connection within a transformer layer.
Empirically, the observation that operators are cumulative facilitates future experiments on efficiency pipelines: for pipelines that are computationally expensive to train and evaluate, simply measuring the performance of their components can provide us with a reliable estimation of the pipeline's behavior. Therefore, choosing efficiency methods for a pipeline according to desired accuracy–
efficiency tradeoffs becomes easy calculation once the measurement of individual operators is finished.
On the theoretical side, the cumulativeness observation also makes it easier to analyze the contribution of each component, i.e., how much time each operator saves and how much accuracy each sacrifices. The Shapley value (Shapley, 1997) of each component, for instance, can be approximated by simply using the standalone estimation (Fréchette et al., 2016).
## 6 Conclusion
In this paper, we propose a conceptual framework to consider efficiency methods as operators applied on transformer models and study the properties of these operators when used as pipelines. We observe that, under the condition of our experiments,
(1) efficiency operators are commutative: changing their order has little practical impact on the final efficiency–accuracy tradeoff; (2) efficiency operators are cumulative: a new pipeline's performance can be estimated by aggregating time savings and accuracy drops of each component. These observations facilitate future construction, evaluation, and application of efficiency pipelines, and also provide an interesting direction to better understand efficiency pipelines.
## Limitations
There exist so many different transformer models and efficiency methods that it is extremely difficult to conduct exhaustive experiments for all of them.
Although our experiments demonstrate nice properties for efficiency operators, the observations are restricted to our experimental setup. Considering the huge space of all combinations of transformer models, efficiency methods, and datasets, our experiments provide understanding for an important but small subspace, and it is possible that the conclusions no longer hold when we explore further.
We hope that our discoveries can inspire more future research, both empirical and theoretical, to push further the frontier of our understanding of the space.
## Acknowledgements
We thank anonymous reviewers for their constructive suggestions. This research is supported in part by the Canada First Research Excellence Fund and the Natural Sciences and Engineering Research Council (NSERC) of Canada.
## References
Nima Aghli and Eraldo Ribeiro. 2021. Combining weight pruning and knowledge distillation for cnn compression. In *2021 IEEE/CVF Conference on* Computer Vision and Pattern Recognition Workshops (CVPRW), pages 3185–3192.
Sajid Anwar, Kyuyeon Hwang, and Wonyong Sung.
2017. Structured pruning of deep convolutional neural networks. *ACM Journal on Emerging Technologies in Computing Systems (JETC)*, 13(3):1–18.
Baiyun Cui, Yingming Li, and Zhongfei Zhang. 2021.
Joint structured pruning and dense knowledge distillation for efficient transformer model compression.
Neurocomputing, 458:56–69.
Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Lukasz Kaiser. 2019. Universal transformers. In *International Conference on* Learning Representations.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers),
pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases.
In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).
Alexandre Fréchette, Lars Kotthoff, Tomasz Michalak, Talal Rahwan, Holger Hoos, and Kevin LeytonBrown. 2016. Using the Shapley value to analyze algorithm portfolios. *Proceedings of the AAAI Conference on Artificial Intelligence*, 30(1).
Mitchell Gordon, Kevin Duh, and Nicholas Andrews.
2020. Compressing BERT: Studying the effects of weight pruning on transfer learning. In *Proceedings* of the 5th Workshop on Representation Learning for NLP, pages 143–155, Online. Association for Computational Linguistics.
Saurabh Goyal, Anamitra Roy Choudhury, Saurabh Raje, Venkatesan Chakaravarthy, Yogish Sabharwal, and Ashish Verma. 2020. PoWER-BERT: Accelerating BERT inference via progressive word-vector elimination. In *Proceedings of the 37th International Conference on Machine Learning*, volume 119 of *Proceedings of Machine Learning Research*,
pages 3690–3699. PMLR.
Alex Graves. 2016. Adaptive computation time for recurrent neural networks. *arXiv preprint* arXiv:1603.08983.
Song Han, Jeff Pool, John Tran, and William Dally.
2015. Learning both weights and connections for efficient neural network. In *Advances in Neural Information Processing Systems*, volume 28. Curran Associates, Inc.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015.
Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu.
2020. TinyBERT: Distilling BERT for natural language understanding. In *Findings of the Association* for Computational Linguistics: EMNLP 2020, pages 4163–4174, Online. Association for Computational Linguistics.
Gyuwan Kim and Kyunghyun Cho. 2021. Lengthadaptive transformer: Train once with length drop, use anytime with search. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6501–6511, Online. Association for Computational Linguistics.
Young Jin Kim and Hany Hassan. 2020. FastFormers:
Highly efficient transformer models for natural language understanding. In Proceedings of SustaiNLP:
Workshop on Simple and Efficient Natural Language Processing, pages 149–158, Online. Association for Computational Linguistics.
Darryl Lin, Sachin Talathi, and Sreekanth Annapureddy. 2016. Fixed point quantization of deep convolutional networks. In *Proceedings of The 33rd* International Conference on Machine Learning, volume 48 of *Proceedings of Machine Learning Research*, pages 2849–2858, New York, New York, USA. PMLR.
Ye Lin, Yanyang Li, Tong Xiao, and Jingbo Zhu. 2021.
Bag of tricks for optimizing transformer efficiency.
In *Findings of the Association for Computational* Linguistics: EMNLP 2021, pages 4227–4233, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Weijie Liu, Peng Zhou, Zhiruo Wang, Zhe Zhao, Haotang Deng, and Qi Ju. 2020. FastBERT: a selfdistilling BERT with adaptive inference time. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6035–
6044, Online. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
RoBERTa: A robustly optimized BERT pretraining approach. *arXiv preprint arXiv:1907.11692*.
JS McCarley, Rishav Chakravarti, and Avirup Sil. 2019.
Structured pruning of a BERT-based question answering model. *arXiv preprint arXiv:1910.06360*.
Paul Michel, Omer Levy, and Graham Neubig. 2019.
Are sixteen heads really better than one? In *Advances in Neural Information Processing Systems*,
volume 32. Curran Associates, Inc.
Jinhyuk Park and Albert No. 2022. Prune your model before distill it. In *European Conference on Computer Vision*, pages 120–136. Springer.
Mary Phuong and Christoph Lampert. 2019.
Distillation-based training for multi-exit architectures. In *2019 IEEE/CVF International Conference* on Computer Vision (ICCV), pages 1355–1364.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. *OpenAI*
Blog.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
Roy Schwartz, Gabriel Stanovsky, Swabha Swayamdipta, Jesse Dodge, and Noah A. Smith. 2020. The right tool for the job: Matching model and instance complexities. In *Proceedings of the* 58th Annual Meeting of the Association for Computational Linguistics, pages 6640–6651, Online.
Association for Computational Linguistics.
Lloyd S. Shapley. 1997. A value for n-person games.
Classics in game theory, 69.
Lakshay Sharma, Laura Graesser, Nikita Nangia, and Utku Evci. 2019. Natural language understanding with the Quora Question Pairs dataset. *arXiv* preprint arXiv:1907.01041.
Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W Mahoney, and Kurt Keutzer. 2020. Q-BERT: Hessian based ultra low precision quantization of bert. In *Proceedings of* the AAAI Conference on Artificial Intelligence, volume 34, pages 8815–8821.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics.
Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019.
Patient knowledge distillation for BERT model compression. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4323–4332, Hong Kong, China. Association for Computational Linguistics.
Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 2020. MobileBERT:
a compact task-agnostic BERT for resource-limited devices. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 2158–2170, Online. Association for Computational Linguistics.
Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, and Jimmy Lin. 2019. Distilling taskspecific knowledge from bert into simple neural networks. *arXiv preprint arXiv:1903.12136*.
Surat Teerapittayanon, Bradley McDanel, and HsiangTsung Kung. 2016. BranchyNet: Fast inference via early exiting from deep neural networks. In 2016 23rd International Conference on Pattern Recognition (ICPR), pages 2464–2469. IEEE.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018.
GLUE: A multi-task benchmark and analysis platform for natural language understanding. In *Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP*, pages 353–355, Brussels, Belgium.
Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing:
System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Ji Xin, Raphael Tang, Jaejun Lee, Yaoliang Yu, and Jimmy Lin. 2020. DeeBERT: Dynamic early exiting for accelerating BERT inference. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2246–2251, Online. Association for Computational Linguistics.
Ji Xin, Raphael Tang, Yaoliang Yu, and Jimmy Lin.
2021. BERxiT: Early exiting for BERT with better fine-tuning and extension to regression. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics:
Main Volume, pages 91–104, Online. Association for Computational Linguistics.
Wangchunshu Zhou, Canwen Xu, Tao Ge, Julian McAuley, Ke Xu, and Furu Wei. 2020. BERT loses patience: Fast and robust inference with early exit.
In *Advances in Neural Information Processing Systems*, volume 33, pages 18330–18341. Curran Associates, Inc.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
The Limitations section on page 9.
✓ A2. Did you discuss any potential risks of your work?
The Limitations section on page 9.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Introduction (Section 1)
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3.2
✓ B1. Did you cite the creators of artifacts you used?
Section 3.2
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
I've never seen such things in previous papers.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Not applicable. Left blank.
## C ✓ **Did You Run Computational Experiments?** Section 3.2
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
number of parameters: everyone knows the total computational budget (e.g., GPU hours), and computing infrastructure used: nobody cares The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3.2
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 3.2 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
alghamdi-etal-2023-aramus | {A}ra{MUS}: Pushing the Limits of Data and Model Scale for {A}rabic Natural Language Processing | https://aclanthology.org/2023.findings-acl.181 | Developing monolingual large Pre-trained Language Models (PLMs) is shown to be very successful in handling different tasks in Natural Language Processing (NLP). In this work, we present AraMUS, the largest Arabic PLM with 11B parameters trained on 529GB of high-quality Arabic textual data. AraMUS achieves state-of-the-art performances on a diverse set of Arabic classification and generative tasks. Moreover, AraMUS shows impressive few-shot learning abilities compared with the best existing Arabic PLMs. | # Aramus: Pushing The Limits Of Data And Model Scale For Arabic Natural Language Processing
Asaad Alghamdi1,∗ Xinyu Duan2,∗ Wei Jiang2 Zhenhai Wang2 **Yimeng Wu**3 Qingrong Xia2 Zhefeng Wang2 Yi Zheng2 Mehdi Rezagholizadeh3 **Baoxing Huai**2 Peilun Cheng1 **Abbas Ghaddar**3 1 AI Cognitive Team, Tonomus 2 Huawei Cloud Computing Technologies Co., Ltd.
3 Huawei Technologies Co., Ltd.
{asaad.alghamdi,eddie.chengpeilun}@neom.com
{duanxinyu,jiangwei160,wangzhenhai1,yimeng.wu,xiaqingrong,wangzhefeng, zhengyi29,mehdi.rezagholizadeh,huaibaoxing,abbas.ghaddar}@huawei.com
## Abstract
Developing monolingual large Pre-trained Language Models (PLMs) is shown to be very successful in handling different tasks in Natural Language Processing (NLP). In this work, we present AraMUS, the largest Arabic PLM
with 11B parameters trained on 529GB of highquality Arabic textual data. AraMUS achieves state-of-the-art performances on a diverse set of Arabic classification and generative tasks.
Moreover, AraMUS shows impressive few-shot learning abilities compared with the best existing Arabic PLMs.
## 1 Introduction
Scaling-up Pre-trained Language Models (PLMs) has led to astonishing performance gains on a vast variety of Natural Language Processing (NLP)
tasks (Du et al., 2021; Zoph et al., 2022; Smith et al., 2022). It has also opened new perspectives for studying the opportunities and limitations of large PLMs (Raffel et al., 2019; Dale, 2021; Bommasani et al., 2021), as well as their social and ethical impacts (Bender et al., 2021; Weidinger et al., 2021; Tamkin et al., 2021; Rae et al., 2021a; Susnjak, 2022).
Although for some languages such as English and Chinese, several PLMs with even more than hundred billions of parameters have been developed (Rae et al., 2021b; Chowdhery et al., 2022; Zeng et al., 2021; Sun et al., 2021), little or no progress has been made on this direction for many other languages including Arabic.1 While there have recently been few attempts to develop multibillion parameters Arabic PLMs (Nagoudi et al.,
2022a; Antoun et al., 2021b; Lakim et al., 2022),
∗ Equal contribution 1Arabic is among top 10 most popular languages in the world with 420M native speakers, and more than 25 popular dialects (Guellil et al., 2021).
still, their performances and abilities have not been well investigated. The largest well-studied Arabic PLM has no more than 370M parameters (Nagoudi et al., 2022b; Ghaddar et al., 2022).
In this work, we introduce AraMUS, an 11B parameter encoder-decoder T5 (Raffel et al., 2019)
model, which is pre-trained on 529GB of highquality Arabic text (filtered out of 8.8TB). To the best of our knowledge, AraMUS is the largest Arabic PLM in terms of pre-training data and model size. Furthermore, it is the first time a multi-billion parameter Arabic PLM is *systematically* evaluated, against the existing state-of-the-art models, on a diversified set of discriminative and generative task models. More precisely, AraMUS achieves new state-of-the-art performances of 79.8% on the ALUE (Seelawi et al., 2021) benchmark, which is a collection of 8 discriminative tasks. In addition, it significantly outperforms the best available encoder-decoder models on multiple generative tasks. Finally, AraMUS shows remarkable abilities to maintain its performance under few-shot settings.
## 2 Related Work
Recently, there has been a growing body of the literature on very large-scale English PLMs by thoroughly studying different aspects of their scaling. These efforts can be summarized into scaling their pre-training data (Hoffmann et al., 2022)
and model size (Dale, 2021; Rae et al., 2021b; Smith et al., 2022), designing efficient architectures (Zoph et al., 2022; Chowdhery et al., 2022)
and pre-training objectives (Bajaj et al., 2022; Tay et al., 2022), democratizing their access (Zhang et al., 2022), and making them useful in real-world applications (Ouyang et al., 2022; Qu et al., 2023).
Besides English, there have been multiple attempts to develop multilingual (Scao et al., 2022), as well as non-Anglocentric (Zeng et al., 2021; Sun et al.,
2021; Shin et al., 2022) multi-billion PLMs.
Unfortunately, the development of Arabic PLMs does not follow the same pace as that of English.
The earliest released Arabic PLMs (Antoun et al.,
2020; Safaya et al., 2020) were based on the BERT*base* (as well as *-large*) architecture (Devlin et al.,
2018) and pre-trained on less than 100GB of unfiltered data. Successive works tried to improve Arabic BERT-base models performance by scaling up the pre-training data up to 197GB and 167GB of unfiltered Arabic text for MARBERT (Abdul-Mageed et al., 2021) and CAMeLBERT (Inoue et al., 2021)
respectively. In addition, other works focused on developing Arabic PLMs to support other architectures like AraElectra (Antoun et al., 2021a),
AraGPT (Antoun et al., 2021b), AraT5 (Nagoudi et al., 2022b), and AraBART (Eddine et al., 2022)
which are equivalent to English ELECTRA (Clark et al., 2020), GPT (Radford et al., 2018), T5 (Raffel et al., 2019), and BART (Lewis et al., 2019)
respectively.
Recently, Ghaddar et al. (2022) developed stateof-the-art Arabic BERT (JABER and SABER) and T5 models (AT5S and AT5B) by improving the pre-training data quantitatively and qualitatively.
More precisely, they pre-trained Arabic BERTbase/large and T5-small/base models on 115GB
of high-quality Arabic text data (filtered out of 514GB). AraGPT-Mega (Antoun et al., 2021b), Jasmine (Nagoudi et al., 2022a), NOOR (Lakim et al.,
2022) are the only existing multi-billion Arabic PLMs. These are decoder-only GPT models with 1.5B, 6.7B, and 10B parameters respectively. However, these aforementioned works suffer from the absent (e.g. in AraGPT, NOOR) or limited (e.g.
Jasmine) comprehensive evaluation on NLP endtasks. Moreover, some of these models (such as NOOR and Jasmine) are not publicly available for custom evaluations.2 Evaluation is a key factor for understanding the strengths and limitations of these models, without which the progress of the Arabic NLP field is hindered.
## 3 Aramus 3.1 Pre-Training Data
We mainly leverage all (up to July 2022) of the 90 Common Crawl 3 monthly web scrapes in order to collect massive amount of Arabic textual data. This 2We refer the reader to Appendix B.2 for detailed positioning of AraMUS against each of these three models.
3https://commoncrawl.org is significantly larger compared to JABER (Ghaddar et al., 2022), NOOR (Lakim et al., 2022), and Jasmine (Nagoudi et al., 2022a), which use 10, 21, and 71 monthly CC shards, respectively. Then, we apply aggressive noise filtering and deduplication, which give rise to 529GB of high-quality Arabic text data. Nagoudi et al. (2022a) introduced the closest comparable pre-training corpus size to us with 413GB (22% smaller than ours) of Arabic text data. Our data mainly differs in using 2.5 times more CC data, while they used 3.8 times more dialect data than ours. We refer the reader to Appendix A.1 for technical details regarding the pre-training data collection.
## 3.2 Model And Implementation
AraMUS follows the same encoder-decoder architecture and configuration as T5-xxl (Raffel et al.,
2019) model with 64k vocabulary size. We choose encoder-decoder T5 architecture because it was found to deliver a good balance between the performance of the discriminative and generative tasks (Raffel et al., 2019; Tay et al., 2022), compared to encoder-only BERT (discriminative tasks focused) and decoder-only GPT (Radford et al.,
2019) (generative tasks focused). AraMUS has 11B parameters in total, which makes it the largest existing Arabic T5 model. It was pre-trained using 128 NVIDIA A100 GPUs for 2 months. Technical details regarding implementation and hyperparameters used for pre-training are listed in Appendix A.2.
## 3.3 Evaluation Protocol
We assess AraMUS by performing extensive finetuning experiments on a diverse set of NLP tasks.
On one side, we experiment on 8 tasks from the well-established ALUE benchmark (Seelawi et al.,
2021), which includes one regression (SVREG),
one multi-label classification (SEC), 4 singlesentence (MDD, FID, OOLD, and OHSD) and 2 sentence-pair (MQ2Q and XNLI) classification tasks. On the generative tasks side, we evaluate on Question Answering (QA), Question Generation
(QG), and Text Summarization (TS).
We compare AraMUS with state-of-the-art Arabic PLMs in the literature, including ARBERT,
MARBERT, JABER (BERT-base), SABER, ALM1.0 (BERT-large), AT5B and AraT5-base (T5-base).
The experimental protocol is designed to ensure the diversity of the tasks, and the public availability of models. Most importantly, we make sure that
| Model | #Params | MQ2Q | MDD | SVREG | SEC | FID | OOLD | XNLI | OHSD | Avg. |
|-------------|-----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| BERT-models | | | | | | | | | | |
| ARBERT | 163M | 74.7±0.1 | 62.5±0.2 | 83.5±0.6 | 43.9±0.6 | 85.3±0.3 | 90.5±0.5 | 70.8±0.5 | 81.9±2.0 | 74.1±0.6 |
| MARBERT | 163M | 69.1±0.9 | 63.2±0.3 | 88.0±0.4 | 47.6±0.9 | 84.7±0.4 | 91.8±0.3 | 63.3±0.7 | 83.8±1.4 | 73.9±0.7 |
| JABER | 135M | 75.1±0.3 | 65.7±0.3 | 87.4±0.7 | 46.8±0.8 | 84.8±0.3 | 92.2±0.5 | 72.4±0.7 | 85.0±1.6 | 76.2±0.7 |
| SABER | 369M | 77.7±0.4 | 67.4±0.2 | 89.3±0.3 | 49.0±0.5 | 86.1±0.3 | 93.4±0.4 | 75.9±0.3 | 88.9±0.3 | 78.5±0.3 |
| T5-models | | | | | | | | | | |
| AT5B | 296M | 73.7±0.1 | 64.7±0.2 | 78.1±2.4 | 43.8±0.7 | 83.1±0.5 | 90.0±0.4 | 72.2±0.4 | 81.2±2.1 | 73.3±0.9 |
| AraT5-base | 289M | 70.5±2.1 | 63.6±0.2 | 80.8±1.3 | 44.0±0.6 | 82.3±0.4 | 90.5±0.4 | 72.5±1.5 | 78.3±1.4 | 73.0±1.0 |
| AraMUS | 11B | 80.7±0.1 | 68.0±0.2 | 89.8±0.3 | 49.6±0.7 | 86.6±0.4 | 93.8±0.4 | 82.9±0.2 | 88.2±1.0 | 79.9±0.2 |
Table 1: DEV set performances and standard deviations over 5 runs on the ALUE benchmark.
Model #Params MQ2Q MDD SVREG SEC FID OOLD XNLI OHSD Avg. **DIAG**
JABER 135M 93.1 64.1 70.9 31.7 85.3 91.4 73.4 79.6 73.7 24.4
ALM-1.0 350M 94.5 65.1 70.1 35.3 86.0 91.7 77.7 85.7 75.8 30.2 SABER 369M 93.3 66.5 79.2 38.8 86.5 93.4 76.3 84.1 77.3 26.2 AraT5-base 282M 91.3 63.8 65.9 30.5 82.3 88.8 68.2 77.9 71.1 15.4
AraMUS 11B 95.2 67.5 80.4 41.6 87.2 95.5 83.2 87.4 79.8 **42.0**
datasets are of high quality, open-sourced, and supported by a well-established evaluation protocol. Our goal is to have a fair comparison between models, as well as the credibility and reproducibility of the results. A detailed description of fine-tuning datasets, evaluation metrics, baselines, and implementation details are available in Appendix B.
## 3.4 Results
Table 1 shows the dev set results of the eight ALUE
tasks with their average scores and standard deviations of 5 runs. The baseline results are directly brought from (Ghaddar et al., 2022) and they are directly comparable with AraMUS since we follow the same evaluation protocol. Table 2 shows the test set performances of the state-of-the-art models on the ALUE leaderboard.
As we expect, AraMUS outperforms all other baseline models on both dev and test sets and achieves a new state-of-the-art performances on ALUE. While our average ALUE result is 1.4%
better than the best baseline, SABER, the latter outperforms AraMUS on the OHSD dataset. On the other hand, AraMUS significantly outperforms SABER by 2.5% on average and 3.3% on OHSD
when comparing results on the leaderboard test. Interestingly, this is roughly a similar performance gap (2.1%) on the English GLUE (Wang et al., 2018) between the English T5-xxl (Raffel et al.,
2019) (11B parameters) and the well-trained English Roberta-large (Liu et al., 2019) model.
Moreover, we observe a huge gap of 13.8% between AraMUS and SABER on the ALUE diagnostic set. DIAG was specifically designed to evaluate models' abilities to capture complex linguistic phenomena in Arabic (Seelawi et al., 2021).
These observations clearly indicate that scaling the model with more data and parameters greatly improves the robustness and generalization abilities of Arabic PLMs. It is worth mentioning that our results are in contrast with previous observations reported in (Nagoudi et al., 2022b; Ghaddar et al.,
2022) that encoder-decoder T5 architecture Arabic models (e.g. AraT5-base and AT5B) significantly underperform BERT models on discriminative tasks. Our results suggest that, for Arabic, encoder-decoder models require more data and parameters to catch up with encoder-only models on discriminative tasks.
Table 3: F1-score and Exact Match (EM) scores of T5style models on the Question Answering (QA) task.
We further validate the performance of AraMUS
by conducting an extensive set of experiments on the ALUE benchmark under few-shot setting. Figure 1 shows AraMUS and the best publicly avail-
| Dev | Test | | | |
|------------|----------|----------|------|------|
| Model | EM | F1 | EM | F1 |
| AraT5-base | 40.2±0.4 | 61.4±0.8 | 31.2 | 65.7 |
| AT5B | 40.8±0.7 | 61.6±1.1 | 31.6 | 67.2 |
| AraMUS | 49.8±1.1 | 69.1±0.9 | 35.3 | 72.3 |
![3_image_0.png](3_image_0.png)
![3_image_1.png](3_image_1.png)
| Rouge1 | Rouge2 | RougeL | |
|-----------------|----------|----------|----------|
| WikiLingua Dev | | | |
| AraT5-base | 25.0±0.2 | 10.0±0.0 | 22.4±0.2 |
| AT5B | 26.1±2.8 | 10.5±1.6 | 23.2±2.5 |
| AraMUS | 30.5±0.1 | 13.2±0.1 | 26.9±0.1 |
| WikiLingua Test | | | |
| AraT5-base | 25.1 | 10.2 | 22.5 |
| AT5B | 27.8 | 11.5 | 24.8 |
| AraMUS | 30.9 | 13.5 | 27.1 |
| EASC Test | | | |
| AraT5-base | 10.7 | 2.7 | 9.3 |
| AT5B | 12.6 | 3.5 | 11.3 |
| AraMUS | 16.1 | 6.7 | 13.3 |
able Arabic PLMs (JABER and SABER) performances on 3 representative ALUE tasks (see the full results in Table 7 of Appendix C) and the average ALUE score. The 3 selected tasks are: SEC
because it shows specific results; OHSD since with FID and OOLD they show similar result patterns, and MDD as a representative of trends observed for tasks MQ2Q, SVREG, and XNLI.
Table 4: T5-style models' performances on the Text Summarization task.
First, we notice that exceptionally on SEC, AraMUS performs on par with JABER and underperforms SABER on many data points. We think that this is because the text-to-text approach is not effective for multi-label classification tasks under a few-shot setting. Second, we observe that AraMUS
has a marginal gain compared to the best baseline
(SABER) on some tasks like OHSD, e.g. 0.2%,
1.0% and 6.0% on 8, 128, and 256 examples respectively. As for the remaining 4 tasks (represented by MDD), we observe that AraMUS significantly outperforms both baselines by a large margin. Overall, Table 5: Question Generation dev and test sets BLEU
score of T5-style models.
Finally, we assess the text generation abilities of AraMUS by experimenting on 3 generative tasks in Table 3, 4 and 5. Overall, the observations are consistent with the results obtained on ALUE, AraMUS reports the highest scores on all tasks and across all metrics. More precisely, AraMUS significantly outperforms AT5B, the state-of-the-art Arabic T5-base model, by 7.5% and 5.1% on QA
F1 score dev and test sets respectively. Similarly, AraMUS has a gain of 4.4%, 4.1%, and 3.5% on TS dev, test, and EASC test rouge1 score respectively. However, gains are not always significant on generative tasks, as we observe a smaller margin of improvement of 0.5% and 0.4% and against the best baseline on QG dev and test sets respectively.
## 4 Conclusion
| Model | Dev | Test |
|------------|---------|--------|
| AraT5-base | 6.7±0.1 | 13.5 |
| AT5B | 8.1±0.1 | 17.0 |
| AraMUS | 8.6±0.1 | 17.4 |
In this paper, we introduced AraMUS which is not only the largest Arabic PLM in terms of pretraining data and model size, but also the first multibillion Arabic PLM to be extensively evaluated on a wide range of NLP tasks. Since our work gives clues on the benefits and limitations of scaling up data and model sizes, we hope that it will pave the way for the Arabic NLP community to focus on problems that are beyond the reach of PLM scaling.
## Limitations
While our model shows state-of-the-art results on many discriminative and generative tasks, we can think of the following main caveats of our work.
First, the number of generative tasks that we evaluate on is relatively small especially when consider that AraMUS is text-to-text encoder-decoder model. This is mainly because of the rarity of Arabic generative datasets that are at the same time well-established and open-source. Second, it would be important to study how end-tasks performances is impacted when ablating the model size (e.g. 1-6 billion parameters models), pretraining data quantity or/and quality.
## References
Muhammad Abdul-Mageed, AbdelRahim Elmadany, and El Moatez Billah Nagoudi. 2021. ARBERT &
MARBERT: Deep bidirectional transformers for Arabic. In *Proceedings of the 59th Annual Meeting of the* Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7088–7105, Online. Association for Computational Linguistics.
Wissam Antoun, Fady Baly, and Hazem Hajj. 2020.
Arabert: Transformer-based model for arabic language understanding. In *LREC 2020 Workshop Language Resources and Evaluation Conference 11–16* May 2020, page 9.
Wissam Antoun, Fady Baly, and Hazem Hajj. 2021a.
AraELECTRA: Pre-training text discriminators for Arabic language understanding. In *Proceedings of* the Sixth Arabic Natural Language Processing Workshop, pages 191–195, Kyiv, Ukraine (Virtual). Association for Computational Linguistics.
Wissam Antoun, Fady Baly, and Hazem Hajj. 2021b.
Aragpt2: Pre-trained transformer for arabic language generation. In *Proceedings of the Sixth Arabic Natural Language Processing Workshop*, pages 196–207.
Payal Bajaj, Chenyan Xiong, Guolin Ke, Xiaodong Liu, Di He, Saurabh Tiwary, Tie-Yan Liu, Paul Bennett, Xia Song, and Jianfeng Gao. 2022. Metro: Efficient denoising pretraining of large scale autoencoding language models with model generated signals. *arXiv* preprint arXiv:2204.06644.
Emily M Bender, Timnit Gebru, Angelina McMillanMajor, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big?. In *Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency*,
pages 610–623.
Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx,
Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. 2021. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. *arXiv preprint* arXiv:2204.02311.
Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators.
arXiv preprint arXiv:2003.10555.
Robert Dale. 2021. Gpt-3: What's it good for? *Natural* Language Engineering, 27(1).
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.
Nan Du, Yanping Huang, Andrew M. Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, Barret Zoph, Liam Fedus, Maarten Bosma, Zongwei Zhou, Tao Wang, Yu Emma Wang, Kellie Webster, Marie Pellat, Kevin Robinson, Kathy MeierHellstern, Toju Duke, Lucas Dixon, Kun Zhang, Quoc V Le, Yonghui Wu, Zhifeng Chen, and Claire Cui. 2021. Glam: Efficient scaling of language models with mixture-of-experts.
Moussa Kamal Eddine, Nadi Tomeh, Nizar Habash, Joseph Le Roux, and Michalis Vazirgiannis. 2022.
Arabart: a pretrained arabic sequence-to-sequence model for abstractive summarization. *arXiv preprint* arXiv:2203.10945.
Mahmoud El-Haj, Udo Kruschwitz, and Chris Fox.
2010. Using mechanical turk to create a corpus of arabic summaries.
Ibrahim Abu El-Khair. 2016. 1.5 billion words Arabic Corpus. *arXiv preprint arXiv:1611.04033*.
AbdelRahim Elmadany, El Moatez Billah Nagoudi, and Muhammad Abdul-Mageed. 2022. Orca: A challenging benchmark for arabic language understanding.
arXiv preprint arXiv:2212.10758.
Abbas Ghaddar, Yimeng Wu, Sunyam Bagga, Ahmad Rashid, Khalil Bibi, Mehdi Rezagholizadeh, Chao Xing, Yasheng Wang, Duan Xinyu, Zhefeng Wang, Baoxing Huai, Xin Jiang, Qun Liu, and Phillippe Langlais. 2022. Revisiting Pre-trained Language Models and their Evaluation for Arabic Natural Language Understanding. In *Proceedings of the 2022* Conference on Empirical Methods in Natural Language Processing, pages 3135–3151, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Imane Guellil, Houda Saâdane, Faical Azouaou, Billel Gueni, and Damien Nouvel. 2021. Arabic natural language processing: An overview. *Journal of King* Saud University-Computer and Information Sciences, 33(5):497–507.
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. 2022. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556.
Go Inoue, Bashar Alhafni, Nurpeiis Baimukan, Houda Bouamor, and Nizar Habash. 2021. The interplay of variant, size, and task type in arabic pre-trained language models. In Proceedings of the Sixth Arabic Natural Language Processing Workshop, WANLP
2021, Kyiv, Ukraine (Virtual), April 9, 2021, pages 92–104. Association for Computational Linguistics.
Taku Kudo and John Richardson. 2018. Sentencepiece:
A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66–71.
Faisal Ladhak, Esin Durmus, Claire Cardie, and Kathleen McKeown. 2020. WikiLingua: A new benchmark dataset for cross-lingual abstractive summarization. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 4034–4048, Online. Association for Computational Linguistics.
Imad Lakim, Ebtesam Almazrouei, Ibrahim Abualhaol, Merouane Debbah, and Julien Launay. 2022. A holistic assessment of the carbon footprint of noor, a very large arabic language model. In Proceedings of BigScience Episode\*\# 5–Workshop on Challenges & Perspectives in Creating Large Language Models*, pages 84–94.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension.
arXiv preprint arXiv:1910.13461.
Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, and Hao Wu. 2018. Mixed precision training. In In International Conference on Learning Representations.
El Moatez Billah Nagoudi, Muhammad Abdul-Mageed, AbdelRahim Elmadany, Alcides Alcoba Inciarte, and Md Tawkat Islam Khondaker. 2022a. Jasmine: Arabic gpt models for few-shot learning. arXiv preprint arXiv:2212.10755.
El Moatez Billah Nagoudi, AbdelRahim Elmadany, and Muhammad Abdul-Mageed. 2022b. AraT5: Textto-text transformers for Arabic language generation.
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 628–647, Dublin, Ireland.
Association for Computational Linguistics.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al.
2022. Training language models to follow instructions with human feedback. *arXiv preprint* arXiv:2203.02155.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311–318.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. *Advances* in neural information processing systems, 32:8026–
8037.
Xiaoye Qu, Yingjie Gu, Qingrong Xia, Zechang Li, Zhefeng Wang, and Baoxing Huai. 2023. A survey on arabic named entity recognition: Past, recent advances, and future trends. *arXiv preprint* arXiv:2302.03512.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI
blog, 1(8):9.
J Rae, G Irving, and L Weidinger. 2021a. Language modelling at scale: Gopher, ethical considerations, and retrieval. *DeepMind Blog*.
Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. 2021b. Scaling language models:
Methods, analysis & insights from training gopher.
arXiv preprint arXiv:2112.11446.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. *arXiv preprint arXiv:1910.10683*.
Ali Safaya, Moutasem Abdullatif, and Deniz Yuret.
2020. Kuisail at semeval-2020 task 12: Bert-cnn for offensive speech identification in social media. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 2054–2059.
Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman ´
Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. 2022. Bloom: A 176bparameter open-access multilingual language model.
arXiv preprint arXiv:2211.05100.
Haitham Seelawi, Ibraheem Tuffaha, Mahmoud Gzawi, Wael Farhan, Bashar Talafha, Riham Badawi, Zyad Sober, Oday Al-Dweik, Abed Alhakim Freihat, and Hussein Al-Natsheh. 2021. Alue: Arabic language understanding evaluation. In Proceedings of the Sixth Arabic Natural Language Processing Workshop, pages 173–184.
Noam Shazeer and Mitchell Stern. 2018. Adafactor:
Adaptive learning rates with sublinear memory cost.
Seongjin Shin, Sang-Woo Lee, Hwijeen Ahn, Sungdong Kim, HyoungSeok Kim, Boseop Kim, Kyunghyun Cho, Gichang Lee, Woomyoung Park, Jung-Woo Ha, et al. 2022. On the effect of pretraining corpora on in-context learning by a large-scale language model. arXiv preprint arXiv:2204.13509.
Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. 2019. Megatron-lm: Training multi-billion parameter language models using model parallelism. arXiv preprint arXiv:1909.08053.
Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, et al. 2022. Using deepspeed and megatron to train megatron-turing nlg 530b, a large-scale generative language model. arXiv preprint arXiv:2201.11990.
Yu Sun, Shuohuan Wang, Shikun Feng, Siyu Ding, Chao Pang, Junyuan Shang, Jiaxiang Liu, Xuyi Chen, Yanbin Zhao, Yuxiang Lu, et al. 2021. Ernie 3.0:
Large-scale knowledge enhanced pre-training for language understanding and generation. *arXiv preprint* arXiv:2107.02137.
Teo Susnjak. 2022. Chatgpt: The end of online exam integrity? *arXiv preprint arXiv:2212.09292*.
Alex Tamkin, Miles Brundage, Jack Clark, and Deep Ganguli. 2021. Understanding the capabilities, limitations, and societal impact of large language models.
arXiv preprint arXiv:2102.02503.
Yi Tay, Mostafa Dehghani, Vinh Q Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, and Donald Metzler. 2022. Unifying language learning paradigms. *arXiv preprint* arXiv:2205.05131.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. Glue:
A multi-task benchmark and analysis platform for natural language understanding. In *Proceedings of* the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355.
Laura Weidinger, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, Mia Glaese, Borja Balle, Atoosa Kasirzadeh, et al. 2021. Ethical and social risks of harm from language models. *arXiv preprint arXiv:2112.04359*.
Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mt5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 483–498.
Wei Zeng, Xiaozhe Ren, Teng Su, Hui Wang, Yi Liao, Zhiwei Wang, Xin Jiang, ZhenZhang Yang, Kaisheng Wang, Xiaoda Zhang, et al. 2021. Pangu-α: Largescale autoregressive pretrained chinese language models with auto-parallel computation. arXiv preprint arXiv:2104.12369.
Imad Zeroual, Dirk Goldhahn, Thomas Eckart, and Abdelhak Lakhouaja. 2019. Osian: Open source international arabic news corpus-preparation and integration into the clarin-infrastructure. In Proceedings of the Fourth Arabic Natural Language Processing Workshop, pages 175–182.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022.
Opt: Open pre-trained transformer language models.
arXiv preprint arXiv:2205.01068.
Barret Zoph, Irwan Bello, Sameer Kumar, Nan Du, Yanping Huang, Jeff Dean, Noam Shazeer, and William Fedus. 2022. Designing effective sparse expert models. *arXiv preprint arXiv:2202.08906*.
## A Pretraining A.1 Data Collection
Our pre-training corpus is mainly sourced from the publicly available web scrapes of the Common Crawl (CC) project. We downloaded 90 shards of CC monthly data ranging from May 2013 (the earliest available) up to July 2022. Also, we use an *inhouse collection* of 47GB of Arabic dialect textual data (DIALECT) in order to enhance the awareness of our model to Arabic dialects (Abdul-Mageed et al., 2021). In addition, we include high-quality news corpora such as NEWS (Zeroual et al., 2019)
and El-KHAIR (El-Khair, 2016) which are commonly used in previous Arabic PLM works (Safaya et al., 2020; Antoun et al., 2020; Nagoudi et al.,
2022b; Ghaddar et al., 2022). Finally, we use 28GB of *in-house* Arabic data curated from different text genres like literature, books, and Wikipedia.
| Source | Original | Clean | Filtering % |
|----------|------------|---------|---------------|
| CC | 8.7TB | 439GB | 95% |
| DIALECT | - | 47GB | - |
| NEWS | 21GB | 14GB | 34% |
| EL-KHEIR | 16GB | 13GB | 19% |
| Others | 28GB | 16GB | 45% |
| Total | 8.8TB | 529GB | 94% |
Table 6: Size of the pre-training corpora before (Original) and after (Clean) applying data filtering and deduplication heuristics.
As it has been shown to be crucial for English (Raffel et al., 2019), multilingual (Xue et al.,
2021), and Arabic (Ghaddar et al., 2022) PLM
end-tasks performance, we aggressively filter and deduplicate the collected data using the heuristics described in (Ghaddar et al., 2022). Table 6 shows data sizes before and after applying the heuristics.
While we discard 95% of CC data, it is still considered, along with DIALECT, to form more than 90% of our 529GB final pre-training corpus.
## A.2 Implementation Details
We use the SentencePiece (Kudo and Richardson, 2018) tokenizer in order to process text into subtokens. We train the tokenizer from scratch on our pre-training corpus by setting the vocabulary size to 64k, a value which is used commonly by previous Arabic PLMs (Antoun et al., 2020; Ghaddar et al.,
2022; Nagoudi et al., 2022a).
Following (Raffel et al., 2019), we pre-train AraMUS on the *Replace corrupted spans* tasks with a random token probability of 15%. The pre-training code is based on the PyTorch (Paszke et al., 2019)
version of the Megatron-LM library (Shoeybi et al.,
2019). AraMUS is pre-trained on 16 sever, each occupied with 8 NVIDIA A100 GPUs with 80GB
memory. Model and data parallel sizes are set to 4 and 32 respectively. The total batch size is 4096, which is based on the max batch size which can fit on a single GPU (32). To speed up the pretraining, we use mixed-precision training (Micikevicius et al., 2018), except when calculating attention softmax and when reducing gradients. We use the Adafactor optimizer (Shazeer and Stern, 2018)
with an initial learning rate of 0.005, 10k warm-up steps with the inverse square-root scheduler.
## B Finetuning B.1 Datasets And Evaluation
ALUE (Seelawi et al., 2021) is a well-established benchmark that consists of a collection of eight Arabic NLU tasks. Although its datasets are relatively small compared to the one of the English GLUE (Wang et al., 2018) benchmark, but it is supported by a public leaderboard with hidden test sets which ensures a fair comparison between models.
Following (Seelawi et al., 2021), we report Pearson correlation on SVREG, Jaccard on SEC, and accuracy on XNLI, and use the F1 score otherwise.
We also report the unweighted average sum over the 8 tasks.
As for generative tasks, we follow (Ghaddar et al., 2022) by considering 3 tasks for evaluation, as their datasets are fully open source. We use Wikilingua (Ladhak et al., 2020) and EASC (El-Haj et al., 2010) for TS, and the set of datasets used in
(Abdul-Mageed et al., 2021; Nagoudi et al., 2022b)
for QA and QG. We follow (Ghaddar et al., 2022) for splitting the data into train/dev/test, and report Rouge scores (Lin, 2004) on TS, BLEU (Papineni et al., 2002) on QG, and Exact Match (EM) and F1 score on QA. Therefore, AraMUS results can be directly comparable with the baselines reported by (Ghaddar et al., 2022).
B.2 Baseline We compared AraMUS with the state-of-the-art Arabic PLMs that have been evaluated on publicly available datasets, these include:
- **ARBERT and MARBERT** are respectively MSA and Arabic Dialect BERT-base (Devlin et al., 2018) models provided by (AbdulMageed et al., 2021).
- **JABER and SABER** are respectively BERTbase and BERT-large models provided by
(Ghaddar et al., 2022).
2890
- **ALM-1.0** 4is a recently published Arabic BERT-large model.
- **AraT5-base and AT5B** are Arabic T5base (Raffel et al., 2019) models provided by
(Nagoudi et al., 2022b) and (Ghaddar et al.,
2022) respectively.
It is worth mentioning that it was not possible to compare AraMUS with its counterpart multi-billion Arabic GPT models because:
## B.2.1 Noor
NOOR (Lakim et al., 2022) is the largest existing Arabic PLM with 10B parameters. In their work, the authors didn't make their model publicly available neither reported their results on public datasets.
## B.2.2 Aragpt-Mega
AraGPT-Mega (Antoun et al., 2021b) has 1.5B parameters and is publicly available for download.
However, we tried to run *in-house* experiments with this model but it didn't perform well on many tasks. Most likely because it was only pre-trained on 27GB of Arabic text, which is considered small compared to the model size. Therefore, we preferred not to report weak results for this model.
B.2.3 Jasmine Jasmine (Nagoudi et al., 2022a) is an *in-progress* project that aims to develop and evaluate a set of Arabic GPT models up to 13B parameters. This inprogress work was released at the time of writing our paper. The authors mentioned that the 13B
model is still at early pre-training stage, while the 6.7B version is only pre-trained for 143k steps.
Therefore, their *fully pre-trained* Jasmine has 2.7B
parameters only. This model is evaluated, in a few shot setting only, on a set of discriminative and generative tasks on the ARLUE (Abdul-Mageed et al., 2021) and ARGEN (Nagoudi et al., 2022b)
benchmarks respectively. However, many of the datasets in ARLUE and ARGEN have not been publicized yet (Elmadany et al., 2022; Ghaddar et al., 2022). In addition, the authors didn't open source their model weights nor shared their code to replicate their dataset splits.
## B.3 Implementation Details
We used early stopping based on the performance of the dev sets during our extensive hyperparameter search. We search the learning rate from 4https://github.com/FlagAI-Open/FlagAI/tree/
master/examples/ALM
the set of {5e-5, 1e-4, 2e-4, 1e-3}, batch size from
{8, 16, 32, 64}, the learning rate scheduler from
{constant, cosine}, and the dropout rate from {0.1, 0.15, 0.2, 0.3}, and fixed the epoch number to a maximum of 120 for all the experiments. Each finetuning experiment uses 4 NVIDIA A100 GPUs, with the model parallel size set to 4.
After finding the best hyper-parameters, we ran all the experiments 5 times and reported the average score on the dev sets 5, in order to validate the credibility of our results. For each ALUE task, we selected the best-performing model among the 5 runs and used it for the ALUE leaderboard test submission, and we computed the scores on generative tasks datasets.
We simulate a few-shot setting on the ALUE
tasks by randomly sampling a subset of {8, 16, 32, 64, 128, 256} examples of the training data. When the number of classes is more than the number of samples (e.g. MDD and SEC with 8 examples) we randomly add one example for each missing class in order to ensure that each class has a represented data point. All models are identically fine-tuned, and we report the average and standard deviation of 5 randomly selected folds.
## C Few-Shot Results
| Model | MQ2Q* | MDD | SVREG | SEC | FID | OOLD | XNLI | OHSD | Avg. |
|--------------------|-----------|----------|-----------|----------|-----------|-----------|-----------|----------|----------|
| 8 Examples JABER | 50.0±15.8 | 8.9±1.8 | 18.8±17.5 | 21.7±0.2 | 56.7±13.5 | 56.5±7.9 | 35.7±2.3 | 54.9±5.9 | 37.9±8.1 |
| SABER | 53.5±6.9 | 8.8±1.4 | 34.2±09.6 | 21.1±0.8 | 63.0±11.2 | 65.3±12.6 | 35.5±1.9 | 58.1±7.3 | 42.4±6.5 |
| AraMUS | 60.2±3.7 | 16.7±1.8 | 54.5±8.7 | 23.2±3.5 | 69.0±2.8 | 69.5±1.6 | 35.8±1.1 | 58.3±7.7 | 48.4±3.9 |
| 16 Examples JABER | 56.2±14.5 | 7.9±1.1 | 45.2±16.1 | 24.3±3.0 | 69.9±5.6 | 68.0±12.5 | 37.0±3.4 | 53.0±5.7 | 45.2±7.7 |
| SABER | 54.6±8.2 | 9.0±2.1 | 47.7±16.7 | 21.6±1.9 | 73.0±2.8 | 80.3±7.9 | 35.8±2.3 | 57.7±8.0 | 47.5±6.2 |
| AraMUS | 61.4±4.7 | 20.4±1.9 | 66.6±5.6 | 25.5±4.8 | 74.3±1.2 | 82.3±1.7 | 39.1±4.9 | 59.1±7.5 | 53.6±4.0 |
| 32 Examples JABER | 66.9±3.3 | 8.0±1.8 | 63.7±11.7 | 27.0±3.3 | 72.1±3.9 | 71.7±5.9 | 38.7±2.9 | 57.7±7.7 | 50.7±5.1 |
| SABER | 63.3±6.6 | 9.8±2.3 | 72.3±9.3 | 28.7±4.1 | 74.5±1.4 | 81.2±9.3 | 37.4±1.4 | 54.6±7.2 | 52.7±5.2 |
| AraMUS | 69.2±4.3 | 21.3±1.1 | 74.5±3.6 | 28.0±5.0 | 74.8±2.6 | 85.5±2.1 | 45.3±3.7 | 59.3±6.9 | 57.2±3.7 |
| 64 Examples JABER | 68.6±3.5 | 11.0±1.9 | 72.6±7.8 | 31.5±1.5 | 73.7±0.8 | 77.0±2.7 | 42.4±2.2 | 58.8±8.4 | 54.4±3.6 |
| SABER | 67.8±2.8 | 12.8±1.9 | 79.6±3.3 | 34.8±1.7 | 77.2±1.4 | 87.0±2.1 | 39.6±4.2 | 61.4±7.4 | 57.5±3.1 |
| AraMUS | 74.8±1.8 | 22.3±1.0 | 81.8±3.7 | 31.5±1.8 | 77.7±0.7 | 89.6±1.5 | 55.5±3.6 | 64.0±8.7 | 62.2±2.8 |
| 128 Examples JABER | 70.0±1.5 | 16.9±0.6 | 80.5±1.3 | 35.3±1.8 | 76.4±1.1 | 82.4±2.8 | 44.6±1.0 | 64.2±4.0 | 58.8±1.8 |
| SABER | 72.1±0.9 | 18.9±2.0 | 83.6±2.0 | 39.5±2.8 | 78.3±1.3 | 88.7±1.4 | 44.8±4.0 | 66.8±4.0 | 61.6±2.3 |
| AraMUS | 77.5±1.1 | 25.7±1.7 | 84.1±0.9 | 37.0±1.4 | 78.6±0.5 | 90.4±0.9 | 63.6±1.5 | 67.8±4.1 | 65.6±1.5 |
| 256 Examples JABER | 72.7±1.0 | 22.4±0.6 | 83.7±0.7 | 39.3±0.8 | 79.0±1.1 | 84.9±1.0 | 53.1±2.2 | 62.5±6.2 | 62.2±1.7 |
| SABER | 72.8±1.7 | 25.5±1.9 | 85.0±1.3 | 42.2±0.5 | 79.8±1.2 | 89.6±0.7 | 48.0±13.5 | 70.6±1.3 | 64.2±2.8 |
| AraMUS | 78.1±1.2 | 30.2±0.8 | 86.3±1.3 | 41.1±0.7 | 80.8±1.7 | 92.3±0.9 | 72.6±0.7 | 71.2±3.4 | 69.1±1.3 |
Table 7: Dev ALUE performances across training set sizes. Underline figures indicates extra samples where added to ensure that each class is represented at least by one data point.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
5
✗ A2. Did you discuss any potential risks of your work?
There is no risks of our work.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** 3.4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
A.2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
A.2
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
3.4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
A.2 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
white-etal-2023-leveraging | Leveraging Explicit Procedural Instructions for Data-Efficient Action Prediction | https://aclanthology.org/2023.findings-acl.182 | Task-oriented dialogues often require agents to enact complex, multi-step procedures in order to meet user requests. While large language models have found success automating these dialogues in constrained environments, their widespread deployment is limited by the substantial quantities of task-specific data required for training. The following paper presents a data-efficient solution to constructing dialogue systems, leveraging explicit instructions derived from agent guidelines, such as company policies or customer service manuals. Our proposed Knowledge-Augmented Dialogue System (KADS) combines a large language model with a knowledge retrieval module that pulls documents outlining relevant procedures from a predefined set of policies, given a user-agent interaction. To train this system, we introduce a semi-supervised pre-training scheme that employs dialogue-document matching and action-oriented masked language modeling with partial parameter freezing. We evaluate the effectiveness of our approach on prominent task-oriented dialogue datasets, Action-Based Conversations Dataset and Schema-Guided Dialogue, for two dialogue tasks: action state tracking and workflow discovery. Our results demonstrate that procedural knowledge augmentation improves accuracy predicting in- and out-of-distribution actions while preserving high performance in settings with low or sparse data. | # Leveraging Explicit Procedural Instructions For Data-Efficient Action Prediction
Julia White and **Arushi Raghuvanshi** and **Yada Pruksachatkun**
Infinitus Systems, Inc.
{julia.white,arushi,yada.pruksachatkun}@infinitus.ai
## Abstract
Task-oriented dialogues often require agents to enact complex, multi-step procedures in order to meet user requests. While large language models have found success automating these dialogues in constrained environments, their widespread deployment is limited by the substantial quantities of task-specific data required for training. The following paper presents a data-efficient solution to constructing dialogue systems, leveraging explicit instructions derived from agent guidelines, such as company policies or customer service manuals. Our proposed Knowledge-Augmented Dialogue System (KADS) combines a large language model with a knowledge retrieval module that pulls documents outlining relevant procedures from a predefined set of policies, given a user-agent interaction. To train this system, we introduce a semi-supervised pre-training scheme that employs dialogue-document matching and actionoriented masked language modeling with partial parameter freezing. We evaluate the effectiveness of our approach on prominent taskoriented dialogue datasets, Action-Based Conversations Dataset and Schema-Guided Dialogue, for two dialogue tasks: action state tracking and workflow discovery. Our results demonstrate that procedural knowledge augmentation improves accuracy predicting inand out-of-distribution actions while preserving high performance in settings with low or sparse data.
## 1 Introduction
For many real-world applications, it is crucial for task-oriented dialogue (TOD) systems to complete user requests while strictly adhering to established procedures. For example, consider a customer service agent who must first verify a client's details before changing their password. Although large language models have demonstrated potential in modeling such dialogues, they require large
![0_image_0.png](0_image_0.png)
Figure 1: The Knowledge-Augmented Dialogue System
(KADS) is composed of two modules: a knowledge retriever and a language model. The knowledge retriever takes the inner product as a measure of similarity between an embedded dialogue and each document in a provided knowledge base containing procedural instructions. The most similar document is then passed to a language model which attends over both the dialogue and retrieved document to generate the agent's next action.
amounts of data with consistent procedural representations to *implicitly* store procedures in the parameters of their underlying networks. In practical settings, such high-quality data is not always readily available as some procedures may naturally occur infrequently or change over time. In this paper, we explore a solution to TOD modeling which improves performance in low-data settings by referencing *explicitly* stored agent guidelines.
We outline a methodology of incorporating procedural knowledge (i.e., knowledge concerning the requisite steps to address a user inquiry) into a language model with the objective of predicting agent actions in dialogue tasks. Our proposed system, the Knowledge-Augmented Dialogue System
(KADS), consists of two modules: a knowledge retriever which, given a dialogue between an agent and user, retrieves the most pertinent instructions 2895 from a knowledge base of agent procedures and a language model which considers the retrieved instructions along with the ongoing dialogue to inform an action prediction (see architecture in Figure 1).
In prior work, retrieval-enhanced language models have achieved success integrating external knowledge from internet searches into conversational agents (Shuster et al., 2022; Thoppilan et al.,
2022). However, a more controllable approach is necessary for instruction retrieval in task-oriented dialogue. Rather than querying the open web, it's more suitable to perform retrieval over a closed set of documents, like in (Guu et al., 2020; Lewis et al.,
2020). However, while the training schemes utilized in these works sufficiently prime a model for question-answering tasks, they are not as effective for action prediction.
Following the lines of (Henderson and Vulic´,
2021), which introduces a unique pre-training objective for slot-labeling, our method leverages custom objectives suited for action prediction tasks.
We employ a specialized warm-up task where dialogues are matched with corresponding procedural instructions to ensure that the knowledge retrieval module is initialized with reasonable dialogue and document embeddings. Then, the system is trained on an special case of masked language modeling in which masked actions are predicted from customeragent dialogues. Finally, we found it necessary to encourage our system to incorporate signal from retrieved procedures by routinely freezing the language model's weights during training.
We evaluated this approach on two dialogue tasks— action state tracking and workflow discovery— using two task-oriented dialogue datasets: Action-Based Conversations Dataset and Schema-Guided Dialogue. Our results suggest that KADS yields improved action prediction accuracy against several baselines, including an unaugmented language model and a language model augmented with static guidelines, on both in- and out-of-distribution procedures. Furthermore, we demonstrate that knowledge augmentation bolsters our system's ability to predict actions that occur infrequently in the training data.
## 2 Dialogue Tasks
TOD systems are employed for a variety of tasks including action state tracking and workflow discovery.
Action state tracking (AST) aims to predict the next action performed by an agent during an interaction with a customer (Chen et al., 2021).
Formally, we represent an interaction as a sequence of turns x belonging to one of three categories: agent utterances x a([agent]), agent actions x b([action]), or customer utterances x c
([customer]). The model receives an interaction between a customer and agent up to turn t where prefix tokens p indicate the turn category: X = p0 x0 p1 x1 *... p*t xt with p ∈ [agent], [action], [customer]. See Appendix B for an example. The model then predicts the following agent action x b t+1 which consists of a button, or b-slot, and any corresponding slot values if they are present: x b t = b 0 t: v 00 t, v01 t
.
The goal of **workflow discovery (WD)** is to recover the workflow— the set of ordered actions taken by an agent— given a complete dialogue between a customer and agent (Hattami et al., 2022).
Formally, we represent a dialogue as a sequence of turns belonging to one of two categories: agent utterances or customer utterances. The model receives a dialogue of length T between a customer and agent where prefix tokens indicate the turn category: X = p0 x0 p1 x1 *... p*T xT with p ∈
[agent], [customer]. The model then predicts the corresponding agent actions x b0
; x b1
; ...; x b T
.
## 3 Approach 3.1 Architecture
The end goal of KADS is to learn a distribution p(y|X) over possible action sequences y given an interaction or dialogue X. Our approach utilizes a knowledge retriever module to produce a relevance score between a given procedural document z and X. We calculate the relevance score according to (Devlin et al., 2019) as the inner product of the BERT vector embeddings of X and z. A retrieval distribution p(z|X) is obtained by taking the softmax over the relevance scores corresponding to each available document and the given interaction or dialogue. Finally, we train a T5 language model (Raffel et al., 2020), conditioned on both the retrieved document z and the interaction X,
to generate an action sequence y, where the likelihood of generating y is obtained by treating z as a latent variable and marginalizing over all possible documents: p(y|X) = Pz∈Z
p(y|*X, z*)p(z|X).
2896
## 3.2 Training
To train KADS we follow a three-step procedure:
first, we warm-up the knowledge retriever's embedding modules with a dialogue-document matching task; then, we pre-train the full model with actionoriented masked language modeling (MLM); finally, we train on one of two downstream dialogue tasks— AST or WD. For all tasks except dialoguedocument matching, our training objective is to maximize the log-likelihood logp(y|X) of the correct output action sequence y. However, calculating the marginal probability over documents in a knowledge corpus can become costly as the number of documents grows, so we approximate this probability by summing over the top 5 documents with the highest probability under p(z|X). We then compute the gradient of the log-likelihood with respect to the model parameters of both the knowledge retriever and language model and optimize using stochastic gradient descent.
We first perform the dialogue-document matching warm-up routine to ensure that the knowledge retriever is initialized with reasonable dialogue and document embeddings. The embedding modules are pre-trained using a semi-supervised training procedure with the objective of retrieving the document that most likely corresponds to a specific dialogue. This label is determined according to which document has the highest action overlap with the dialogue or, when provided, which document corresponds to the user's ground-truth intent.
For the MLM pre-training task, we randomly mask action sequences from dialogue transcripts such that the system learns to retrieve relevant documents in order to better predict the actions corresponding to each [MASK] token. To prevent KADS
from learning to ignore retrieved documents we employ several tricks during MLM training. First, we filter out dialogues with action sequences that are not detailed in the agent guidelines. This is done to ensure that only examples in which the knowledge retriever may be useful are present. Additionally, we freeze the language model weights with 0.9 probability to encourage updates to the knowledge retriever parameters which minimize the MLM loss.
## 4 Data
We evaluate KADS on two TOD datasets: ActionBased Conversations Dataset and Schema-Guided Dialogue. Both consist of multi-domain customer service interactions that loosely follow a set of predefined company policies which specify the actions to be taken by an agent to satisfy a particular customer inquiry. The core differences between these two datasets are their action and document structures.
In **Action-Based Conversations Dataset**
(ABCD) (Chen et al., 2021), actions are composed such that the b-slot belongs to a predefined set of b-slots which describe the action being taken
(e.g., "pull up account") and slot values consist of any corresponding information provided by the user (e.g., "[email protected]"). In a given interaction, an average of 4 actions are taken. The documents provided within ABCD are composed of a plain text description of a possible customer inquiry followed by an ordered set of action b-slots that should be performed by the agent.
In **Schema-Guided Dialogue (SGD)** (Rastogi et al., 2020), we take action b-slots to be the description of how the agent will interact with a piece of information (e.g., "inform", "confirm", or "request") and values as the type of information in question (e.g., "departure times"). In this dataset, the average number of actions per interaction is significantly longer at 21 actions, and the documents corresponding to SGD consist of a customer inquiry followed by all of the information types, or values, that can be acquired to fulfill the given inquiry.
We use the train/dev/test splits presented in the original datasets (8034/1004/1004 and 16142/2482/4201 interactions per split for ABCD
and SGD respectively), and hold out a randomlyselected subset of 10% of actions during training for out-of-distribution testing. See Appendix B for more details, including dialogue and corresponding document examples.
## 5 Results
The evaluation of our TOD system begins with bslot and value prediction accuracy for both known and novel actions. We also examine the data efficiency of our approach by reporting these metrics for progressively reduced training pools. We compare our model's performance against a base T5 model and T5 with static guidelines— a comprehensive list of agent actions— appended to the input sequence (T5 + guide)1. Then, we assess
| AST | WD | | | | | | | |
|------------|-------|--------|-------|--------|-------|--------|-------|------|
| Model | ABCD | SGD | ABCD | SGD | | | | |
| B-Slot | Value | B-Slot | Value | B-Slot | Value | B-Slot | Value | |
| T5 | 79.5 | 82.2 | 51.8 | 31.6 | 65.9 | 66.8 | 58.7 | 28.3 |
| T5 + guide | 81.3 | 82.5 | NA | NA | 56.8 | 58.4 | NA | NA |
| KADS | 85.2 | 83.1 | 63.2 | 39.5 | 72.5 | 73.0 | 53.1 | 23.8 |
the efficacy of our knowledge retriever in selecting relevant documents. Finally, an ablation study of our pre-training routine highlights the importance of our custom training procedure. See Appendix A for details of our experimental setup.
## 5.1 In-Distribution Performance
We first observe b-slot and value prediction accuracy on procedures observed during training (Table 1).
On ABCD, KADS achieves higher b-slot prediction accuracy than our baselines for both tasks.
The inclusion of a static guideline offers slightly improved accuracy on AST but is not nearly as effective as the dynamic guide provided by the knowledge retriever. We attribute the performance boost in part to KADS's ability to predict actions that are less represented during training.
This characteristic is evidenced by the model's performance in low-data settings (Figure 2). We observe that the difference in action prediction accuracy between our model and the unaugmented baseline increases when training on progressively fewer dialogues. Additionally, we find that, for the base and static guide models, the correlation between a b-slot's level of occurrence in the training data and the model's accuracy in predicting that bslot is notably higher (0.27 and 0.24 respectively)
than in the knowledge-augmented model (0.18).
We conclude from these results that KADS is more robust to low-data settings where the quantity of individual action occurrences is low or inconsistent.
2 On SGD, we see similar trends for the AST task.
However, for the WD task, which concerns recovering the entire action sequence from a dialogue at once, we see that knowledge augmentation does not
![3_image_0.png](3_image_0.png)
provide substantial improvement in performance.
This may be due to the nature of SGD dialogues, which contain multiple client requests, while the model is augmented with a singular document providing instructions for a singular customer request.
## 5.2 Out-Of-Distribution Performance
Next, we evaluate the ability of KADS to generalize to novel procedures by assessing performance on actions not seen during training (Table 2).
Both tasks, AST and WD, show knowledge augmentation to improve novel b-slot prediction accuracy over the baselines, coming only second to T5 trained on the full dataset ("full data") including
"out-of-distribution" actions. These results demonstrate that KADS is able to relatively accurately predict new actions in a zero-shot fashion by making use of documents containing information about the action.
## 5.3 Document Selection Accuracy
We use document selection accuracy to assess how well our knowledge retriever selects documents that correspond to a customer's inquiry. On ABCD,
we define the correct document as the document
| Dataset | Accuracy | | |
|-----------|------------|------|------|
| DDM | MLM | AST | |
| ABCD | 98.0 | 82.3 | 85.4 |
| SGD | 66.2 | 74.9 | 63.9 |
with the most action b-slots overlapping with the full customer-agent interaction. On SGD, where calls often consist of multiple customer inquiries, the correct document is instead defined as the document corresponding to the labeled customer intent for any given step of the interaction. In Table 3, we see that approximate document selection accuracy for ABCD is near 90% while SGD is only slightly above 50%. This is likely due to the significant overlap in procedures for similar customer inquiries on the latter dataset. For example, making an appointment with a doctor, dentist, or hairstylist requires similar values to be filled, which results in related documents being somewhat interchangeable for these inquiries.
Furthermore, we measure document selection accuracy on our pre-training tasks (Table 3): dialoguedocument matching and MLM. Notably, the knowledge retriever's document selection accuracy decreases between pre-training with the dialoguedocument matching task and fine-tuning on the final task. This is likely due to the objective changing from maximizing document selection accuracy to predicting correct action sequences, resulting in some drift from the selection of approximated
"correct" documents.
## 5.4 Pre-Training Scheme Ablations
Our full training scheme is a multi-step process ensuring optimal performance from our KnowledgeAugmented Dialogue System. First, the knowledge retrieval module is tuned on a dialogue-document matching task to ensure that the model is initialized with sensible dialogue and document embeddings.
Next, the full system is trained on an MLM task which acts as the simpler in-between before our final task. Finally, we train the model for one of our two downstream dialogue tasks. Removing any step from this procedure results in decreased performance on the final task. In Table 4, we share b-slot and value prediction accuracy on AST after pretraining with several ablations of our full scheme.
These results show that the elimination of either the dialogue-document matching or MLM task results in lower accuracy. These tasks, which allow our model to effectively harness the knowledge retrieval module, are crucial to our pre-training procedure.
| Model | B-Slot | Value |
|----------|----------|---------|
| none | 82.7 | 79.4 |
| MLM only | 81.5 | 79.0 |
| DDM only | 82.6 | 78.5 |
| full | 85.2 | 83.1 |
## 6 Conclusion
While large language models make for effective TOD systems in constrained settings, real-world applications often present insufficient data to train these models. KADS offers a method of learning workflows with minimal or sparse supporting data and presents a more controllable and performant solution to low-resource TOD automation. While our results offer a promising outlook for action prediction given dynamic guidance from structured procedural documents, future work should investigate the use of unstructured company guidelines and multi-document retrieval.
## 7 Limitations
Our paper assesses procedural knowledge augmentation using a limited number of highly structured instructional documents. Naturally, the results presented may vary for unstructured guidelines. Additionally, due to the limited size of publicly available TOD datasets, we have not tested how our method may scale to settings with larger document
| Model | ABCD | SGD | | |
|------------|--------|--------|-------|------|
| B-Slot | Value | B-Slot | Value | |
| T5 | 0.0 | 11.6 | 46.2 | 25.8 |
| T5 + guide | 2.7 | 17.9 | NA | NA |
| KADS | 11.6 | 21.4 | 49.8 | 32.2 |
| full data | 94.6 | 85.7 | 61.3 | 38.1 |
spaces (> 100 documents). For larger document sets, more efficient methods of computing similarity such as Maximum Inner Product Search (MIPS)
algorithms may be necessary to approximate documents with the highest relevance scores.
## References
Derek Chen, Howard Chen, Yi Yang, Alexander Lin, and Zhou Yu. 2021. Action-based conversations dataset: A corpus for building more in-depth taskoriented dialogue systems. In *Proceedings of the* 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3002–3017, Online. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. Realm: Retrievalaugmented language model pre-training. In Proceedings of the 37th International Conference on Machine Learning, ICML'20. JMLR.org.
Amine El Hattami, Stefania Raimondo, Issam Laradji, David Vazquez, Pau Rodriguez, and Chris Pal. 2022.
Workflow discovery from dialogues in the low data regime.
Matthew Henderson and Ivan Vulic. 2021. ´ Convex:
Data-efficient and few-shot slot labeling.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020.
Retrieval-augmented generation for knowledgeintensive nlp tasks. In *Proceedings of the 34th International Conference on Neural Information Processing Systems*, NIPS'20, Red Hook, NY, USA. Curran Associates Inc.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67.
Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2020. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. *Proceedings of the* AAAI Conference on Artificial Intelligence, 34:8689–
8696.
Kurt Shuster, Jing Xu, Mojtaba Komeili, Da Ju, Eric Michael Smith, Stephen Roller, Megan Ung, Moya Chen, Kushal Arora, Joshua Lane, Morteza Behrooz, William Ngan, Spencer Poff, Naman Goyal, Arthur Szlam, Y-Lan Boureau, Melanie Kambadur, and Jason Weston. 2022. Blenderbot 3: a deployed conversational agent that continually learns to responsibly engage.
Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, YaGuang Li, Hongrae Lee, Huaixiu Steven Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Vincent Zhao, Yanqi Zhou, ChungChing Chang, Igor Krivokon, Will Rusch, Marc Pickett, Pranesh Srinivasan, Laichee Man, Kathleen Meier-Hellstern, Meredith Ringel Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Soraker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Olson, Alejandra Molina, Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravi Rajakumar, Alena Butryna, Matthew Lamm, Viktoriya Kuzmina, Joe Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise AgueraArcas, Claire Cui, Marian Croak, Ed Chi, and Quoc Le. 2022. Lamda: Language models for dialog applications.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers:
State-of-the-art natural language processing. *CoRR*,
abs/1910.03771.
## Abcd
Input [agent] hello! how can i help you today? [customer] i'm thinking about buying an item but first i would like to get some more info on the product [agent] sure. i can help you with that. what item are you looking for more information on? [customer] the tommy hilifiger shirt [agent] and what would you like to know about it? [customer] i would like to know how long is the arm length [agent] sure give me one second and i can find that out for you [customer] ok [action] search faq Output search shirt Document get shirt info [SEP] search faq; search shirt; select faq
| Input | [customer] i am interested to know how the weather is going to be on 7th of march in san diego. |
|----------|-------------------------------------------------------------------------------------------------------------------------------------------------------|
| Output | offer temperature; offer precipitation |
| Document | get the weather of a certain location on a date [SEP] [required] city [optional] date [result] precipitation; humidity; wind; temperature; city; date |
## A Experimental Details
Our implementations are based on the Hugging Face Transformer models (Wolf et al., 2019). Each embedding module in the knowledge retriever is a small BERT model with 4 layers and a hidden size of 512, and the language model used is a pretrained T5 model, *t5-base*. All models were trained with a learning rate of 0.00001 using the AdamW
optimizer and an effective batch size of 32. We used an NVIDIA TITAN X GPU for all experiments.
## B Data Details
We evaluate on two TOD datasets: ActionBased Conversations Dataset (ABCD) and SchemaGuided Dialogue (SGD)— each with a slightly different composition.
ABCD contains over 10,000 human-to-human customer service dialogues across multiple domains. The agent's actions are constrained to a set of 30 action b-slots and unrestricted, free-form slot values. There are a total of 55 structured documents relating recommended sequences of action b-slots to various customer inquiries.
SGD contains over 20,000 multi-domain conversations between a human and a virtual assistant.
There are 8 possible action b-slots and 132 possible slot values. There are a total of 53 documents containing the required and optional slot values to collect in order to fulfill a specific customer intent.
![6_image_0.png](6_image_0.png)
Example AST input and output sequences for both datasets are provided in Table 5: these include the input interaction between a customer and agent, the output next agent action, and the corresponding document. The distribution of actions (b-slots and slot values for ABCD and SGD respectively)
indicate an imbalance in both datasets with some actions being significantly more represented than others (Figure 3).
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
7
✗ A2. Did you discuss any potential risks of your work?
We do not see any obvious ethical concerns or risks related to our work
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 3.1,4
✓ B1. Did you cite the creators of artifacts you used?
3.1,4
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
The artifacts used do not have specific licensing terms that impact our paper, and any further information about licensing that readers might want can be found in our citations
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
4, B (appendix)
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
The datasets we use do not include PII, and any further information about licensing that readers might want can be found in our citations
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
4
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
4
## C ✓ **Did You Run Computational Experiments?** 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
3.1 (we use a well-known model and provide a citation that would offer any architectural details a reader might want to know), A (appendix)
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
3, A (appendix)
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
5 C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
kambhatla-etal-2023-quantifying | Quantifying Train-Evaluation Overlap with Nearest Neighbors | https://aclanthology.org/2023.findings-acl.183 | Characterizing benchmark datasets is crucial to interpreting model performance. In this work, we study train-evaluation overlap as a measure of an individual dataset{'}s adequacy to evaluate model generalization over a wide range of datasets. We quantify the overlap with a simple novel metric based on a nearest neighbors approach between the training and evaluation sets. We identify nearest training examples for each evaluation example by mapping instances with generic and task-specific embedding methods. Our study on eleven classification and extractive QA tasks reveals a wide range of train-evaluation overlap, and we show that the data collection method of the dataset and the difficulty of the task may play a role in the amount of overlap. Lastly, we use our nearest neighbor analysis to identify challenging or potentially mislabeled examples. Our analysis quantifies train-evaluation overlap, providing insights for constructing datasets to study generalization. | # Quantifying Train-Evaluation Overlap With Nearest Neighbors
Gauri Kambhatla Thuy Nguyen Eunsol Choi The University of Texas at Austin
{gkambhat, eunsol}@utexas.edu [email protected]
## Abstract
Characterizing benchmark datasets is crucial to interpreting model performance. In this work, we study train-evaluation overlap as a measure of an individual dataset's adequacy to evaluate model generalization over a wide range of datasets. We quantify the overlap with a simple novel metric based on a nearest neighbors approach between the training and evaluation sets.
We identify nearest training examples for each evaluation example by mapping instances with generic and task-specific embedding methods.
Our study on eleven classification and extractive QA tasks reveals a wide range of trainevaluation overlap, and we show that the data collection method of the dataset and the difficulty of the task may play a role in the amount of overlap. Lastly, we use our nearest neighbor analysis to identify challenging or potentially mislabeled examples. Our analysis quantifies train-evaluation overlap, providing insights for constructing datasets to study generalization.
## 1 **Introduction**
Benchmark datasets in NLP (Rajpurkar et al. 2016; Wang et al. 2018) are invaluable for driving and tracking progress in the field. While evaluating on a held-out set of data ideally tests for generalizability to new data, frequent overlap between training and evaluation sets hinders assessing a model's generalization capacity (Elangovan et al., 2021; Lewis et al., 2021a; Krishna et al., 2021). In this paper, we quantify the overlap between the training and evaluation splits in datasets through a simple metric based on a nearest neighbor approach, and analyze datasets along the axis of dataset collection method.
We categorize data collection methods frequently used in the literature into four categories, based on how naturally the language is captured; some datasets harvest user generated content (e.g.,
movie reviews paired with their scores), while language in other datasets is written by crowdworkers to fool existing models (Nie et al., 2020) or synthetically generated from templates (Warstadt et al., 2020).
We analyze the train-evaluation overlap in eleven NLP datasets varying in data collection method on two tasks - classification and extractive question answering - through a nearest neighbors approach. To quantify the overlap between training and evaluation datasets, we identify the nearest train neighbor to each evaluation example using cosine similarity between the input representations.
We experiment with two types of representations –
general sentence embeddings (Gao et al., 2021) and task-specific embeddings (after task-specific training (Devlin et al., 2019)). Then, we copy the label of the nearest training example to each evaluation example, constructing a simple nearest neighbor baseline model. In nearly every setting, we show that copying labels from the nearest train example alone achieve a competitive baseline, indicating overlap in content between the training and evaluation sets without any task specific training. We find that naturally-collected datasets exhibit stronger training and evaluation set overlap compared to more synthetic and adversarially-generated data.
We introduce a new metric, named InsSim, which summarizes the distance from each evaluation example to its nearest training examples, indicating the train-evaluation overlap. We use the nearest neighbor classifier and InsSim score to estimate the difficulty of *individual* evaluation examples, and suggest splitting evaluation datasets into challenging and easier subsets. Our analysis motivates careful benchmark designs (Koh et al.,
2021a) that aims to capture both natural language usage and distributional shifts.
## 2 **Related Work**
Representing a sequence of tokens as a single, fixed dimensional vector (Reimers and Gurevych, 2019; Arora et al., 2017; Kiros et al., 2015) has been studied extensively. Such an encoder can act as a dense 2905 passage retriever (Karpukhin et al., 2020), paired with an efficient similarity search method (Qin et al., 2020).
Two prior studies in question answering (Lewis et al., 2021a; Krishna et al., 2021) look in-depth into the overlap between the training and evaluation sets. They identify the most similar training example either by answer string match or comparing the question embedding constructed for passage retrieval. The follow up work further develops the QA model (Lewis et al., 2021b) for copying the answer from the nearest training example, after augmenting training examples with generated question answer pairs. Our study in Section 4.3 extends this setting for a wide range of tasks and different embedding methods. Similar to our work, Elangovan et al. (2021) examine train-test overlap for text classification tasks. They also compute the similarity for each test instance to the entire training set using a similarity function. However, they utilize a bag-of-words approach to represent text (where we use sentence embeddings). In addition, we provide analysis for a broad range of datasets.
Many works have explored whether models simply memorize the training dataset or actually learn the task, thus generalizing to unseen examples. Our nearest-neighbor match classification method resembles ProtoBERT (Tänzer et al., 2022), which shows promising performance in rare classes. The model classifies examples by comparing distance to the centroid of training examples belonging to each class. Our method is simpler, without estimating a probability distribution over the output classes. Tirumala et al. (2022) also study the effect of dataset size and model size on memorization, but look at the dynamics of memorization in language models *during* training, finding that larger language models tend to memorize data faster, and that certain parts of speech are memorized faster than others.
Other work studies different subsets of datasets and how this can change evaluation. Ethayarajh et al. (2022) study dataset difficulty in terms of the lack of usable information to a particular model V ,
as well as difficulty of data subsets using a measure of pointwise V -information for individual data instances. As in our work, Swayamdipta et al. (2020)
study difficulty of individual instances, although they focus on the training rather than evaluation set. Similarly, Godbole and Jia (2022) propose a method for better evaluation of generalization on more difficult examples (those assigned lower likelihood by a pretrained LM), focused on creating the train-eval split. In our work, we introduce a very simple and generalizable method of splitting examples by whether classification with the nearest training example can succeed.
Recent work (Sakaguchi et al., 2021; Malinin et al., 2021, 2022; Koh et al., 2021b) focuses on modeling distributional shifts in carefully constructed real world datasets, such as simulating shifts by having training set from one region and the test set from another region. This can be one path to mitigate frequent train-evaluation overlap in naturally occurring datasets.
## 3 **Categorizing Dataset Collection Method**
NLP datasets are collected through diverse methods for multiple purposes - some datasets mirror the user-facing applications closely (e.g., question answering datasets and machine translation datasets),
while other datasets are carefully designed for diagnostic purposes. With the rise of harder to interpret, high capacity models (Brown et al., 2020; Chowdhery et al., 2022), many datasets are designed to probe model qualities. Would different data collection method yield different level of train evaluation overlap? To investigate this, we first categorize the data collection method of datasets below. We propose a discrete scale of naturalness, from purely synthetic to user-generated, as follows:
- Synthetic (SYN): template-generated or highly-constrained crowd-sourced text. Here, both inputs and outputs are synthetically generated.
- Crowd-sourced (CWD): input text and output labels are both generated by crowdworkers.
- Artificial labels (LAB): input text are collected from real world user interactions, but output labels are annotated by crowdworkers.
- User-generated (USE): input text is collected from user interactions and labels also arise naturally from users.
We note that our definition of synthetic data includes highly-constrained crowd-sourced text, by which we mean that the annotators have limited freedom in the content of their annotations. For example, for the WinoGrande dataset workers are instructed to choose an anchor word to use in the twin sentences, they are given a range for sentence length, and they are asked to maintain 70% overlap between sentences. This is less natural than what the human might have generated on their own.
We provide examples of the datasets of each type we study here, approximately ordered from the least to most natural datasets.
WinoGrande A crowd-sourced, commonsense reasoning benchmark inspired by the Winograd Schema Challenge, in which twin sentences with a small edit distance each have a missing word and two possible options (Sakaguchi et al., 2021).
CSQA 2.0 (Commonsense Question Answering 2.0) A corpus of crowdsourced yes/no commonsense reasoning questions (e.g., "a playing card is capable of cutting soft cheese?") (Talmor et al.,
2021).
ANLI (Adversarial NLI) A natural language inference corpus with data collected "adversarially" in three rounds using a human-in-the-loop approach
(Nie et al., 2020).
MNLI (Multi-Genre Natural Language Inference)
A corpus of sentence pairs (crowdsourced) with annotations for textual entailment (given a premise and hypothesis, does the first entail, contradict, or is neutral to the other). We conduct experiments using both the matched (in-domain) and mismatched (cross-domain) evaluation sets (Williams et al., 2018).
SQuAD 2.0 (Stanford Question Answering Dataset 2.0) A corpus of crowdsourced questions (along with a Wikipedia context), and annotated answer spans. Unlike SQuAD 1.1, not all questions have answers (Rajpurkar et al., 2018).
MRPC (Microsoft Research Paraphrase Corpus) A
corpus of sentence pairs extracted from online news sources, where each pair is annotated for whether the sentences are semantically equivalent (Dolan and Brockett, 2005). The sentences was paired based on heuristics (e.g., "two sentences share at least three common words").
NQ (Natural Questions) A corpus of questions from popular Google search queries, paired with a retrieved Wikipedia document, annotated with an answer. We use simplified MRQA version, which removes unanswerable questions, yes/no questions or questions without a short answer span and considers paragraph containing a short answer span as context instead of the entire document
(Kwiatkowski et al., 2019; Fisch et al., 2019).
TweetEval A corpus of tweets containing multiple classification tasks (Barbieri et al., 2020), though we used the subset of the dataset specifically for sentiment analysis. We also pre-process the data to remove examples with the neutral label, making the classification task binary (positive/negative) for out-domain evaluation with SST-2.
SST-2 (Stanford Sentiment Treebank) A corpus of movie review sentences with annotations for sentiment (positive/negative) (Socher et al., 2013).
AG News A corpus of news articles from the web, categorized into four topics (business, sci/tech, sports, world) (Zhang et al., 2015).
IMDb (IMDb Review Dataset) A balanced corpus of movie reviews from IMDb with negative (score
≤ 4 out of 10) and positive reviews (score ≥ 7 out of 10) (Maas et al., 2011).
## 4 **Nearest Neighbor Analysis With Two** Types Of Encoders
We begin studying overlap with an analysis of nearest neighbor data instances between the train and evaluation datasets. We define the nearest neighbor for each evaluation example xe in the given training dataset Xtrain. This is dependent on the embedding function E(x), and the training dataset Xtrain. Following prior work (Snell et al., 2017; Tänzer et al.,
2022), we define the similarity between two examples xi and xj as the cosine similarity between their embeddings, E(xi) and E(xj ). We describe how to encode each example below.
## 4.1 **Instance Encoder**
We consider two types of encoder E(x) for each data instance x - a general sentence embedding function and an embedding function learned while optimizing for the target task. We study two tasks, classification and extractive question answering (Rajpurkar et al.). Classification tasks map input text x to y from pre-defined label set Y , and question answering tasks map an input x consisting of {question q, evidence passage c} to answer string y which is a span in the evidence passage.
As the output should be entailed from the input, we only pass in input to the instance encoder. We note that such a nearest neighbors approach to studying overlap of the input could be extended to generation tasks such as translation or summarization, or semantic parsing, although we do not examine these in this work.
General Sentence Embedding [Eg] We experiment with two types of general sentence embeddings; (1) [CLS] token embeddings from the pretrained LM before fine-tuning (Liu et al., 2019a)
| Nearest training example | Overlap | | | | | |
|----------------------------|-----------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------|-------------|-------------|-----|----------|
| Dataset | Eval example | (Eg) | (Et) | Eg | Et | |
| WinoGrande Megan | forgot | to | buy | de | | |
| odorant at the store so they borrowed Jessica's deodorant and __ hoped they never found out. | Elena asked Erin if she could borrow her deodorant, but __ had forgotten to bring some. | Natalie was having an ant problem and hates bugs so called Elena for help since __ is fearless. | 0.330 0.168 | | | |
| MNLI | Premise: Most of the dances are suggestive of ancient courtship rituals, with the man being forceful and arrogant, the woman shyly flirtatious. Hypothesis: The dances have an equal number of male and female dancers. | Premise: In Kerala, try to see the lively kathakali dances, in which men play both male and female parts to enact both divine and heroic Indian legends in the most gorgeous costumes and elaborate makeup. Hypothesis: The lively kathakali dances in Kerala feature men who play the role of males and females. | Premise: Here there are several attractive hotels, including one with tropical gardens, that cater to visitors hoping to catch a glimpse of the Himalayas at sunrise or sunset. Hypothesis: All of the hotels here have an indoor heated pool to offer as well. | 0.343 0.119 | | |
| NQ | who heads the executive department of west virginia government | who's the head of the executive branch of the government | who | began | the | reformed |
| (MRQA) | movement (a branch of the protestant reformation) in zurich switzerland | 0.630 0.367 | | | | |
| AG News | Allianz to fight US court ruling on WTC attacks MUNICH - German insurance concern Allianz said on Tuesday it would fight a US jury decision in New York... | Allianz Says Trade Center Ruling May Cost It Up to 80 Mln Euros Allianz AG, Europe's largest insurer, said a New York court ruling that defined... | Developer Wins Victory in WTC Case NEW YORK (Reuters) - A New York developer hoping to rebuild the destroyed World Trade Center... | 0.415 0.372 | | |
Table 1: Examples of the most similar instances for the evaluation example according to two embedding methods.
Unigram overlap of each train instance with the evaluation example is highlighted in blue. Average unigram overlap over the full dataset between evaluation examples and nearest train examples according to the different embedding methods are shown in the last two columns.
and (2) SimCSE embeddings (Gao et al., 2021)
which showed strong performance over various benchmark datasets. Gao et al. (2021) first encode input sentence with a pretrained language model and then take the [CLS] representation to get a fixed dimensional representation and improve it with a contrastive learning objective (Chen et al., 2020). Specifically, they construct positive sentence pairs by applying two different standard dropout masks (Gal and Ghahramani, 2016) on the input representation on the same sentence, and construct negative pairs by taking other sentences in the same mini-batch. While we choose these two embeddings for our analysis, other sentence embedding methods (Kiros et al., 2015; Wu et al., 2020)
can be used.
Task Specific Learned Embedding [Et] To construct task specific embedding, we first fine-tune a pre-trained language model to perform our target tasks. Unless otherwise specified, we use the RoBERTa-large model (Liu et al., 2019b). We use standard recipes for using pre-trained LMs.
For classification, we take the [CLS] representation through a fully-connected layer to predict the correct label from a label set (classification task). For extractive QA, we encode concatenation of question and context tokens and take the final representations of the context tokens through fully-connected layer to predict the answer start and answer end token.
## 4.2 **Nearest Neighbor Analysis**
We first provide some manual inspection of similar examples. Table 1 presents a few examples from the evaluation set from various datasets, along with their most similar training examples for each embedding function. We observe that the two embedding functions capture different views, and that the general embedding (Eg) captures more lexical similarity. This reiterates prior work showing that task-specific embeddings (such as averaging token representations or using the CLS token) performs poorly on semantic similarity tasks (Reimers and Gurevych, 2019). We report the average unigram overlap between evaluation examples and their nearest train neighbor with both general and task-specific representations in Table 1. We provide examples for additional datasets in Appendix D (see Table 11 for qualitative examples) and quantitative unigram overlap in Appendix A (Table 7).
| Dataset | Random | Finetune | Eg | Et | | | |
|------------|--------------|---------------|----------------|----------------|----------------|----------------|----------------|
| (FULL) | SimCSE (500) | SimCSE (FULL) | CLS (FULL) | CLS (500) | CLS (FULL) | | |
| WinoGrande | 49.57 | 78.37 | 50.67 (+0.24) | 51.78 (+1.82) | 49.88 (+0.31) | 50.43 (+1.42) | 49.80 (+2.37) |
| ANLI | 33.94 | 57.37 | 33.38 (+0.57) | 35.34 (+1.78) | 39.03 (+5.12) | 36.03 (+5.34) | 56.28 (+45.22) |
| MNLI | 33.15 | 89.94 | 37.96 (+9.91) | 45.93 (+19.19) | 37.23 (+4.08) | 73.95 (+66.21) | 89.12 (+86.42) |
| MRPC | 56.45 | 92.12 | 58.18 (+0.14) | 62.68 (+6.23) | 63.06 (-4.03) | 79.79 (+61.65) | 88.42 (+75.43) |
| TweetEval | 45.45 | 94.60 | 76.54 (+54.64) | 81.11 (+60.30) | 72.59 (+34.21) | 88.94 (+78.87) | 93.72 (+85.94) |
| SST-2 | 48.85 | 96.84 | 74.31 (+47.93) | 78.90 (+53.79) | 65.14 (+16.06) | 92.09 (+83.83) | 95.07 (+90.94) |
| AG News | 25.68 | 95.47 | 79.67 (+72.66) | 89.83 (+81.32) | 84.20 (+59.25) | 90.63 (+90.30) | 93.75 (+93.55) |
| IMDb | 50.16 | 95.17 | 70.94 (+26.48) | 72.75 (+32.70) | 64.06 (+14.06) | 93.34 (+85.94) | 94.84 (+89.22) |
In every dataset, there is more lexical overlap when nearest neighbor was found using general representations, supporting our qualitative observations.
## 4.3 **Classification With The Nearest Neighbor**
After identifying the nearest training example for each evaluation example, what can we do with it? Inspired by a recent study in question answering (Lewis et al., 2021a) which copies the answer of the training question that is most similar to the evaluation question (where the evaluation question is a duplicate or paraphrase of the train question), we apply this method widely to all datasets we study to build a non-parametric classification model. This is similar to the protoBERT model (Tänzer et al.,
2022) which uses k-nearest neighbor classification algorithms. However, we use the label from the nearest neighbor without constructing an embedding representing each class label. For extractive QA tasks, we use the answer as the label and calculate performance as the exact-match to the nearest neighbor. High performance of this baseline will indicate greater train-evaluation overlap.
Table 2 presents the results for the two embedding types we study, as well as two training data sizes. Here, we look at gold labels, and focus on differences between embedding types and training data sizes. We also report the difference to the classification performance for taking the *farthest* training example in parentheses and a random baseline which assigns labels according to the label distribution. We also show the total RoBERTalarge fine-tuned performance as an upperbound.
Fine-tuned performance for all datasets and other models are shown in Appendix B.
How does nearest neighbor classification work with different encoders? Comparing general CLS token embeddings (without fine-tuning) with SimCSE embeddings, we see mixed results - sometimes using SimCSE results in higher performance, sometimes general CLS token embeddings. However, the difference between performance on the nearest neighbor and performance on the farthest neighbor using CLS embeddings without finetuning is generally lower than when we use SimCSE embeddings, indicating the nearest semantic neighbor might be more relevant with SimCSE embeddings over CLS tokens, which follows prior work (Reimers and Gurevych, 2019).
After fine-tuning, copying the label of nearest neighbor shows strong performance across all datasets except WinoGrande. We attribute the strong performance to the task-specific nature of CLS embeddings (Reimers and Gurevych, 2019);
while they have low semantic similarity, they are close together in terms of *task* similarity (e.g., examples that require the model to do the same type of reasoning are more similar) leading to a high nearest neighbor performance.
## How Does Nearest Neighbor Classification Interact With Data Collection Methods? The Nearest
neighbor performance roughly corresponds with the degree of naturalness; for all user-generated classification tasks (LAB and USE), copying the label of nearest neighbor shows competitive performance, even without task-specific fine-tuning. On challenging, synthetically and adversarially generated datasets (WinoGrande and ANLI), however, the nearest neighbor approach shows smaller gains.
We hypothesize that this is because researchers can control data diversity and task difficulty in the synthetic setting to make a benchmark more challenging, which cannot be done in the natural case.
In addition, higher performance with natural data might signify more match with the pre-training data
![5_image_0.png](5_image_0.png)
of the model. We also note that the correspondence between performance and data collection method could also be due to task difficulty and types, as the user-generated datasets tend to be easier for models to learn. Label match to the nearest neighbor is nearly always higher than to the farthest neighbor and performs better than the random baseline, showing that a simple nearest neighbor approach corresponds to the overlap between train and evaluation sets.
How does nearest neighbor classification vary with encoder model power and training data size?
Figure 1 shows the nearest neighbor classification performance for label predictions of different power models of varying training data sizes for selected user-generated and synthetic/crowdsourced datasets. Here we study predicted labels rather than gold labels, and use RoBERTa-large, RoBERTA-base (Liu et al., 2019b )
and DistilBERT (Sanh et al., 2020 ). As fine-tuned CLS embeddings achieve high performance due to task-specific or reasoning similarity, we use Sim-
CSE representations for more general semantic similarity between nearest neighbors. Across all datasets, the nearest neighbor classification appears to be relatively consistent regardless of the size of the encoder model. For more natural datasets (bottom row of Figure 1 ), we see a large increase in
![5_image_1.png](5_image_1.png)
performance when the training data size increases from 10k to the full dataset; this is less consistent for synthetic and crowdsourced datasets (top row of Figure 1 ). This could indicate that for more natural datasets, or easier tasks, a larger amount of data leads to a higher comparative overlap, but this is not necessarily the case with synthetic and crowdsourced data.
What can we learn from examples where nearest neighbor classification fails?
We seek to understand cases in which the evaluation label does not match the nearest train label for classification tasks. We randomly sample 100 examples (20 from each of WinoGrande, MNLI, MRPC, AGNews and IMDb datasets) where nearest neighbor classifica-
| Setting | Random | Eg (SimCSE) | Eg (CLS) | Et | |
|------------------------|-------------------|----------------|----------------|----------------|----------------|
| in-domain | 33.15 | 45.93 (+19.19) | 37.23 (+4.08) | 89.12 (+86.42) | |
| MNLI | out-domain (ANLI) | 33.39 | 37.84 (+9.20) | 34.37 (+2.64) | 84.81 (+81.70) |
| out-domain (MNLI-mm) | 33.06 | 32.57 (+0.16) | 36.36 (+3.59) | 88.90 (+85.97) | |
| in-domain | 48.85 | 78.90 (+53.79) | 65.14 (+16.06) | 95.07 (+90.94) | |
| SST-2 | out-domain (IMDb) | 50.02 | 69.88 (+37.66) | 51.13 (+1.18) | 88.69 (+76.75) |
| out-domain (TweetEval) | 27.59 | 44.98 (+22.70) | 49.06 (-6.90) | 87.55 (+78.78) | |
tion fails, and manually categorize them into three types:
- *not similar*: Failure at general semantic similarity
- *mismatch*: Semantic / task similarity mismatch
- *ambiguous*: The label for either the evaluation or train example is ambiguous (or incorrect)
We note that the first two categories, *not similar* and *mismatch* are failures due to the nearest neighbors approach, while the last category, *ambiguous*, is relevant to the dataset itself. Table 12 in Appendix E provides examples. We show the percentage of annotated examples in each category for each dataset, in Figure 2. The majority of manually annotated examples were ambiguous, which is a possible reason for why the model performs worse on instances without label match.
How does nearest neighbor classification perform under domain shift? We perform analysis on distribution shifts on two classification tasks –
sentiment classification and natural language inference. We report the classification results from copying the nearest neighbor in the training set
(parallel to Section 4.3) in Table 3. We find that the most similar example in the train set is less likely to have the same label as the evaluation example when the evaluation example is taken from different distribution. Yet, the nearest neighbor classification almost always outperforms the baseline, sometimes strongly.
## 5 **Quantifying Overlap With Instance** Similarity
In this section, we introduce a new metric, Instance Similarity (InsSim), and use it to identify easy and challenging instances in the evaluation dataset.
| 1k | full | |
|------------|---------------|---------------|
| WinoGrande | 0.458 / 0.900 | 0.594 / 0.878 |
| CSQA 2.0 | 0.399 / 0.900 | 0.520 / 0.900 |
| ANLI | 0.505 / 0.912 | 0.658 / 0.962 |
| MNLI | 0.384 / 0.900 | 0.622 / 0.900 |
| SQuAD 2.0 | 0.466 / 0.899 | 0.636 / 0.900 |
| MRPC | 0.525 / 0.841 | 0.579 / 0.881 |
| TweetEval | 0.469 / 0.903 | 0.561 / 0.939 |
| NQ | 0.481 / 0.927 | 0.717 / 0.981 |
| SST-2 | 0.489 / 0.835 | 0.608 / 0.900 |
| AG News | 0.546 / 0.864 | 0.751 / 0.906 |
| IMDb | 0.648 / 0.894 | 0.709 / 0.959 |
Defining **InsSim** We define a metric, InsSim(xe), for each individual evaluation example xe based on its nearest neighbors in the provided training dataset. We notate topN(xe, Xtrain, k) as set of k nearest examples in the total training dataset Xtrain of xe according to the similarity function described in Section 4.
$\sum_{x_{i}\in\text{top}\mathcal{N}(x_{e},X_{\text{train}},k)}\text{Sim}(x_{e},x_{i})$
k We conduct our analysis with a default setting of k = 5.
Interpreting **InsSim** The higher InsSim(xe),
the easier for a machine learning model to estimate P(ye|xe), if the label of the example matches its nearest train neighbors (we study this further in this section). An alternative metric would be estimating the input distribution P(x) based on the training
| Dataset | Total | Performance (MISMATCH) | Performance (MATCH) | M/MM ∆ | | | | |
|------------|----------------|--------------------------|-----------------------|----------|-------|-------|-------|--------|
| All (MM) | Low | High | All (M) | Low | High | | | |
| WinoGrande | 78.31 (48.22%) | 78.56 | 79.23 | 77.17 | 78.20 | 73.47 | 80.20 | -0.36 |
| ANLI | 57.34 (64.53%) | 54.48 | 60.26 | 49.19 | 62.55 | 63.53 | 67.16 | +8.08 |
| MNLI | 89.94 (54.14%) | 88.35 | 89.40 | 86.39 | 91.85 | 92.52 | 93.26 | +3.49 |
| MRPC | 88.61 (37.32%) | 83.79 | 84.45 | 77.82 | 91.55 | 90.52 | 92.79 | +7.75 |
| TweetEval | 94.50 (18.89%) | 83.24 | 81.89 | 83.89 | 97.14 | 96.31 | 98.12 | +13.91 |
| SST-2 | 96.45 (21.10%) | 88.59 | 87.27 | 87.5 | 98.69 | 97.57 | 99.52 | +10.10 |
| AG News | 95.42 (10.17%) | 69.47 | 73.16 | 65.52 | 98.37 | 98.19 | 98.54 | +28.90 |
| IMDb | 95.07 (27.25%) | 90.21 | 86.78 | 92.32 | 96.89 | 95.69 | 97.93 | +6.68 |
data and evaluate the likelihood of xe according to this distribution. While P(x) will estimate how likely xe is with respect to the entire training set X*train*, InsSim will only consider the k closest elements in the training dataset. Given strong few-shot learning ability of recent pre-trained models (Liu et al., 2019b; Brown et al., 2020), we anticipate this metric can more effectively capture the predicted performance on example xe.
We report the average InsSim score on each dataset in Table 4. A higher score will imply heavier train-evaluation dataset overlap. Using task-specific embeddings brings examples closer together significantly across all datasets. The number of total training instances varies significantly across datasets (see Table 6 in the Appendix A), so larger datasets tend to exhibit higher InsSim. We find that the average InsSim tends to be higher for tasks that are more naturally generated, indicating less data diversity between training and evaluation sets. Our metric is coarse in that it does not specify whether the similarity between instances are caused by lexical or topical overlap (e.g., containing the same entity) or syntactic overlap (e.g., similar sentence structure).
To better evaluate model generalization, we propose to divide evaluation examples into two subsets - (1) MATCH: examples where the evaluation label equals the nearest gold train label, and (2)
MISMATCH: examples where the evaluation label does not match the nearest gold train label. We use general sentence embeddings (SimCSE) for the representations for better generalizability. We hypothesize that the MATCH subset is easier for models.
## How Does Model Performance Differ Between
MATCH and MISMATCH **subsets?** We show RoBERTa-large performance on each of these subsets, along with the difference between them, in Table 5. As expected, performance is generally higher when labels match, confirming our hypothesis. However, this is not the case for WinoGrande.
We conjecture this is because semantic similarity is not as relevant to the WinoGrande reasoning task. This is further shown by a high difference between performance on the two subsets for the AG News dataset, for which semantic similarity is more strongly relevant. In addition, Table 5 shows the percent of total examples in the MIS-MATCH subset; we see that overall performance on the dataset loosely *inversely* correlates with the proportion of MISMATCH examples; further illustrating that these examples are more difficult.
Can we use the InsSim **score to identify difficult evaluation examples?** We further split our MATCH and MISMATCH data subsets by their InsSim score: we report performance breakdown on highest and lowest 30% of the data sorted by InsSim. RoBERTa-large performance on these sets is also shown in Table 5. Our results indicate that a higher InsSim leads to higher performance on examples where the evaluation labels match the nearest train example label, but not necessarily when they do not match. In challenging datasets (WinoGrande, ANLI, MNLI and MRPC), when the label of the evaluation example does not match the label of the nearest training example, being closer to the nearest neighbor actually hurts the model performance, suggesting over-generalization from the nearest training example. These results emphasize that in addition to evaluating model performance on a full dataset, it could be useful to evaluate models on these subsets individually to better assess model generalization; performance can be significantly different on more challenging subsets. We will publicly release our code for splitting datasets into MATCH and MISMATCH subsets at https:
//github.com/GauriKambhatla/train_eval_overlap.
## 6 **Conclusion**
In this paper, we analyze eleven downstream NLP datasets for train-evaluation overlap using a nearest neighbors approach, quantified with a simple measure of instance similarity. We categorize datasets according to their data collection method, and find that more naturally-collected data and easier tasks tend to demonstrate higher train-eval overlap than more synthetically-generated data and difficult tasks. Lastly, we suggest using nearest neighbor analysis to split the evaluation data into more easy and challenging subsets, determined by the overlap with the training set, and advocate studying model performance on these subsets as well as the full dataset for a more comprehensive evaluation of model generalizability.
## Limitations
Our study is limited in scope, studying only classification and extractive QA tasks in English; the trends we highlight in this work might not generalize to different tasks or other languages. We also acknowledge that we only use BERT-based models for our analysis, so it is uncertain whether these findings are applicable to other models. In addition, the overlap we describe in this paper is defined by semantic similarity rather than literal overlap between sentences and phrases. We are not claiming that this overlap is good or bad, rather we show that when the overlap is large, it is more difficult to evaluate model generalization.
We note that there are multiple confounding factors in our results. First, while we highlight the role of dataset collection method in our analysis, the naturalness of data collection method is negatively correlated with task difficulty (i.e., the more natural datasets we study are also the least difficult). As a result, differences in performance can be attributed to task difficulty as well as data collection method. Second, our study is limited in scope of similarity metrics (only cosine similarity)
and embeddings used to compute similarity. Using different embedding or metric can change the results.
## Acknowledgements
We thank the ACL reviewers and meta-reviewer for thoughtful comments and suggestions to improve the paper.
## References
Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A
simple but tough-to-beat baseline for sentence embeddings. In *ICLR*.
Francesco Barbieri, Jose Camacho-Collados, Luis Espinosa Anke, and Leonardo Neves. 2020. TweetEval:
Unified benchmark and comparative evaluation for tweet classification. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1644–1650, Online. Association for Computational Linguistics.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, T. J. Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeff Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. *ArXiv*,
abs/2005.14165.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 1597–1607.
PMLR.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira,
Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. *naacl*, abs/1810.04805.
William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases.
In *Proceedings of the Third International Workshop* on Paraphrasing (IWP2005).
Aparna Elangovan, Jiayuan He, and Karin Verspoor.
2021. Memorization vs. generalization : Quantifying data leakage in NLP performance evaluation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1325–1335, Online.
Association for Computational Linguistics.
Kawin Ethayarajh, Yejin Choi, and Swabha Swayamdipta. 2022. Understanding dataset difficulty with V-usable information. In *Proceedings* of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 5988–6008. PMLR.
Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, and Danqi Chen. 2019. Mrqa 2019 shared task: Evaluating generalization in reading comprehension. *ArXiv*, abs/1910.09753.
Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. *ArXiv*, abs/1506.02142.
Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021.
Simcse: Simple contrastive learning of sentence embeddings.
Ameya Godbole and Robin Jia. 2022. Benchmarking long-tail generalization with likelihood splits.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick ˘
Lewis, Ledell Yu Wu, Sergey Edunov, Danqi Chen, and Wen tau Yih. 2020. Dense passage retrieval for open-domain question answering. *ArXiv*,
abs/2004.04906.
Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. *ArXiv*,
abs/1506.06726.
Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard L.
Phillips, Sara Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, and Percy Liang. 2021a. Wilds: A benchmark of in-the-wild distribution shifts. In *ICML*.
Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, Tony Lee, Etienne David, Ian Stavness, Wei Guo, Berton Earnshaw, Imran Haque, Sara M Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, and Percy Liang. 2021b. WILDS: A benchmark of in-the-wild distribution shifts. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 5637–5664. PMLR.
Kalpesh Krishna, Aurko Roy, and Mohit Iyyer. 2021.
Hurdles to progress in long-form question answering. In *NAACL*.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. *Transactions of the Association for Computational Linguistics*, 7:452–466.
Patrick Lewis, Pontus Stenetorp, and Sebastian Riedel.
2021a. Question and answer test-train overlap in open-domain question answering datasets. In *Proceedings of the 16th Conference of the European* Chapter of the Association for Computational Linguistics: Main Volume, pages 1000–1008, Online.
Association for Computational Linguistics.
Patrick Lewis, Yuxiang Wu, Linqing Liu, Pasquale Minervini, Heinrich Kuttler, Aleksandra Piktus, Pontus Stenetorp, and Sebastian Riedel. 2021b. Paq: 65 million probably-asked questions and what you can do with them. *Transactions of the Association for* Computational Linguistics, 9:1098–1115.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019a.
RoBERTa: A Robustly Optimized BERT Pretraining Approach. Technical Report arXiv:1907.11692, arXiv. ArXiv:1907.11692 [cs] type: article.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b.
Roberta: A robustly optimized bert pretraining approach. *ArXiv*, abs/1907.11692.
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts.
2011. Learning word vectors for sentiment analysis.
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics.
Andrey Malinin, Andreas Athanasopoulos, Muhamed Barakovic, Meritxell Bach Cuadra, Mark J F Gales, Cristina Granziera, Mara Graziani, Nikolay Kartashev, Konstantinos Kyriakopoulos, Po-Jui Lu, Nataliia Molchanova, Antonis Nikitakis, Vatsal Raina, Francesco La Rosa, Eli Sivena, Vasileios Tsarsitalidis, Efi Tsompopoulou, and Elena Volf. 2022. Shifts 2.0: Extending the dataset of real distributional shifts.
Andrey Malinin, Neil Band, Ganshin, Alexander, German Chesnokov, Yarin Gal, Mark J F Gales, Alexey Noskov, Andrey Ploskonosov, Liudmila Prokhorenkova, Ivan Provilkov, Vatsal Raina, Vyas Raina, Roginskiy, Denis, Mariya Shmatova, Panos Tigas, and Boris Yangel. 2021. Shifts: A dataset of real distributional shift across multiple large-scale tasks.
Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial NLI: A new benchmark for natural language understanding. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 4885–4901, Online. Association for Computational Linguistics.
Chunyuan Qin, Chuan Deng, Jiashun Huang, Kun xian Shu, and Mingze Bai. 2020. An efficient faiss-based search method for mass spectral library searching.
2020 3rd International Conference on Advanced Electronic Materials, Computers and Software Engineering (AEMCSE), pages 513–518.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018.
Know what you don't know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784–789, Melbourne, Australia. Association for Computational Linguistics.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In *Proceedings of* the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics.
Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.
Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2021. Winogrande: An adversarial winograd schema challenge at scale. *Commun.*
ACM, 64(9):99–106.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2020. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter.
Jake Snell, Kevin Swersky, and Richard S. Zemel. 2017.
Prototypical networks for few-shot learning. *ArXiv*,
abs/1703.05175.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank.
In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics.
Swabha Swayamdipta, Roy Schwartz, Nicholas Lourie, Yizhong Wang, Hannaneh Hajishirzi, Noah A. Smith, and Yejin Choi. 2020. Dataset cartography: Mapping and diagnosing datasets with training dynamics. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 9275–9293, Online. Association for Computational Linguistics.
Alon Talmor, Ori Yoran, Ronan Le Bras, Chandra Bhagavatula, Yoav Goldberg, Yejin Choi, and Jonathan Berant. 2021. Commonsenseqa 2.0: Exposing the limits of ai through gamification. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks, volume 1.
Michael Tänzer, Sebastian Ruder, and Marek Rei. 2022.
Memorisation versus generalisation in pre-trained language models. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7564–7578, Dublin, Ireland. Association for Computational Linguistics.
Kushal Tirumala, Aram H. Markosyan, Luke Zettlemoyer, and Armen Aghajanyan. 2022. Memorization without overfitting: Analyzing the training dynamics of large language models.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE:
A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics.
Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R.
Bowman. 2020. BLiMP: The benchmark of linguistic minimal pairs for English. *Transactions of the* Association for Computational Linguistics, 8:377–
392.
Adina Williams, Nikita Nangia, and Samuel Bowman.
2018. A broad-coverage challenge corpus for sentence understanding through inference. In *Proceedings of the 2018 Conference of the North American*
Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.
Zhuofeng Wu, Sinong Wang, Jiatao Gu, Madian Khabsa, Fei Sun, and Hao Ma. 2020. Clear: Contrastive learning for sentence representation. *ArXiv*,
abs/2012.15466.
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015.
Character-level Convolutional Networks for Text Classification. In Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc.
## A **Dataset Statistics**
We provide additional statistics about the datasets we studied, including licensing and data split sizes
(Table 6). The WinoGrande and CSQA 2.0 datasets are licensed with CC-BY, ANLI is licensed with Creative Commons-Non Commercial 4.0, MNLI, the TweetEval sentiment task, and NQ (MRQA
version) are licensed with MIT. All the datasets we study are in English.
| Dataset | Train | Dev | Collection | Task |
|------------|---------|-------|--------------|---------------|
| WinoGrande | 40k | 1.2k | SYN | Classfication |
| CSQA 2 | 9.2k | 2.5k | SYN | Classfication |
| ANLI | 163k | 3.2k | CWD | Classfication |
| MNLI | 392k | 9.8k | CWD | Classfication |
| SQuAD 2 | 131k | 11.8k | CWD | ExtractiveQA |
| MRPC | 3.6k | 2.1k | LAB | Classfication |
| TweetEval | 24.9k | 6.3k | LAB | Classfication |
| NQ | 104k | 12.8k | LAB | ExtractiveQA |
| SST-2 | 67k | 872 | USE | Classfication |
| AG News | 12k | 7.6k | USE | Classfication |
| IMDb | 25k | 25k | USE | Classfication |
Table 6: Dataset statistics. For Natural Questions (NQ),
we use the MRQA subset, and for TweetEval, we use the sentiment split, with neutral label examples filtered out.
## B **Model Performance & Compute**
Here we list the total fine-tuned model performance for each model on each validation dataset for varying amounts of training data. DistilBERT
(66M parameters) performance is listed in Table 10, RoBERTa-base (123M parameters) performance in Table 9, and RoBERTa-large (354M parameters)
performance in Table 8. We take the average of three runs to get the numbers listed in these tables.
We run all experiments on RTX 8000 GPUs.
WinoGrande 55.33 62.19 75.93 78.37
CSQA 2.0 51.87 54.54 - 54.66
ANLI 34.56 35.18 43.28 57.34
MNLI 76.00 84.40 86.81 89.94
SQuAD 2.0 59.12 69.29 80.44 87.49
MRPC 81.62 86.52 - 92.12
NQ 64.46 68.04 76.48 80.33
TweetEval 90.45 92.58 93.95 94.60
SST-2 92.89 93.23 95.41 96.84
AG News 90.58 90.66 93.66 95.47
IMDb 93.53 93.81 95.04 95.17
| Dataset | Eg | Et |
|------------|-------|-------|
| WinoGrande | 0.330 | 0.168 |
| CSQA 2.0 | 0.284 | 0.105 |
| ANLI | 0.951 | 0.207 |
| MNLI | 0.343 | 0.119 |
| SQuAD 2.0 | 0.319 | 0.124 |
| MRPC | 0.343 | 0.131 |
| TweetEval | 0.125 | 0.062 |
| NQ | 0.630 | 0.367 |
| SST-2 | 0.296 | 0.103 |
| AG News | 0.492 | 0.127 |
| IMDb | 0.415 | 0.372 |
Table 8: Performance (RoBERTa-large) for each training setting. F1 scores are shown for MRPC, SQuAD 2.0, and NQ, accuracy scores shown for all other datasets.
MRPC and CSQA 2.0 have training set sizes less than 10k.
| 500 | 1k | 10k | Full | |
|-----------|-------|-------|--------|-------|
| CSQA 2.0 | 51.87 | 54.54 | - | 54.66 |
| ANLI | 34.56 | 35.18 | 43.28 | 57.34 |
| MNLI | 76.00 | 84.40 | 86.81 | 89.94 |
| MRPC | 81.62 | 86.52 | - | 92.12 |
| NQ | 64.46 | 68.04 | 76.48 | 80.33 |
| TweetEval | 90.45 | 92.58 | 93.95 | 94.60 |
| SST-2 | 92.89 | 93.23 | 95.41 | 96.84 |
| AG News | 90.58 | 90.66 | 93.66 | 95.47 |
| IMDb | 93.53 | 93.81 | 95.04 | 95.17 |
## C **Hyperparameters**
We use the hyperparameters from existing work when listed, otherwise we perform hyperparameter tuning through a grid search over learning rate
(LR), number of epochs, batch size, and max sequence length. For classification tasks, these are:
LR {2e − 7, 2e − 5, 2e − 3}, epochs (full dataset)
{3, 5, 7}, epochs (10k) {5, 7, 9, 10}, epochs (1k, 500) {7, 11, 15, 20}, batch size {32, 64, 128}, sequence length {128, 256, 512}. For the extractive QA tasks, these are: LR {3e − 7, 3e − 5, 3e − 3},
epochs (full dataset) {2, 3}, epochs (10k) {3, 4, 5}, epochs (1k, 500) {5, 7, 10}, batch size {8, 12}, max length {384}.
| Dataset | Training Size | | | |
|------------|-----------------|-------|-------|-------|
| 500 | 1k | 10k | Full | |
| WinoGrande | 53.51 | 56.35 | 61.09 | 66.14 |
| CSQA 2.0 | 51.79 | 51.79 | - | 54.02 |
| ANLI | 35.94 | 36.72 | 42.00 | 51.75 |
| MNLI | 65.92 | 73.14 | 81.62 | 87.56 |
| SQuAD 2.0 | 50.83 | 56.21 | 72.94 | 83.43 |
| MRPC | 81.62 | 86.52 | - | 91.50 |
| NQ | 45.77 | 57.40 | 71.96 | 78.92 |
| TweetEval | 89.38 | 91.02 | 93.16 | 93.28 |
| SST-2 | 88.99 | 92.09 | 93.35 | 94.5 |
| AG News | 88.93 | 89.36 | 92.71 | 95.21 |
| IMDb | 92.86 | 92.71 | 94.86 | 95.54 |
Table 9: Performance (RoBERTa-base) for each training setting. F1 scores are shown for MRPC, SQuAD 2.0, and NQ, accuracy scores shown for all other datasets. MRPC and CSQA 2.0 have training set sizes less than 10k.
## D **Additional Nearest Instance Examples**
Table 11 shows additional examples of nearest neighbors for the datasets not shown in Table 1.
## E **Examples Of Nearest Neighbor** Classification Failure Categories
Table 12 shows examples of evaluation examples and their nearest train neighbor whose labels do not match.
| Dataset | Training Size | | | |
|------------|-----------------|-------|-------|-------|
| 500 | 1k | 10k | Full | |
| WinoGrande | 48.77 | 48.93 | 51.22 | 51.38 |
| CSQA 2.0 | 51.04 | 51.71 | - | 53.99 |
| ANLI | 35.63 | 36.59 | 41.34 | 46.25 |
| MNLI | 49.13 | 54.67 | 68.70 | 82.00 |
| SQuAD 2.0 | 44.12 | 45.14 | 52.52 | 69.75 |
| MRPC | 77.37 | 77.74 | - | 88.70 |
| NQ | 29.26 | 32.78 | 60.12 | 74.45 |
| TweetEval | 87.62 | 89.05 | 90.94 | 91.51 |
| SST-2 | 82.80 | 84.86 | 89.79 | 91.06 |
| AG News | 87.93 | 88.89 | 91.41 | 94.73 |
| IMDb | 88.11 | 88.91 | 91.95 | 93.18 |
Table 10: Performance (DistilBERT-base) for each training setting. F1 scores are shown for MRPC, SQuAD 2.0, and NQ, accuracy scores shown for all other datasets.
MRPC and CSQA 2.0 have training set sizes less than 10k.
| Nearest training example | | | |
|----------------------------|-----------------------------------|--------------------|-----------------------------------|
| Dataset | Eval example | (Eg) | (Et) |
| CSQA 2.0 | You should always try to phrase | Do people always quote facts after being asked a question? | A good reporter always does their |
| your questions with the most double negatives. | best work even when the assignment is underwhelming. | | |
| ANLI | P The Toffee Crisp bar is a chocolate bar first manufactured in the United Kingdom by Mackintosh's in 1963. It is now produced by Nestlé in the UK. It consists of... H The Toffee Crisp bar is not sold in the US. | P The following is a list of female cabinet ministers of Thailand. Thailand is a country located at the centre of the Indochina peninsula in Southeast Asia... H Thailand does not have male cabinet ministers. | |
| SQuAD 2.0 | Inter-network routing was what | What is defined as a way of filtering network data between a host | |
| kind of system? | or network and another network? P The Toffee Crisp bar is a chocolate bar first manufactured in the United Kingdom by Mackintosh's in 1963. It is now produced by Nestlé in the UK. It consists of... H The company will make a bar with no toffee. | In which year did Poland declassify most of its Warsaw Pact-era archives? | |
| MRPC | Phrase 1 Saddam's other son, Odai, surrendered Friday, but the Americans are keeping it quiet because he's a U.S. agent. Phrase 2 Hussein's other son, Uday, surrendered yesterday, but the Americans are keeping it quiet because he's a US agent. | Phrase 1 The only other JI member to reveal similar information is Omar al Faruq , now held at a secret location by the United States. Phrase 2 The only other JI member to reveal similar information is Omar al Faruq, now held by the United States at a secret location. | Phrase 1 Initial reports said the attackers fired from a mosque within in the city, 30 miles west of Baghdad. Phrase 2 The Centcom statement said the gunmen appeared to have fired from a mosque in the city, 50 km ( 32 miles ) west of Baghdad. |
| SST-2 | i just loved every minute of this | i loved this film. | gives a superb performance full |
| film. | of deep feeling. | | |
| IMDb | Haines is excellent as the brash cadet who thinks West Point will really amount to something now that he has arrived. Haines displays his easy, goofy comic persona as he takes on West Point and Joan Crawford, the local beauty... | One of the biggest hits of 1926, Brown of Harvard is a exciting comedy/drama featuring regatta and football scenes that gave William Haines the role he needed to become a major star. It's patented Haines all the way: brash smart aleck who takes nothing serious until he is rejected by everyone... | As Jack Nicholson's directorial debut, Drive, He Said displays at the least that he is a gifted director of actors. Even when the story might seem to lose its way to the audience (and to a modern audience - if they can find it, which pops up now and again on eBay - it might seem more free formed than they think)... |
Table 11: Examples of the most similar instances for the evaluation example according to two embedding methods.
| Category | Dataset | Example (eval and most similar train) | Labels |
|-------------------------------------------------------------------------------------------|------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------|
| not similar | MNLI | Eval: P uh i don't know i i have mixed emotions about him uh sometimes i like him but at the same times i love to see somebody beat him H I like him for the most part, but would still enjoy seeing someone beat him. Train: P You can imagine what a thorn in the flesh I am to him! H You can imagine how much he is bothered by me, even though I treat him well | Eval: Entail Train: Neutral |
| mismatch | WinoGrande | Eval: Randy only ever added a little bit of hot sauce to his food, especially compared to Adam, as _ was much more sensitive to spice. Train: Randy found it easier to be healthy than Derrick because _ did not eat a wide variety of fruits and vegetables. | Eval: Randy Train: Derrick |
| ambiguous | AG News | Eval: Intel Doubles Dividend, Boosts Buyback by $11.5 Bln (Update2) Intel Corp., the world's biggest computer-chip maker, doubled its quarterly dividend and boosted its stock buyback program by $11. Train: Intel Doubles Dividend, Expands Buyback Chip giant Intel Corp. reported Wednesday that its board doubled the company's quarterly dividend and authorized an expansion of its ongoing stock repurchase program. | Eval: Business Train: Sci/Tech |
| Table 12: Examples of label-mismatched eval and nearest train examples for each category. | | | |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section after conclusion (6)
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Sections 3, 4
✓ B1. Did you cite the creators of artifacts you used?
Sections 3, 4
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Appendix A
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
These are listed in Table 6 (Appendix)
## C ✓ **Did You Run Computational Experiments?** Sections 4, 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix B
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix C
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Appendix B
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
weinstein-goldberg-2023-unsupervised | Unsupervised Mapping of Arguments of Deverbal Nouns to Their Corresponding Verbal Labels | https://aclanthology.org/2023.findings-acl.184 | Deverbal nouns are nominal forms of verbs commonly used in written English texts to describe events or actions, as well as their arguments. However, many NLP systems, and in particular pattern-based ones, neglect to handle such nominalized constructions. The solutions that do exist for handling arguments of nominalized constructions are based on semantic annotation and require semantic ontologies, making their applications restricted to a small set of nouns. We propose to adopt instead a more syntactic approach, which maps the arguments of deverbal nouns to the universal-dependency relations of the corresponding verbal construction. We present an unsupervised mechanism{---}based on contextualized word representations{---}which allows to enrich universal-dependency trees with dependency arcs denoting arguments of deverbal nouns, using the same labels as the corresponding verbal cases. By sharing the same label set as in the verbal case, patterns that were developed for verbs can be applied without modification but with high accuracy also to the nominal constructions. | # Unsupervised Mapping Of Arguments Of Deverbal Nouns To Their Corresponding Verbal Labels
Aviv Weinstein Department of Computer Science Bar-Ilan University [email protected]
## Abstract
Deverbal nouns are nominal forms of verbs commonly used in written English texts to describe events or actions, as well as their arguments. However, many NLP systems, and in particular pattern-based ones, neglect to handle such nominalized constructions. The solutions that do exist for handling arguments of nominalized constructions are based on semantic annotation and require semantic ontologies, making their applications restricted to a small set of nouns. We propose to adopt instead a more syntactic approach, which maps the arguments of deverbal nouns to the universal-dependency relations of the corresponding verbal construction. We present an unsupervised mechanismbased on contextualized word representationswhich allows to enrich universal-dependency trees with dependency arcs denoting arguments of deverbal nouns, using the same labels as the corresponding verbal cases. By sharing the same label set as in the verbal case, patterns that were developed for verbs can be applied without modification but with high accuracy also to the nominal constructions.
1 Introduction Systems that aim to extract and summarize information from large text collections often revolve around the concept of predicates and their arguments. Such predicates are often realized as verbs
(*the performers interpret the music*), but the same predicative concepts can also be realized as nouns
(*musical interpretation by the performers*). This process of realizing verbal predicates as nouns is called *nominalization*, and it involves changing the syntactic structures around the content words participating in the construction, while keeping its semantics the same. In this work, we are interested in mapping arguments of nominal constructions that appear in text, to the corresponding ones in verbal structures (i.e., to identify the syntactic object role of *music* and syntactic subject role of *performers*,
in *music interpretation by the performers*).
Yoav Goldberg Department of Computer Science Bar-Ilan University [email protected]
![0_image_0.png](0_image_0.png)
Nominalizations, also known as nominal predicates, are nouns derived from words of a different part of speech, such as verbs or adjectives.
For example, in English1, the nominalization *interpretation* is derived from the verb *interpret*, and the nominalization *precision* is related to the adjective *precise*. The usage of nominalizations is widespread in English text, and according to Gurevich et al. (2007), about half of all sentences in written texts contain at least one nominalization.
In our work, we observed a ratio of 120k nominalizations to 180k verbs, in a random collection of 100k Wikipedia sentences. Thus, interpretation of nominalizations is central to many language understanding tasks. In the current work, We focus on nominalizations which are derived solely from verbs, commonly called deverbal nouns.
Existing attempts around identifying arguments of nominalizations either rely on a predefined semantic roles ontology (e.g., SRL based roles such 1While this work focuses on English nominalizations, the phenomena itself is not English specific.
as those in VerbNet (Schuler, 2005) or FrameNet
(Baker et al., 1998)) as suggested by Pradhan et al.
(2004), Pado et al. ´ (2008) and Zhao and Titov
(2020), or consider a limited subset of nominalized structures (Lapata (2000) and Gurevich and Waterman (2009)). Early works approached the task in a fully supervised manner (Lapata (2000),
Pradhan et al. (2004)), hence suffering from insufficient annotated nominal data. To overcome that, Pado et al. ´ (2008) and more recently Zhao and Titov
(2020) considered a transfer scenario from verbal arguments to nominal arguments while assuming only supervised data for verbs. Nevertheless, their methods were limited to specific predicates, even with extensive annotated verbal data. Moreover, the previous works considered each a different set of argument types due to supervision constraints.
Our Proposed Task Rather than relying on a predefined semantic roles ontology, in this work we propose to map the arguments of deverbal nouns to the *syntactic* arguments of the corresponding active verbal form. This allows us to define a task with a consistent and a restricted label set (syntactic subject, syntactic object, syntactic prepositional modifier with preposition X), while still maintaining expressivity: if one knows how to extract the verbal argument from the active verbal form, they will be able to also extract the nominal ones.
A natural formulation is to ask "How will this verb arguments be realized in a deverbal noun construction?". However, this approach is problematic, as the same verbal structure, e.g. IBM appointed Sam as manager, can be realized in many different ways around the same nominalization, including: IBM's appointment of Sam as manager, Sam's appointment as manger by IBM and Sam's IBM
appointment as manager.
One solution would be to ask for all the possible nominal realizations. This is the approach taken by nominalization lexicons such as NomLex (Macleod et al., 1998). However, this is also problematic in practice, as the different possible syntactic structures may conflict when encountering a nominalization within a sentence (*IBM's appointment* vs.
Sam's appointment).
We resolve this by asking the opposite question:
"given a nominalized instance within a sentence and its set of arguments, how will these arguments map to those of an active verb construction?". That is, rather than asking "how will this verbal construction be realized as a nominal one" we ask "how will this nominal case be realized as an active verb construction". Using this formulation, we define a corpus enrichment task, in which we take in a corpus of syntactic trees, and annotate each deverbal noun case with its nominal arguments, using the corresponding verbal argument labels. An example of the trees enrichment is provided in Figure 1.
Potential Utility Our motivation follows that of Tiktinsky et al. (2020): we imagine the use of the enhanced trees in systems that integrates universal dependency trees (Nivre et al., 2016) as part of their logic, using machine-learned or pattern-based techniques. Our proposed enrichment will allow users to search for a verb construction, and retrieve also nominal realizations of the same relation.
One proposed usage case regards the task of Open Information Extraction (OpenIE; Etzioni et al., 2008), which refers to the extraction of relation tuples from plain text, without demanding a predefined schema. These tuples can be extracted from both verbal and nominal phrases, e.g., the tuple (Steve Jobs; founded; Apple) from the phrase Steve Jobs founded Apple and the tuple (IBM; research) from the phrase *IBM's research*. Some OpenIE systems, such as Renoun (Yahya et al.,
2014) and Angeli et al.'s (2015) system, integrate rule-based patterns to extract such relations from nominal phrases, e.g., (X; Y) from phrases of the structure "X's Y". However, these patterns can be misleading, as *IBM's research* interprets differently from *Rome's destruction* (IBM researched vs. Rome was destructed), leading to contradicting relations. To overcome that, we suggest using verbbased patterns to extract relations from nominal phrases, upon integrating our enhanced trees. Concretely, based our enhanced trees, an OpenIE system can use a pattern that detects the nsubj-phrase and dobj-phrase for both verbs and nouns, to construct the relation tuple (nsubj; verb/noun; dobj).
With this approach, different nominal phrases with the same syntactic structure, would properly map to different ordered relations, as (destruction; Rome)
for the phrase *Rome's destruction*.
An Unsupervised Approach We take an unsupervised approach to this nominal-to-verbal argument mapping, relying on pre-trained contextualized word representations. The intuition behind our approach is that in order to resolve nominal arguments to verbal ones, there are two prominent signals: the semantic types of the arguments, and their syntactic configuration with respect to their predicate. We hypothesize that pre-trained contextualized word embeddings capture both of these signals (as shown in Section 7.2), and also capture the similarities between the verbal and nominal cases (as demonstrated in Appendix A). Briefly, our approach works by identifying the candidate arguments of each deverbal noun instance, retrieving a set of sentences containing the corresponding active verb form, encoding both the deverbal noun instance and the active verb sentences using a masked language model, and searching for a mapping that maximizes some similarity metric between the nominal argument candidates and the verbal instances.
Our contributions in this work are thus twofold: (1) we formulate the task of aligning nominal arguments to the arguments of their corresponding active verbal form; and (2) we propose an unsupervised method for tackling this task. We also provide code2for enriching universal dependency trees (Nivre et al., 2016) with nominal arguments.
## 2 Deverbal Nouns
Deverbal nouns are one type of nominalizations which are derived specifically from verbs, e.g., the deverbal noun *treatment* is derived from the verb treat. The events represented by deverbal nouns are described using phrases in the sentence that complement the nouns. The arguments of the deverbal noun correspond to the arguments of the matching verb; each matches a different question about the action taken. For instance, in the phrase professional treatment of illness, *professional* refers to the actor/subject of the verb *treat* (professionals),
and *illness* refers to the object of the action *treat*.
The deverbal nouns, as typical nouns, are most often complemented by other noun phrases (treatment of illness, *his treatment* and *health treatment*)
and adjectives (*professional treatment*). Implicit and other types of complementing arguments are not considered part of this work's scope. Each deverbal noun defines a unique structure of these arguments, assigning different roles for the same typed arguments. For instance, consider the phrases *time* preference of the individual and *individual waste* of time, which match the same syntactic structure
("noun-compound of noun"). However, the first sentence matches the structure "Obj Noun of Subj" 2Our code is available at https://github.com/
AvivWn/NounVerbUDTransfer
("individuals2 prefer time1"), and the second sentence refers to the structure "Subj Noun of Obj"
("individual1 waste time2"). Furthermore, even the same deverbal noun may demand different labels for similar arguments in different contexts. For example, in the phrase "Rome's destruction", *Rome* was destroyed, whereas in the phrase "Rome's destruction of the city", *Rome* is the destroyer. Therefore, the argument roles are not determined solely by syntactic structure, and incorporate a mix of syntactic configuration, argument semantics, and predicate-specific information.
## 3 Related Works
Arguments of nominalizations were long investigated in the field of NLP. One early research explored the syntactic structure of the arguments and modeled the structure of many nominalizations, resulting in a detailed lexicon called NomLex (Macleod et al., 1998). The lexicon seeks to describe the allowed complements structures for a nominalization and relate the nominal complements to the arguments of the corresponding verb.
Following the publishing of NomLex, Meyers et al.
(1998) described how an Information Extraction
(IE) system could exploit the linguistic information in the NomLex lexicon. Yet, the suggested approach remained hardly utilized by further research, as many works only exploited the verb-noun pairs specified by the lexicon.
Regarding identifying and labeling nominalization's arguments, a supervised approach was suggested while considering various task settings. One preceding paper by Lapata (2000) presented a probabilistic procedure to infer whether the modifier of a nominalization (the head noun) stands in subject or object relation with it. For instance, the algorithm should predict that the modifier's role in the phrase *child behavior* is subject since the phrase refers to the *child* as the agent of the action described by the verb *behave*. Stated differently, this procedure focuses on extracting only one specific argument of nominalizations in a noun phrase. Another distinguished paper by Pradhan et al. (2004)
considered FrameNet-based (Baker et al., 1998) semantic arguments of nominalizations and applied a machine learning framework for eventive nominalizations in English and Chinese, aiming to identify and label their arguments. Finally, Kilicoglu et al.
(2010) published a similar approach for nominalizations used in biomedical text.
Some related works acknowledge the shortage of labeled argument nominalizations and suggest unsupervised methods for data expansion based on labeled argument verbs. Similarly to ours, these works exploited the similarity and alignment of the noun-verb arguments. For example, Pado et al. ´
(2008) and Zhao and Titov (2020) considered the argument labeling task for nominalizations in a setup where the verbal sentences are human labeled, and with regards to semantic role labeling
(SRL) arguments. Pado et al. ´ (2008) exploited the similarities between the argument structure of event nominalizations and corresponding verbs while utilizing common syntactic features and distributionalsemantic similarities. More recently, Zhao and Titov (2020) suggested a variational auto-encoder method, in which the labeler serves as an encoder, whereas the decoder generates the selectional preferences of the arguments for the predicted roles.
A different approach taken by Gurevich and Waterman (2009) using a fully unsupervised manner while automatically extracting and labeling verbal arguments of verbs from a large parsed corpus of Wikipedia. This approach resembles an intermediate stage of ours yet differs as it considers a reduced set of argument types (subject and object) and a reduced possible set of argument syntax for the nominalizations (possessive and 'of' arguments).
Lately, Lee et al. (2021) engaged with a different task with similar applications. They suggested an unsupervised method for paraphrasing clauses with nominalizations into active verbal clauses.
## 4 Task Definition
As discussed in the introduction, we define a task of labeling the arguments of deverbal nouns within a sentence, with labels of the arguments in the corresponding active verb constructions. Here we provide a more complete and formal definition. While our aim is to label all of the deverbal nouns in a given corpus, here we focus on describing the task with relation to a single instance of a sentence and a deverbal noun within it.
We consider the syntactic arguments of active verbal forms to belong to the set L consisting of the universal dependency relations nsubj, *dobj* and nmod:X, where X is a preposition (e.g., *nmod:in*,
nmod:on, *nmod:with*). In words, the syntactic subject, syntactic object, and arguments attached as prepositional phrases where the identity of the preposition is part of the relation. While these prepositions may correspond to many different semantic roles, for a given verb they usually indicate a concrete and unique role.
Formally, given a sentence with words w1*, . . . , w*n, and a marked deverbal noun within the sentence (say in position wi), we seek to find K
pairs of the form (relk, wjk
), 1 ≤ k ≤ K, where relk ∈ {*nsubj, dobj, nmod*:X} and wjk is a word in the sentence (jk is an index of a sentence word).
For simplicity, we also demand that every relation type cannot be repeated more than once in the identified set of pairs. These pairs indicate arguments of the deverbal noun and their relations to it, expressed using an active-verb label set.
In Figure 1, the blue edges of the bottom tree indicate the output *(nsubj, 1), (dobj, 6)*. Note that the task includes both the *identification* of the arguments and their *label assignment*.
## 5 Methodology
While we intend to handle all deverbal nouns in a given collection of sentences, here we focus on how to resolve a single deverbal noun. We identify deverbal nouns and their corresponding verbal forms based on a given lexicon of verb-noun pairs, which we consider as input. In this work, we use the NomLex lexicon (Macleod et al., 1998), where future work can also replace this with a learned model.
Given a deverbal noun within a sentence, we first identify its potential arguments. This is realized by searching a set of syntactic relations in the corresponding universal dependency tree (we use the UDv1 parser trained by Tiktinsky et al. (2020) via the spaCy toolkit3). We then label the arguments by comparing their contextualized word embeddings to those of the corresponding verb arguments, in a set of sentences containing this verb (we further motivate this comparison in Appendix A). Finally, based upon the labeled arguments, we construct the final output as pairs of the arguments' label (i.e.
verbal UD relation) and the arguments' head word.
## 5.1 Argument Identification
Given a sentence and a specific deverbal noun within, we first identify the phrases which could correspond to the desired arguments of the matching verb. The identified set of phrases is referred to as "argument candidates". Naively, every phrase in the sentence can complement the deverbal noun 3https://spacy.io
$$\ell_{n}=\arg\operatorname*{max}_{\ell}s i m(\mathbf{a_{n}},a v g(\{{\tilde{\mathbf{a}}}\mid\ell({\tilde{a}})=\ell,{\tilde{a}}\in{\tilde{A}}\}))$$
$$\begin{array}{l}{{\ell_{n}=\arg\operatorname*{max}_{\ell}s i m(\mathbf{a_{n}},a v g(\{{\tilde{\mathbf{a}}}\mid\ell({\tilde{a}})=\ell,{\tilde{a}}\in{\cal A}\}))}}\\ {{\ell_{n}=\arg\operatorname*{max}_{\ell}s u m(\{s i m(\mathbf{a_{n}},{\tilde{\mathbf{a}}})\mid\ell({\tilde{a}})=\ell,{\tilde{\mathbf{a}}}\in k n n(\mathbf{a_{n}},{\tilde{\mathbf{A}}},k)\})}}\end{array}$$
ℓ
and be considered as an argument, thus resulting in a relatively large set of candidates. To reduce this set, we consider the syntactic dependency tree of the sentence, searching for words that stand with direct dependency relation with the deverbal noun.
Then, for every identified word we construct the argument candidate as the phrase corresponding to the subtree headed by this word according to the dependency tree. More specifically, we observed that arguments of deverbal nouns are realized using words that stand with the deverbal nouns in a small set of possible syntactic relations: nmod:poss, compound, *amod*, and *nmod:X*. Table 1 provides an example of these syntactic relations, using argument candidates for the deverbal noun *analysis*. In Section 7.1 we compare this approach and other considered approaches to identify the arguments.
| Phrase | UD Relation |
|----------------------|---------------|
| his analysis | nmod:poss |
| data analysis | compound |
| linguistic analysis | amod |
| analysis of the data | nmod:of |
Table 1: The types of UD relations we used to identify candidate arguments, and their example with the deverbal noun *analysis*.
## 5.2 Argument Labeling
Upon argument identification, we aim to label the identified argument candidates of the deverbal nouns, with the desired argument types (*nsubj*,
dobj, *nmod:X* or ∅), such that the labels align to the labels of the corresponding arguments in the active verbal form (the label ∅ indicates that this argument candidate is not in fact an argument of the noun, such as *primary* in the phrase the primary influence). For instance, in the sentence *The emperor's destruction of Paris*, we wish to label the emperor as *nsubj* and Paris as *dobj*, since the sentence can only be understood as the verbal sentence The emperor destroyed Paris.
Concretely, denote the argument candidates as a1*, . . . , a*N . We need to assign them with labels
$$(1\mathrm{a})$$
$$(1\mathbf{b})$$
ℓ1*, . . . , ℓ*N , where ℓi ∈ {∅*, nsubj, dobj, nmod*:X},
under the constraint that every two arguments ai, aj , can share labels if and only if they match the label ∅ (as emphasized in the defined task).
We start from obtaining a set of verbal reference sentences S, containing M sentences s1*, . . . , s*M,
each sentence sm contains the verbal form of the deverbal noun (these are obtained using a simple keyword search). In each of these instances sm, we use simple active and passive verbal dependency patterns to identify the Am verbal arguments a˜
m 1
, ..., a˜
m Am
, labelled as ˜ℓ m 1
, . . . ,
˜ℓ m AM
. Intuitively, we now seek to find for each of our nominal argument an the most similar verbal argument a˜
m j
,
and match their labels. In our experiments, we obtained a set S containing about 1,500 reference sentences4regarding every verb that was required by the evaluation datasets.
We encode both the input sentence and the reference sentences using a contextualized encoder (we use BERT-large-uncased (Devlin et al., 2018) in this work), resulting in vectors a1*, . . . ,* aN for the input sentence and vectors ˜am 1
, ..., ˜am Am for each verb reference sentence sm. We denote the entire set of verbal arguments as A˜ and the corresponding set of vectors as A˜ . We use a metric function sim(a, ˜a) over the pair of vectors to quantify their similarity (we use *cosine* similarity in this work).
We then choose the label of each nominal argument an independently5 based on its closest neighbours in A˜ . We consider two variants: in the first one
(1a, nearest-avg-argument), we select the label ℓn by averaging the reference vectors for each verbal argument label, and then choosing the label whose corresponding average vector is the most similar to the nominal argument's vector. In the second variant (1b, k-nearest-argument), we take the knearest verbal argument vectors (we use k=5) to the nominal argument vector. We compute the sum of similarities between an and each of the k-nearest vector ˜a corresponding to each label, and choose the label with the highest sum.
For both labeling variants, we assign the label ∅
for arguments whose similarity with any other reference argument does not pass a chosen threshold.
## 6 Evaluation Data
Our task is to identify arguments of deverbal nouns and assign each one of them a label from the set L = {nsubj, dobj, *nmod:X*}. For evaluation, we need sentences with deverbal nouns whose arguments are labeled with these relations. For example, the deverbal noun *relocation* in the phrase *Family* relocation to Manchester should be labeled with the pairs *(nsubj, 1)* and *(nmod:to, 4)*, as specified in Section 4.
We create three such evaluation datasets, the first based on a nominalization paraphrasing dataset, and the other two are based on the NomLex lexicon, while they differ by the coverage of deverbal nouns that they consider, as we further explain. Moreover, to compare our method's performance to earlier works, we consider the CoNLL-2009 dataset (Hajicˇ
et al., 2009) for evaluation, as we discuss in 7.3.
The paraphrasing-derived evaluation set is derived from a manually annotated dataset for the task of paraphrasing sentences from nominal to verbal form (Lee et al., 2021). The original dataset includes a collection of 449 samples from 369 unique sentences representing 142 different verbs. Each sample represents a paraphrasing between the original nominalization phrase (from a given sentence)
and a verbal clausal phrase, for instance *genetic* analysis from a sample which is paraphrased as analyze genes from a sample. For every paraphrasing sample, the dataset specifies the components of the nominal phrase within the structure "*adj/noun* nominalization *prep pobj*", and the components of the active verbal phrase ("*arg0* verb *arg1 pp*").
To construct our evaluation set based on this data, we first match each of the nominal components adj/noun and pobj with a verbal component from the set of arg0, arg1 and pp, choosing the one with the closest orthography to the nominal one.
From this, we derive the verbal argument labeling for the components of the nominal phrase. Then, we replace each verbal label with its matching UD
relation.6 Finally, for every nominal component we determine its head word position in the given context. The word positions paired with the matching verbal relations, construct a sample in our new paraphrasing-derived evaluation set.
In the course of dataset construction, we filter out some data samples. To start with, data samples that specify two nominal components that match the same verbal component were removed from our dataset, as they do not fit the constraints of the defined task. For example, in the phrase *environmental assessment for the project* the combined components of the noun can be understood together as the object of the matching verb (*assess the environmental impact of the project*), hence resulting with two nominal arguments labeled with the same verbal relation. Secondly, we consider only the first single data sample for every repeated nominal phrase to ensure a single truth of labeling for every nominal phrase. Following the filtering process we remain with 309 samples with 122 different verbs.
The NomLex evaluation sets are constructed using the NomLex lexicon.7 The NomLex lexicon contains a list of about 4k deverbal nouns, and for each of them specifies the various ways in which their arguments can be realized syntactically, and how they map to the corresponding verbal arguments. For example, an adapted NomLex entry for a deverbal noun like *destruction* would specify the related forms of the noun (i.e., the verb and other related deverbal nouns) and, most significantly, a set of dependency-tree patterns corresponding to several different realizations of the noun. Each dependency-tree pattern represents a set of labeled arguments in a specific dependency tree. For instance, the entry of *destruction* would contain a pattern that corresponds to the dependency structure shown in the middle of Figure 1 and demands the labeling of *Rome* as subject and *city* as object. Hence, using a parsed dependency tree of a sentence with a deverbal noun, we can extract the labeled arguments in the sentence for any specified pattern that fulfills the sentence's dependency structure. However, this method does not allow for a definitive decision in many cases, as the lexicon often contains multiple labeled contradicting patterns. In Section 7 we show that relying solely on NomLex results in a significantly lower precision.
We collect English Wikipedia sentences from 7We converted the NomLex lexicon from its original LISPbased formatting and phrase-structure trees, to a more modern form encoded in JSON and using UD syntactic relations. The code for this conversion is accessible at https://github.
com/AvivWn/NounVerbUDTransfer.
Guo et al. (2020) that contain a deverbal noun, and for each sentence, we identify the deverbal noun's arguments and labels based on the adapted NomLex entry as described above. We discard sentences for which the entry suggests two or more different assignments, when matching two or more dependency patterns. We then map NomLex's labels into the corresponding dependency relations of the active verbal form. To match the examples in the paraphrasing dataset, we consider only data samples with two labeled arguments each.
We divide the collected samples into two evaluation sets based on the verbal form of the represented deverbal nouns. **NomLex***paraphrasing* considers only samples which refer to verbs that appeared in the paraphrasing-derived corpus, whereas NomLex*other* considers samples that match 315 other verbs. In each evaluation set, we keep 25 labeled sentences for each verb.
Tune/Test Split Our method is unsupervised but still requires tuning of hyperparameters. We keep a tuning subset for each origin of the evaluation set (paraphrasing-derived and NomLex), which is also used for evaluation during development. In the paraphrasing dataset, we sample 20% of the dataset to construct the tuning set while keeping aside 80%
of the dataset for evaluation. Out of the 122 verbs in the paraphrasing-derived evaluation set, 12 appear only in the tuning set, 83 only in the test set, and 27 appear in both sets. The split aims to ensure that the results are not verb-specific and to prevent overfitting, as we do hyperparameter optimization on the tuning set, which does not contain all the verbs that appear in the test set. To tune the method for NomLex-based data, we perform a similar tune-test split on NomLex*paraphrasing* based upon the same tune-test verb division made for the paraphrasing evaluation set. Concretely, NomLex instances of the 12 tuning-only verbs and 83 test-only verbs were included only in the NomLex tuning set and test set, correspondingly; Instances of the 27 common verbs were divided into the tune-test sets in a 20%-80% ratio. Moreover, we preserve entirely NomLex*other* corpus for testing.
Evaluation Metrics We use two evaluation metrics: **Relation-F1** is the F1 score of all the predicted word-relation pairs compared to the gold labeled pairs (without distinguishing argument labels, for comparability with Zhao and Titov (2020)
which uses CoNLL-2009 evaluation scorer (Hajicˇ
et al., 2009)). **Exact-Match** scores how many noun instances had all their relations identified and labeled correctly. A predicted relation is considered correct if it matches both the same argument head word and the same label as the gold relation.
## 7 Experiments And Results
In this section, we consider the results of our method on the evaluation sets and experiments we conducted concerning the two stages of our method.
The setup which produced the best results is discussed in 7.2, including the chosen hyperparameters, which were tuned over the tuning sets.
Baseline As a baseline for our approach, we considered the same process we used for generating the NomLex evaluation sets. More specifically, for a given parsed sentence with a given deverbal noun, our baseline method attempts to match the deverbal noun instance with all dependency patterns in appropriate entry within the adapted NomLex lexicon. Every fulfilled pattern should result in a set of labeled arguments. The combined set of noncolliding arguments, i.e., arguments that match a single argument type, are then mapped into pairs of headwords and UD relations, which are also the output of the baseline method.
## 7.1 Argument Identification
Using the set of relation labels in Section 5.2 and considering each one of them as an argument candidate, we cover 94.6% of all the relations in our paraphrasing-derived test-set, while producing 76 candidates (16.2% of all proposed candidates) that are not arguments. We find this to be of sufficient coverage and accuracy for the paraphrasing dataset.
Regarding the NomLex evaluation sets, all arguments were identified using that relations set (100%
coverage), while producing 24.8% and 23.1% nonargument candidates for NomLex*paraphrasing* and NomLex*other*, respectively. As NomLex does not consider adjectival arguments, we choose to consider a reduced set of dependency relations without the *amod* relation, keeping the same coverage and producing only 8.8% and 8.7% non-argument candidates, respectively.
For the paraphrasing-derived dataset we also considered two other alternatives: relying on the information in the NomLex lexicon for each noun, resulting in coverage of 58.5% and producing 6.9%
non-argument candidates, and relying on NomLex
| Paraphrasing-derived | NomLexparaphrasing | NomLexother | | | | |
|-------------------------|----------------------|---------------|-------|-------|-------|-------|
| Method | F1 | Exact | F1 | Exact | F1 | Exact |
| baseline (NomLex-based) | 43.42 | 7.66 | - | - | - | - |
| all-subject | 27.67 | 0.00 | 37.04 | 0.00 | 41.52 | 0.00 |
| all-object | 36.50 | 0.00 | 40.24 | 0.00 | 38.19 | 0.00 |
| nearest-avg-argument | 44.08 | 17.74 | 39.81 | 18.38 | 40.10 | 19.49 |
| k-nearest-argument | 62.93 | 36.29 | 53.74 | 34.98 | 53.67 | 35.06 |
lexicon while also considering *amod* relations, resulting in an increased coverage (85.3%) and increased non-argument candidates (13.9%). These low coverage results are anticipated as NomLex lexicon lacks the representation of some nominal structures, hence we chose the label-set approach as it was the most effective one.
We explored the resulted argument candidates and gathered three main reasons for the nonargument candidates. First, some correspond to arguments missing in the evaluation set. In the paraphrasing set, this is due to the focus on two arguments structure for each deverbal noun; In contrast, in the NomLex evaluation sets, this is primarily due to discarding of undetermined arguments and for the lack of prepositional adjuncts representation (which are captured using the dependency relations). Other resulted non-argument candidates are misaligned with the correct arguments, not sharing the same head-word, as emerged from a humanbased evaluation set (such as paraphrasing-derived).
Finally, the remaining non-arguments are indeed not an argument of the noun.
## 7.2 Argument Labeling
Main Results We experiment with two different labeling methods, as discussed in Section 5.2:
nearest average of reference argument representations for each argument (nearest-avg-argument); knearest reference arguments (k-nearest-argument).
The results of the various labeling methods are shown in Table 2 while considering the most suitable identification method for every evaluation set as raised from the argument identification comparison. We report our results on the three test sets and in comparison with the performance of the baseline method and naive 'all-subject' and 'all-object' methods (which label all argument relations with nsubj and *dobj*, respectively). As articulated from our results, both labeling methods performed better than the baseline regarding the paraphrasing evaluation set. Moreover, k-nearest-argument outperformed nearest-avg-argument on all metrics of all evaluation sets. Best results were attained by calibrating the methods on the matching tuning sets, e.g., selecting a specific threshold for labeling
∅-typed arguments (0.56 for paraphrasing tune-set and 0.48 for NomLex tune-set). Yet, we examined similar performance tendencies between the tuning sets and the test sets (see Appendix B), implying a generalization of our method for other examples.
We further validated our method generalization for any arbitrary verb, by scoring relatively similar results on NomLex*other* as on NomLex*paraphrasing* without additional tuning, while each considers nouns that match a different set of verbs. The extended results in Appendix B also demonstrate the Relation-F1 scores of our best method regarding the most common relations in the test sets.
## Importance Of Contextualization Arguments Of
verbs and deverbal nouns share semantics, as both commonly paraphrase the same entity in different contexts. For instance, the subject of the verb *acquire* usually matches the semantic role of a 'HUMAN' (*John acquired the ingredients*) or a 'COMPANY' (*Apple acquired another startup company*).
The same subjects can be realized in a deverbal noun context, as in The ingredients acquisition of john and *Apple's acquisition of the startup company*, correspondingly. The semantic role of words can be represented by vector representations, both contextualized representations as BERT and uncontextualized representations as Word2Vec (Mikolov et al., 2013) vectors. We compared our main results with pre-trained BERT-based representations to uncontextualized representations, using pre-trained Fasttext Word2Vec model made by Bojanowski et al. (2017). The results of our method regarding the two representations are shown in Table 3. Using Word2Vec we see a decrease of about 25% in Relation-F1 and about 40% in Exact-Match compared to BERT results using our best method, from which we conclude that the context of the argument also affects the performance of our method.
| Method | BERT | Word2Vec |
|-----------------|---------------|---------------|
| nearest-avg-arg | 44.08 (17.74) | 20.78 (4.44) |
| k-nearest-arg | 62.93 (36.29) | 46.53 (21.37) |
Table 3: The best results of the suggested labelers using BERT and Word2Vec representations, on the paraphrasing test set, specified as "Relation-F1 (Exact-Match)".
Syntax vs Semantics The previous experiment has demonstrated that the contextualized vectors outperform the static ones, suggesting the need for more than word semantics. In the following experiment, we further quantify the contribution of syntactic position vs. argument semantics to the final predictions. We manipulate the paraphrasing evaluation set by switching the sentence positions of the two specified arguments for each tagging sample. Note that the resulting sentence is usually neither grammatically nor semantically correct.
Then, we apply our labeling stage while considering the BERT vectors over the arguments in the new positions. When compared to the labels of the same arguments received in the original positions, we see almost 70% difference. Thus, the syntactic position has an innegligible effect on the verb-noun alignment that our method aims to resolve.
## 7.3 Comparison To Earlier Work
Existing unsupervised attempts that approach the nominal argument labeling task as a transfer scenario from verbal arguments to nominal arguments
(as our work), rely on a predefined semantic roles ontology. For instance, Zhao and Titov (2020) consider SRL roles of verbs to label nouns with the same set of roles, as appears in CoNLL-2009 dataset (Hajic et al. ˇ , 2009). Our defined task and proposed methods do not require a predefined semantic roles ontology, yet can be tested on one for comparability with such existing work. Thus, we apply our labeling methods on CoNLL-2009 nominal test data after verbalizing the nominal predicates in the dataset while considering the CoNLL-2009 verbal train data as verbal references.
For evaluation comparability with Zhao and Titov
(2020), we skip the argument identification stage and assume the identified arguments are given. Finally, we calculate the F1 performance (as discussed for "Relation-F1" in Section 6) of our methods, which we compare to the matching ones reported by Zhao and Titov (2020). As shown in Table 4, our best method ('k-nearest-argument')
outperforms their baselines ('Most-frequent', 'Factorization' and 'Direct-transfer'). However, their
'Full-system' approach transcends our method by exploiting a supervised verbal SRL system and data augmentations, which we do not use in our work.
| Method | F1 |
|---------------------------|-------|
| Most-frequent | 56.51 |
| Factorization | 44.48 |
| Direct-transfer | 55.85 |
| Full-system | 63.09 |
| k-nearest-argument (Ours) | 58.82 |
Table 4: F1 results reported by Zhao and Titov (2020)
on CoNLL-2009 nominal test data, compared to the result of our best labeler applied on the same dataset.
## 8 Conclusions
In this work, we formulate the task of aligning arguments of deverbal nouns to the arguments of their corresponding active verbal form. We formulate the task as a UD enrichment task, aiming to enrich deverbal nouns in text with verbal UD relations for the matching nominal arguments. Our formulation, compared to the ones suggested in previous works, does not rely on a predefined roles ontology.
We suggest an unsupervised approach to this nominal-to-verbal argument mapping based on pretrained contextualized word representations. Our method tries to match nominal identified arguments with automatically extracted arguments of the corresponding verb. The suggested method outperforms the NomLex-based baseline, which is based on an expertly constructed comprehensive lexicon.
We also show the importance of contextualization, experiencing a 25% decrease in performance when using uncontextualized vectors. Moreover, we further validate our hypothesis that semantics and syntactic structure are captured in the considered word representations using a dedicated experiment.
We provide a standalone code for enriching universal dependency trees with nominal arguments for a given parsed corpus, which can be integrated into NLP systems that use universal dependency patterns as part of their design or features.
## Limitations
The main drawback of the work is in its evaluation, which was performed on datasets which were not manually annotated for the task, but adapted to it in various means. While we believe these evaluation sets do provide a strong indication regarding task performance, evaluating on bespoke data explicitly annotated for the task is usually preferable. Another limitation is language specificity: the work currently focuses on English, without considering other languages, which are also left for future work.
## Ethics Statement
Like all works that depend on embeddings, the resulting models may be biased in various ways.
Users should take this into consideration when deploying them in products.
## Acknowledgements
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT).
## References
Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D Manning. 2015. Leveraging linguistic structure for open domain information extraction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 344–354.
Collin F Baker, Charles J Fillmore, and John B Lowe.
1998. The berkeley framenet project. In 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 1, pages 86–90.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. *Transactions of the Association for Computational Linguistics*, 5:135–146.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. *CoRR*, abs/1810.04805.
Oren Etzioni, Michele Banko, Stephen Soderland, and Daniel S Weld. 2008. Open information extraction from the web. *Communications of the ACM*,
51(12):68–74.
Mandy Guo, Zihang Dai, Denny Vrandeciˇ c, and Rami ´
Al-Rfou. 2020. Wiki-40b: Multilingual language model dataset. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 2440–
2452.
Olga Gurevich, Richard Crouch, Tracy Holloway King, and Valeria De Paiva. 2007. Deverbal nouns in knowledge representation. Journal of Logic and Computation, 18(3):385–404.
Olga Gurevich and Scott Waterman. 2009. Mining of parsed data to derive deverbal argument structure. In Proceedings of the 2009 Workshop on Grammar Engineering Across Frameworks (GEAF 2009), pages 19–27, Suntec, Singapore. Association for Computational Linguistics.
Jan Hajic, Massimiliano Ciaramita, Richard Johans- ˇ
son, Daisuke Kawahara, Maria Antonia Mart ` ´ı, Llu´ıs Marquez, Adam Meyers, Joakim Nivre, Sebastian `
Pado, Jan ´ Step ˇ anek, et al. 2009. The conll-2009 ´
shared task: Syntactic and semantic dependencies in multiple languages.
Halil Kilicoglu, Marcelo Fiszman, Graciela Rosemblat, Sean Marimpietri, and Thomas C Rindflesch. 2010.
Arguments of nominals in semantic interpretation of biomedical text. In *Proceedings of the 2010 workshop on biomedical natural language processing*,
pages 46–54. Association for Computational Linguistics.
Maria Lapata. 2000. The automatic interpretation of nominalizations. In *AAAI/IAAI*, pages 716–721.
John Lee, Ho Hung Lim, and Carol Webster. 2021. Paraphrasing compound nominalizations. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8023–8028, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Catherine Macleod, Ralph Grishman, Adam Meyers, Leslie Barrett, and Ruth Reeves. 1998. Nomlex: A
lexicon of nominalizations. In *Proceedings of EURALEX*, volume 98, pages 187–193.
Adam Meyers, Catherine Macleod, Roman Yangarber, Ralph Grishman, Leslie Barrett, and Ruth Reeves.
1998. Using nomlex to produce nominalization patterns for information extraction. In *The Computational Treatment of Nominals*.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. *arXiv preprint* arXiv:1301.3781.
Joakim Nivre, Marie-Catherine De Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal dependencies v1: A multilingual treebank collection. In *Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)*, pages 1659–1666.
Sebastian Pado, Marco Pennacchiotti, and Caroline ´
Sporleder. 2008. Semantic role assignment for event nominalisations by leveraging verbal data. In *Proceedings of the 22nd International Conference on* Computational Linguistics-Volume 1, pages 665–672.
Association for Computational Linguistics.
Sameer Pradhan, Honglin Sun, Wayne Ward, James H
Martin, and Dan Jurafsky. 2004. Parsing arguments of nominalizations in english and chinese. In *Proceedings of HLT-NAACL 2004: Short Papers*, pages 141–144. Association for Computational Linguistics.
Karin Kipper Schuler. 2005. *VerbNet: A broadcoverage, comprehensive verb lexicon*. University of Pennsylvania.
Aryeh Tiktinsky, Yoav Goldberg, and Reut Tsarfaty.
2020. pybart: Evidence-based syntactic transformations for ie.
Laurens Van der Maaten and Geoffrey Hinton. 2008.
Visualizing data using t-sne. *Journal of machine* learning research, 9(11).
Mohamed Yahya, Steven Whang, Rahul Gupta, and Alon Halevy. 2014. Renoun: Fact extraction for nominal attributes. In *Proceedings of the 2014 conference on empirical methods in natural language* processing (EMNLP), pages 325–335.
Yanpeng Zhao and Ivan Titov. 2020. Unsupervised transfer of semantic role models from verbal to nominal domain. *arXiv preprint arXiv:2005.00278*.
## A Verb-Noun Argument Similarity
The similarity between arguments of verbs and arguments of matching deverbal noun realizations is a prominent requirement of our method. Similarly, Zhao and Titov (2020) exploit verb-noun similarities and base their approach on this assumption. To explore this similarity, we take the verbal and nominal arguments extracted by NomLex of the types SUBJECT, OBJECT, PP, and undetermined (Unknown), embed them using a pretrained BERT-large-uncased model, and compare their 2-dimensional representations (using t-SNE algorithm (Van der Maaten and Hinton, 2008) for dimension reduction). These representations are illustrated in Figure 2, demonstrating relatively similar representations between arguments of the verbs transport, *participate* and *violate* (marked as 'O') and the matching arguments of the corresponding noun forms (marked as 'Y'). More concretely, most nominal argument representations in these illustrations have a nearby verbal argument neighbor with the correct argument type. This similarity establishes the foundation of our work.
## B Extended Main Results
We provide here more information regarding our best results. In Table 5, we state the performance of all suggested methods when applied to the tuning sets, similar to our statement in Table 2. Moreover, Table 6 summarizes the number of instances for the most common verbal relations in each test set and the Relation-F1 score of every such relation. As expected, '*nsubj*' and '*dobj*' are the most common relations in the test sets. Other regarded relations are '*nmod:x*' relations and ∅ relations (referring to non-argument identifications or predictions).
![11_image_0.png](11_image_0.png)
| Paraphrasing-derived | NomLexparaphrasing | | | |
|-------------------------|----------------------|-------------|-------------|-------------|
| Method | Relation-F1 | Exact-Match | Relation-F1 | Exact-Match |
| baseline (NomLex-based) | 42.46 | 11.48 | - | - |
| all-subject | 31.62 | 0.00 | 39.16 | 0.00 |
| all-object | 34.78 | 0.00 | 37.92 | 0.00 |
| nearest-avg-argument | 54.62 | 21.31 | 44.96 | 21.99 |
| k-nearest-argument | 67.21 | 40.98 | 58.16 | 41.84 |
Table 5: The best results of the two suggested labelers on the two tuning sets, compared to the baseline process and the naive methods 'all-subject' and 'all-object'.
| Paraphrasing-derived | NomLexparaphrasing | NomLexother | | | | |
|------------------------|----------------------|---------------|---------|-------|---------|-------|
| Relation Type | Support | F1 | Support | F1 | Support | F1 |
| nsubj | 151 | 71.34 | 1910 | 62.86 | 6825 | 61.01 |
| dobj | 202 | 79.49 | 2075 | 63.08 | 6277 | 63.50 |
| ∅ | 58 | 9.45 | 382 | 13.22 | 1191 | 16.09 |
| nmod:to | 24 | 50.00 | 162 | 22.11 | 419 | 14.11 |
| nmod:with | 14 | 19.35 | 77 | 23.92 | 404 | 34.60 |
| nmod:for | 11 | 37.04 | 105 | 29.12 | 322 | 30.36 |
| nmod:from | 2 | 0.00 | 86 | 28.28 | 276 | 35.06 |
| nmod:in | 41 | 56.52 | 233 | 36.90 | 263 | 12.34 |
| nmod:as | 8 | 7.41 | 99 | 33.90 | 220 | 39.05 |
| nmod:on | 5 | 20.00 | 49 | 21.65 | 218 | 36.70 |
| nmod:into | 2 | 33.33 | 26 | 14.46 | 114 | 35.90 |
| nmod:against | 1 | 0.00 | 25 | 36.00 | 96 | 52.57 |
| nmod:over | 0 | - | 12 | 0.00 | 76 | 35.46 |
| nmod:about | 1 | 0.00 | 0 | - | 43 | 37.21 |
| nmod:at | 4 | 18.18 | 22 | 4.26 | 33 | 10.66 |
| nmod:of | 4 | 0.00 | 14 | 0.00 | 23 | 5.13 |
| nmod:towards | 0 | - | 0 | - | 17 | 51.43 |
| nmod:through | 11 | 0.00 | 8 | 17.14 | 13 | 21.05 |
| nmod:across | 0 | - | 2 | 40.00 | 9 | 26.09 |
| nmod:due to | 0 | - | 2 | 22.22 | 7 | 6.45 |
| nmod:between | 2 | 0.00 | 1 | 0.00 | 6 | 0.00 |
| nmod:among | 0 | - | 1 | 33.33 | 6 | 7.41 |
| nmod:along | 1 | 66.67 | 0 | - | 5 | 0.00 |
| nmod:by | 8 | 0.00 | 2 | 0.00 | 2 | 0.00 |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
limitations section
✓ A2. Did you discuss any potential risks of your work?
ethics statement section
✓ A3. Do the abstract and introduction summarize the paper's main claims?
abstract section
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 5
✓ B1. Did you cite the creators of artifacts you used?
5 B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 6
## C ✓ **Did You Run Computational Experiments?** 7
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
I did not train new models. Only pre-trained models were used.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
7
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
7
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
5
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
winata-etal-2023-decades | The Decades Progress on Code-Switching Research in {NLP}: A Systematic Survey on Trends and Challenges | https://aclanthology.org/2023.findings-acl.185 | Code-Switching, a common phenomenon in written text and conversation, has been studied over decades by the natural language processing (NLP) research community. Initially, code-switching is intensively explored by leveraging linguistic theories and, currently, more machine-learning oriented approaches to develop models. We introduce a comprehensive systematic survey on code-switching research in natural language processing to understand the progress of the past decades and conceptualize the challenges and tasks on the code-switching topic. Finally, we summarize the trends and findings and conclude with a discussion for future direction and open questions for further investigation. | # The Decades Progress On Code-Switching Research In Nlp: A Systematic Survey On Trends And Challenges
Genta Indra Winata1, Alham Fikri Aji2, Zheng-Xin Yong3**, Thamar Solorio**1 ∗
1Bloomberg 2MBZUAI 3Brown University [email protected], [email protected], [email protected]
## Abstract
![0_Image_0.Png](0_Image_0.Png)
Code-Switching, a common phenomenon in written text and conversation, has been studied over decades by the natural language processing (NLP) research community. Initially, code-switching is intensively explored by leveraging linguistic theories and, currently, more machine-learning oriented approaches to develop models. We introduce a comprehensive systematic survey on code-switching research in natural language processing to understand the progress of the past decades and conceptualize the challenges and tasks on the codeswitching topic. Finally, we summarize the trends and findings and conclude with a discussion for future direction and open questions for further investigation.
## 1 Introduction
Code-Switching is the linguistic phenomenon where multilingual speakers use more than one language in the same conversation (Poplack, 1978).
The fragment of the worldwide population that can be considered multilingual, i.e., speaks more than one language, far outnumbers monolingual speakers (Tucker, 2001; Winata et al., 2021a). This alone makes a compelling argument for developing NLP technology that can successfully process code-switched (CSW) data. However, it was not until the last couple of years that CSW-related research became more popular (Sitaram et al., 2019; Jose et al., 2020; Dogruöz et al. ˘ , 2021), and this increased interest has been motivated to a large extent by: 1) The need to process social media data.
Before the proliferation of social media platforms, it was more common to observe code-switching in spoken language and not so much in written language. This is not the case anymore, as multilingual users tend to combine the languages they speak on social media; 2) The increasing release of voice-operated devices. Now that smart assistants
∗ The work was done while at Bloomberg.
are becoming more and more accessible, we have started to realize that assuming users will interact with NLP technology as monolingual speakers is very restrictive and does not fulfill the needs of real-world users. Multilingual speakers also prefer to interact with machines in a CSW manner (Bawa et al., 2020). We show quantitative evidence of the upward trend for CSW-related research in Figure 1.
In this paper, we present the first large-scale comprehensive survey on CSW NLP research in a structured manner by collecting more than 400 papers published on open repositories, such as the ACL Anthology and ISCA proceedings (see §2).
We manually coded these papers to collect coarseand fine-grained information (see §2.1) on CSW
research in NLP that includes languages covered
(see §3), NLP tasks that have been explored, and new and emerging trends (see §4). In addition, motivated by the fact that fields like linguistics, socio-linguistics, and related fields, have studied Category Options Languages Bilingual, Trilingual, 4+
Venues Conference, Workshop, Symposium, Book Papers Theory / Linguistics, Empirical, Analysis, Position/Opinion/Survey, Metric, Corpus, Shared Task, Demo Datasets Social Media, Speech (Recording), Transcription, News, Dialogue, Books, Government Document, Treebank Methods Rule/Linguistic Constraint, Statistical Model, Neural Network, Pre-trained Model Tasks **Text:** Topic Modeling, Semantic Parsing, Dependency Parsing, Sentiment Analysis, Emotion Detection, Abusive Language Detection, Sarcasm Detection, Humor Detection, Humor Generation, Dialogue State Tracking, Text Generation, Natural Language Understanding, Named Entity Recognition, Part-of-Speech Tagging, Natural Language Entailment, Language Modeling, Regression, Language Identification, Machine Translation, Text Normalization, Micro-Dialect Identification, Question Answering, Summarization Speech: Acoustic Modeling, Speech Recognition, Text-to-Speech, Speech Synthesis CSW since the early 1900s, we also investigate to what extent theoretical frameworks from these fields have influenced NLP approaches (see §5),
and how the choice of methods has evolved over time (see §5.4). Finally, we discuss the most pressing research challenges and identify a path forward to continue advancing this exciting line of work
(see §6).
The area of NLP for CSW data is thriving, covering an increasing number of language combinations and tasks, and it is clearly advancing from a niche field to a common research topic, thus making our comprehensive survey timely. We expect the survey to provide valuable information to researchers new to the field and motivate more research from researchers already engaging in NLP for CSW data.
## 2 Exploring Open Proceedings
To develop a holistic understanding of the trends and advances in CSW NLP research, we collect research papers on CSW from the ACL Anthology and ISCA proceedings. We focus on these two sources because they encompass the top venues for publishing in speech and language processing in our field. In addition, we also look into personal repositories from researchers in the community that contains a curated list of CSW-related papers. We discuss below the search process for each venue.
ACL Anthology We crawled the entire the ACL Anthology repository up to October 2022.1 We then filtered papers by using the following keywords related to CSW: "codeswitch",
"code switch", "code-switching", "code-switched",
"code-switch", "code-mix", "code-mixed", "codemixing", "code mix" "mixed-language", "mixedlingua", "mixed language", "mixed lingua", and
"mix language".
ISCA Proceedings We manually reviewed publicly available proceedings on the ISCA website2 and searched for papers related to CSW using the same set of keywords as above.
Web Resources To extend the coverage of paper sources, we also gathered data from existing repositories.3,4 We can find multiple linguistics papers studying about CSW.
## 2.1 Annotation Process
We have three human annotators to annotate all collected papers based on multiple categories shown in Table 1. All papers are coded by a least one
![1_image_0.png](1_image_0.png)
![2_image_1.png](2_image_1.png)
annotator. To extract the specific information we are looking for from the paper, the annotator needs to read through the paper, as most of the information is not contained in the abstract. The full list of the annotations we collected is available in the Appendix (see §A).
To facilitate our analysis, we annotated the following aspects:
- **Languages:** CSW is not restricted to pairs of languages; thus, we divide papers by the number of languages that are covered into bilingual, trilingual, and 4+ (if there are at least four languages). For a more fine-grained classification of languages, we categorize them by geographical location (see Figure 2).
| Languages | # Publications | | | |
|-------------------|------------------|-------|-------------|----|
| *CL | ISCA | Total | Shared Task | |
| Hindi-English | 111 | 17 | 128 | 30 |
| Spanish-English | 78 | 8 | 86 | 40 |
| Chinese-English‡ | 20 | 27 | 47 | 5 |
| Tamil-English | 37 | 2 | 39 | 17 |
| Malayalam-English | 23 | 2 | 25 | 13 |
![2_image_0.png](2_image_0.png)
- **Venues:** There are multiple venues for CSWrelated publications. We considered the following type of venues: conference, workshop, symposium and book. As we will discuss later, the publication venue is a reasonable data point of the wider spread of CSW research in recent years.
- **Papers:** We classify the paper types based on their contribution and nature. We predict that we will have a high distribution of dataset/resource papers, as lack of resources has been a major bottleneck in the past.
- **Datasets:** If the paper uses a dataset for the research, we will identify the source and modality (i.e., written text or speech) of the dataset.
- **Methods:** We identify the type of methods presented in work.
- **Tasks:** We identify the downstream NLP
tasks (including the speech processing-related tasks) presented in work.
## 3 Language Diversity
Here, we show the languages covered in the CSW
resources. While focusing on the CSW phenomenon increases diversity of NLP technology, as we will see in this section, future efforts are needed to provide significant coverage of the most common CSW language combinations worldwide.
## 3.1 Variety Of Mixed Languages
Figure 3 shows the distribution of languages represented in the NLP for CSW literature. Most of the papers use datasets with two language pairs. However, we did find a few papers that address CSW scenarios with more than two languages. We
| Languages | # Publications | | |
|-----------------------------------|------------------|-------|----|
| non-ST | ST | Total | |
| Language Identification | 46 | 17 | 63 |
| Sentiment Analysis | 31 | 30 | 61 |
| NER | 17 | 14 | 31 |
| POS Tagging | 29 | 1 | 30 |
| Abusive/Offensive Lang. Detection | 9 | 16 | 25 |
| ASR | 20 | 0 | 22 |
| Language Modeling | 19 | 1 | 20 |
| Machine Translation | 8 | 5 | 13 |
consider this a relevant future direction in CSW:
scaling model abilities to cover n languages, with n ≥ 2.
CSW in two languages We group the number of publications focusing on bilingual CSW based on world regions in Figure 3 (bottom). We can see that the majority of research in CSW has focused on South Asian-English, especially on Hindi-English, Tamil-English, and Malayalam-English, as shown in Table 2. The other common language pairs are Spanish-English and Chinese-English. That table also shows that many of the publications are shared task papers. This is probably reflecting efforts from a few research groups to motivate more research into CSW, such as that behind the CALCS workshop series.
Looking at the languages covered, we also find that there are many language pairs that come from different language families, such as TurkishGerman (Çetinoglu ˘ , 2016; Çetinoglu and Çöl- ˘
tekin, 2019; Özate¸s and Çetinoglu ˘ , 2021; Özate¸s et al., 2022), Turkish-Dutch (Gambäck and Das, 2016), French-Arabic (Sankoff, 1998; Lounnas et al., 2021), Russian-Tatar (Taguchi et al.,
2021), Russian-Kazakh (Mussakhojayeva et al.,
2022a), Hindi-Tamil (Thomas et al., 2018b),
Arabic-North African (El-Haj et al., 2018), BasqueSpanish (Aguirre et al., 2022), and WixarikaSpanish (Mager et al., 2019). There are only very few papers working on Middle Eastern - English language pairs, most of the time, the Middle Eastern languages are mixed with non-English and/or dialects of these languages (see Figure 4).
Trilingual The number of papers addressing CSW in more than two languages is still small
(see 3 top), compared to the papers looking at pairs of languages. Not surprisingly, this smaller number of papers focus on world regions where
| # Publications | | | |
|------------------|------|-------|----|
| *CL | ISCA | Total | |
| Public Dataset | 38 | 4 | 42 |
| Private Dataset | 54 | 18 | 72 |
Table 4: Publications that introduce new corpus.
either the official languages are more than two, or these languages are widely used in the region, for example, Arabic-English-French (Abdul-Mageed et al., 2020), Hindi-Bengali-English (Barman et al., 2016), Tulu-Kannada-English (Hegde et al., 2022),
and Darija-English-French (Voss et al., 2014).
4+ When looking at the papers that focus on more than three languages, we found that many papers use South East Asian Mandarin-English (SEAME)
dataset (Lyu et al., 2010a), which has Chinese dialects and Malay or Indonesian words. Most of the other datasets are machine-generated using rulebased or neural methods.
## 3.2 Language-Dialect Code-Switching
Based on Figure 4, we can find some papers with language-dialect CSW, such as Chinese-Taiwanese Dialect (Chu et al., 2007; Yu et al., 2012) and Modern Standard Arabic (MSA)-Arabic Dialect (Elfardy and Diab, 2012; Samih and Maier, 2016; ElHaj et al., 2018). The dialect, in this case, is the variation of the language with a different form that is very specific to the region where the CSW style is spoken.
## 4 Tasks And Datasets
In this section, we summarize our findings, focusing on the CSW tasks and datasets. Table 3 shows the distribution of CSW tasks for ACL papers with at least ten publications. The two most popular tasks are language identification and sentiment analysis. Researchers mostly use the shared tasks from 2014 (Solorio et al., 2014) and 2016 (Molina et al., 2016) for language identification, and the SemEval 2020 shared task (Patwa et al., 2020)
for sentiment analysis. For ISCA, the most popular tasks are unsurprisingly ASR and TTS. This strong correlation between task and venue shows that the speech processing and *CL communities remain somehow fragmented and working in isolation from one another, from the most part.
Public vs. Private Datasets Public datasets availability also dictates what tasks are being
| Source | *CL | ISCA | Total | Type | *CL | ISCA | Total |
|-----------------------------------------------------------|-------|--------|---------|-----------|-------|--------|---------|
| Social Media | 183 | 3 | 186 | | | | |
| Speech (Recording) | 29 | 102 | 141 | | | | |
| Transcription | 23 | 4 | 27 | | | | |
| News | 19 | 5 | 24 | | | | |
| Dialogue | 16 | 2 | 18 | | | | |
| Books | 7 | 1 | 8 | | | | |
| Government Document | 6 | 0 | 6 | | | | |
| Treebank | 5 | 0 | 5 | Empirical | 205 | 100 | 305 |
| Shared Task | 82 | 1 | 83 | | | | |
| Corpus (Closed) | 54 | 18 | 62 | | | | |
| Corpus (Open) | 38 | 4 | 42 | | | | |
| Analysis | 34 | 8 | 42 | | | | |
| Demo | 7 | 2 | 9 | | | | |
| Theoretical/Linguistic | 7 | 0 | 7 | | | | |
| Position/Opinion/Survey | 3 | 0 | 3 | | | | |
| Metric | 2 | 1 | 3 | | | | |
| Table 5: The source of the CSW dataset in the literature. | | | | | | | |
explored in CSW research. Public datasets such as HinGE (Srivastava and Singh, 2021b),
SEAME (Lyu et al., 2010a) and shared task datasets (Solorio et al., 2014; Molina et al., 2016; Aguilar et al., 2018; Patwa et al., 2020) have been widely used in many of the papers. Some work, however, used new datasets that are not publicly available, thus hindering adoption (see Table 4). There are two well-known benchmarks in CSW: LinCE (Aguilar et al., 2020) and GlueCOS (Khanuja et al., 2020b). These two benchmarks have a handful of tasks, and they are built to encourage transparency and reliability of evaluation since the test set labels are not publicly released. The evaluation is done automatically on their websites. However, their support languages are mostly limited to popular CSW language pairs, such as Spanish-English, Modern Standard ArabicEgyptian, and Hindi-English, the exception being Nepali-English in LinCE.
Dataset Source Table 5 shows the statistics of dataset sources in the CSW literature. We found that most of the ACL papers were working on social media data. This is expected, considering that social media platforms are known to host informal interactions among users, making them reasonable places for users to code-switch. Naturally, most ISCA papers work on speech data, many of which are recordings of conversations and interviews. There are some datasets that come from speech transcription, news, dialogues, books, government documents, and treebanks.
Paper Category Table 6 presents the distribution of CSW papers. Most of the papers are empirical work focusing on the evaluation of downstream tasks. The second largest population is shared tasks.
We also notice that many papers introduce new CSW corpus, but they are not released publicly.
Some papers only release the URL or id to download the datasets, especially for datasets that come from social media (e.g., Twitter) since redistribution of the actual tweets is not allowed (Solorio et al., 2014; Molina et al., 2016) resulting in making reproducibility harder. Social media users can delete their posts at any point in time, resulting in considerable data attrition rates. There are very few papers working on the demos, theoretical work, position papers, and introducing evaluation metrics.
## 5 From Linguistics To Nlp
Notably, papers are working on approaches that are inspired by linguistic theories to enhance the processing of CSW text. In this survey, we find three linguistic constraints that are used in the literature: equivalence constraint, matrix-embedded language Framework (MLF), and Functional Head Constraint. In this section, we will briefly introduce the constraints and list the papers that utilize the constraints.
## 5.1 Linguistic-Driven Approaches
Equivalence Constraint In a well-formed codeswitched sentence, the switching takes place at those points where the grammatical constraints of both languages are satisfied (Poplack, 1980).
Li and Fung (2012, 2013) incorporate this syntactic constraint to a statistical code-switch language model (LM) and evaluate the model on ChineseEnglish code-switched speech recognition. On the same line of work, Pratapa et al. (2018a); Pratapa and Choudhury (2021) implement the same constraint to Hindi-English CSW data by producing parse trees of parallel sentences and matching the surface order of child nodes in the trees. Winata et al. (2019c) apply the constraint to generate synthetic CSW text and find that combining the real CSW data with synthetic CSW data can effectively improve the perplexity. They also treat parallel sentences as a linear structure and only allow switching on non-crossing alignments.
Matrix-Embedded Language Framework
(MLF) Myers-Scotton (1997) proposed that in bilingual CSW, there exists an asymmetrical relationship between the dominant *matrix language* and the subordinate *embedded language*.
Matrix language provides the frame of the sentence by governing all or most of the most of the grammatical morphemes as well as word order, whereas syntactic elements that bear no or only limited grammatical function can be provided by the embedded language (Johanson, 1999; Myers-Scotton, 2005). Lee et al. (2019a) use augmented parallel data by utilizing MLF to supplement the real code-switch data. Gupta et al. (2020) use MLF to automatically generate the code-mixed text from English to multiple languages without any parallel data.
Functional Head Constraint Belazi et al. (1994)
posit that it is impossible to switch languages between a functional head and its complement because of the strong relationship between the two constituents. Li and Fung (2014) use the constraint of the LM by first expanding the search network with a translation model and then using parsing to restrict paths to those permissible under the constraint.
## 5.2 Learning From Data Distribution
Linguistic constraint theories have been used for decades to generate synthetic CSW sentences to address the lack of data issue. However, the approach requires external word alignments or constituency parsers that create erroneous results instead of applying the linguistic constraints to generate new synthetic CSW data, building a pointergenerator model to learn the real distribution of code-switched data (Winata et al., 2019c). Chang et al. (2019) propose to generate CSW sentences from monolingual sentences using Generative Adversarial Network (GAN) (Goodfellow et al., 2020)
and the generator learns to predict CSW points without any linguistic knowledge.
## 5.3 The Era Of Statistical Methods
The research on CSW is also influenced by the progress and development of machine learning.
![5_image_0.png](5_image_0.png)
According to Figure 5, starting in 2006, statistical methods have been adapted to CSW research, while before that year, the approaches were mainly rule-based. There are common statistical methods for text classification used in the literature, such as Naive Bayes (Solorio and Liu, 2008a) and Support Vector Machine (SVM) (Solorio and Liu, 2008b). Conditional Random Field (CRF) (Sutton et al., 2012) is also widely seen in the literature for sequence labeling, such as Part-of-Speech (POS)
tagging (Vyas et al., 2014), Named Entity Recognition (NER), and word-level language identification (Lin et al., 2014; Chittaranjan et al., 2014; Jain and Bhat, 2014). HMM-based models have been used in speech-related tasks, such as speech recognition (Weiner et al., 2012a; Li and Fung, 2013)
and text synthesis (Qian et al., 2008; Shuang et al., 2010; He et al., 2012).
## 5.4 Utilizing Neural Networks
Following general NLP trends, we see the adoption of neural methods and pre-trained models growing in popularity over time. In contrast, the statistical and rule-based approaches are diminishing.
Compared to ISCA, we see more adaptation of the pre-training model. This is because ACL work is more text-based focused, where pre-trained LMs Neural-Based Models Figure 5 shows that the trend of using neural-based models started in 2013, and the usage of rule/linguistic constraint and statistical methods diminished gradually through time, but they are still used even with a low percentage. RNN and LSTM architectures are commonly used in sequence modeling, such as language modeling (Adel et al., 2013; Vu and Schultz, 2014; Adel et al., 2014c; Winata et al., 2018a; Garg et al.,
2018a; Winata et al., 2019c) and CSW identification (Samih et al., 2016a). DNN-based and hybrid HMM-DNN models are used in speech recognition models (Yilmaz et al., 2018; Yılmaz et al., 2018).
Pre-trained Embeddings Pre-trained embeddings are used to complement neural-based approaches by initializing the embedding layer. Common pre-trained embeddings used in the literature are monolingual subword-based embeddings, FastText (Joulin et al., 2016), and aligned-embeddings MUSE (Conneau et al., 2017). A standard method to utilize monolingual embeddings is to concatenate or sum two or more embeddings from different languages (Trivedi et al., 2018). A
more recent approach is to apply an attention mechanism to merge embeddings and form metaembeddings (Winata et al., 2019a,b). Characterbased embeddings have also been explored in the literature to address the out-of-vocabulary issues on word-embeddings (Winata et al., 2018b; Attia et al., 2018; Aguilar et al., 2021). Another approach is to train bilingual embeddings using real and synthetic CSW data (Pratapa et al., 2018b). In the speech domain, Lovenia et al. (2022) utilize wav2vec 2.0 (Baevski et al., 2020) as a starting model before fine-tuning.
Language Models Many pre-trained model approaches utilize multilingual LMs, such as mBERT
or XLM-R to deal with CSW data (Khanuja et al.,
2020b; Aguilar and Solorio, 2020; Pant and Dadu, 2020; Patwa et al., 2020; Winata et al., 2021a).
These models are often fine-tuned with the downstream task or with CSW text to better adapt to the languages. Some downstream fine-tuning approaches use synthetic CSW data due to a lack of available datasets. Aguilar et al. (2021) propose a character-based subword module (char2subword)
of the mBERT that learns the subword embedding that is suitable for modeling the noisy CSW text.
Winata et al. (2021a) compare the performance of the multilingual LM versus the language-specific LM for CSW context. While XLM-R provides the best result, it is also computationally heavy. There needed to be more exploration of larger models.
We see that pre-trained LMs provide better empirical results on current benchmark tasks and enables an end-to-end approach. Therefore, one can theoretically work on CSW tasks without any linguistic understanding of the language, assuming the dataset for model finetuning is available. However, the downside is that there is little understanding of how and when the LMs would fail, thus we encourage more interpretability work on these LMs in CSW setting.
## 6 **Recent Challenges And Future Direction** 6.1 More Diverse Exploration On Code-Switching Styles And Languages
A handful of languages, such as Spanish-English, Hindi-English, or Chinese-English, dominate research and resource CSW. However, there are still many countries and cultures rich in the use of CSW, which is still under-represented in NLP research (Joshi et al., 2020; Aji et al., 2022; Yong et al., 2023), especially on different CSW variations. CSW style can vary in different regions of the world, and it would be interesting to gather more datasets on unexplored and unknown styles, which can be useful for further research and investigation on linguistics and NLP. Therefore, one future direction is to broaden the language scope of CSW research.
## 6.2 Datasets: Access And Sources
According to our findings, there are more than 60%
of the datasets are private (see Table 4), and they are not released to the public. This eventually hampers the progress of CSW research, particularly in the results' reproducibility, credibility, and transparency.
Moreover, many studies in the literature do not release the code to reproduce their work. Therefore, we encourage researchers who build a new corpus to release the datasets publicly. In addition, the fact that some researchers provide urls to download the data is also problematic due to the data attrition issue we raised earlier. Data attrition is bad for reproducibility, but it is also a waste of annotation efforts. Perhaps we should work on identifying alternative means to collect written CSW data in an ecologically valid manner.
## 6.3 Model Scaling
To the best of our knowledge, little work has been done on investigating how well the scaling law holds for code-mixed datasets. Winata et al.
(2021a) demonstrate that the XLM-R-large model outperforms smaller pre-trained models on the NER and POS tasks in LinCE benchmark (Aguilar et al., 2020); however, the largest model in the study, which is the XLM-R-large model, only has 355 million parameters. Furthermore, they find that smaller models that combine word, subword, and character embeddings achieve comparable performance as mBERT while being faster in inference. Given the recent release of billion-sized large pre-trained multilingual models such as XGLM and BLOOM (Scao et al., 2022), we urge future research to study the scaling law and performancecompute trade-off in code-mixing tasks.
## 6.4 Zero-Shot And Few-Shot Exploration
The majority of pre-trained model approaches fine-tune their models to the downstream task.
On the other hand, CSW data is considerably limited. With the rise of multilingual LMs, especially those that have been fine-tuned with prompt/instruction (Muennighoff et al., 2022; Ouyang et al., 2022; Winata et al., 2022), one direction is to see whether these LMs can handle CSW input in a zero-shot fashion. This work might also tie in with model scaling since larger models have shown better capability at zero-shot and fewshot settings (Winata et al., 2021b; Srivastava et al.,
2022).
## 6.5 Robustness Evaluation
Since CSW is a widely common linguistic phenomenon, we argue that cross-lingual NLP benchmarks, such as XGLUE (Liang et al., 2020) and XTREME-R (Ruder et al., 2021), should incorporate linguistic CSW evaluation (Aguilar et al., 2020; Khanuja et al., 2020b). The reasons are that CSW
is a cognitive ability that multilingual human speakers can perform with ease (Beatty-Martínez et al.,
2020). CSW evaluation examines the robustness of multilingual LMs in learning cross-lingual alignment of representations (Conneau et al., 2020; Libovicky et al. ` , 2020; Pires et al., 2019; Adilazuarda et al., 2022). On the other hand, catastrophic forgetting is observed in pre-trained models (Shah et al., 2020), and human speakers (Hirvonen and Lauttamus, 2000; Du Bois, 2009, known as language attrition) in a CSW environment. We argue that finetuning LMs on code-mixed data is a form of *continual learning* to produce a more generalized multilingual LM. Thus, we encourage CSW
research to report the performance of finetuned models on both CSW and monolingual texts.
## 6.6 Task Diversity
We encourage creating reasoning-based tasks for CSW texts for two reasons. First, code-mixed datasets for tasks such as NLI, coreference resolution, and question-answering are much fewer in comparison to tasks such as sentiment analysis, parts-of-speech tagging, and named-entity recognition. Second, comprehension tasks with the CSW text present more processing costs for human readers (Bosma and Pablos, 2020).
## 6.7 Conversational Agents
There has been a recent focus on developing conversational agents with LMs such as ChatGPT,5 Whisper (Radford et al., 2022), SLAM (Bapna et al.,
2021), mSLAM (Bapna et al., 2022). We recommend incorporating the capability of synthesizing code-mixed data in human-machine dialogue, as CS is a prevalent communication style among multilingual speakers (Ahn et al., 2020), and humans prefer chatbots with such capability (Bawa et al.,
2020).
## 6.8 Automatic Evaluation For Generation
With the rise of pre-trained models, generative tasks gained more popularity. However, when generating CSW data, most work used human evaluation for measuring quality of the generated data.
Alternative automatic methods for CS text based on word-frequency and temporal distribution are commonly used (Guzmán et al., 2017; Mave et al.,
2018), but we believe there is still much room for improvement in this respect. One possible future direction is to align the evaluation metrics to human judgement of quality (Hamed et al., 2022) where we can assess separately the "faithfulness" of the resulting CSW data from other desired properties of language generation. Other nuances here are related to the intricacy of CSW patterns, where ideally the model would mimic the CSW style of the intended users.
## 7 Conclusion
We present a comprehensive systematic survey on code-switching research in natural language processing to explore the progress of the past decades and understand the existing challenges and tasks in the literature. We summarize the trends and findings and conclude with a discussion for future direction and open questions for further investigation.
We hope this survey can encourage and lead NLP
researchers in a better direction on code-switching research.
## Limitations
The numbers in this survey are limited to papers published in the ACL Anthology and ISCA Proceedings. However, we also included papers as related work from other resources if they are publicly available and accessible. In addition, the category in the survey does not include the code-switching type (i.e., intra-sentential, inter-sentential, etc.)
since some papers do not provide such information.
## Ethics Statement
We use publicly available data in our survey with permissive licenses. No potential ethical issues in this work.
## Acknowledgements
Thanks to Igor Malioutov for the insightful discussion on the paper.
## References
Muhammad Abdul-Mageed, Chiyu Zhang, AbdelRahim Elmadany, and Lyle Ungar. 2020. Toward microdialect identification in diaglossic and code-switched environments. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5855–5876.
Heike Adel, Katrin Kirchhoff, Dominic Telaar, Ngoc Thang Vu, Tim Schlippe, and Tanja Schultz.
2014a. Features for factored language models for code-switching speech. In *Proc. 4th Workshop on* Spoken Language Technologies for Under-Resourced Languages (SLTU 2014), pages 32–38.
Heike Adel, Katrin Kirchhoff, Ngoc Thang Vu, Dominic Telaar, and Tanja Schultz. 2014b. Comparing approaches to convert recurrent neural networks into backoff language models for efficient decoding. In Proc. Interspeech 2014, pages 651–655.
Heike Adel, Dominic Telaar, Ngoc Thang Vu, Katrin Kirchhoff, and Tanja Schultz. 2014c. Combining recurrent neural networks and factored language models during decoding of code-switching speech. In Fifteenth Annual Conference of the International Speech Communication Association.
Heike Adel, Ngoc Thang Vu, and Tanja Schultz. 2013.
Combination of recurrent neural networks and factored language models for code-switching language modeling. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics
(Volume 2: Short Papers), pages 206–211.
Muhammad Farid Adilazuarda, Samuel Cahyawijaya, Genta Indra Winata, Pascale Fung, and Ayu Purwarianti. 2022. Indorobusta: Towards robustness against diverse code-mixed indonesian local languages. In Proceedings of the First Workshop on Scaling Up Multilingual Evaluation, pages 25–34.
Wafia Adouane and Jean-Philippe Bernardy. 2020.
When is multi-task learning beneficial for lowresource noisy code-switched user-generated algerian texts? In Proceedings of the The 4th Workshop on Computational Approaches to Code Switching, pages 17–25.
Wafia Adouane, Jean-Philippe Bernardy, and Simon Dobnik. 2018. Improving neural network performance by injecting background knowledge: Detecting code-switching and borrowing in algerian texts.
In *Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching*,
pages 20–28.
Wafia Adouane, Samia Touileb, and Jean-Philippe Bernardy. 2020. Identifying sentiments in algerian code-switched user-generated comments. In *Proceedings of the Twelfth Language Resources and* Evaluation Conference, pages 2698–2705.
Laksh Advani, Clement Lu, and Suraj Maharjan. 2020.
C1 at semeval-2020 task 9: Sentimix: Sentiment analysis for code-mixed social media text using feature engineering. In *Proceedings of the Fourteenth Workshop on Semantic Evaluation*, pages 1227–1232.
Kaustubh Agarwal and Rhythm Narula. 2021. Humor generation and detection in code-mixed hindi-english.
In Proceedings of the Student Research Workshop Associated with RANLP 2021, pages 1–6.
Vibhav Agarwal, Pooja Rao, and Dinesh Babu Jayagopi.
2021. Towards code-mixed hinglish dialogue generation. In *Proceedings of the 3rd Workshop on Natural* Language Processing for Conversational AI, pages 271–280.
Akshita Aggarwal, Anshul Wadhawan, Anshima Chaudhary, and Kavita Maurya. 2020. "did you really mean what you said?": Sarcasm detection in hindi-english code-mixed data using bilingual word embeddings.
In *Proceedings of the Sixth Workshop on Noisy Usergenerated Text (W-NUT 2020)*, pages 7–15.
Gustavo Aguilar, Fahad AlGhamdi, Victor Soto, Mona Diab, Julia Hirschberg, and Thamar Solorio. 2018.
Named entity recognition on code-switched data:
Overview of the calcs 2018 shared task. In Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching, pages 138–
147.
Gustavo Aguilar, Sudipta Kar, and Thamar Solorio.
2020. Lince: A centralized benchmark for linguistic code-switching evaluation. In *Proceedings of the* Twelfth Language Resources and Evaluation Conference, pages 1803–1813.
Gustavo Aguilar, Bryan McCann, Tong Niu, Nazneen Rajani, Nitish Shirish Keskar, and Thamar Solorio.
2021. Char2subword: Extending the subword embedding space using robust character compositionality. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 1640–1651.
Gustavo Aguilar and Thamar Solorio. 2020. From english to code-switching: Transfer learning with strong morphological clues. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8033–8044.
Maia Aguirre, Laura García-Sardiña, Manex Serras, Ariane Méndez, and Jacobo López. 2022. Basco: An annotated basque-spanish code-switching corpus for natural language understanding. In *Proceedings of* the Thirteenth Language Resources and Evaluation Conference, pages 3158–3163.
Emily Ahn, Cecilia Jimenez, Yulia Tsvetkov, and Alan W Black. 2020. What code-switching strategies are effective in dialog systems? In Proceedings of the Society for Computation in Linguistics 2020, pages 254–264.
Alham Aji, Genta Indra Winata, Fajri Koto, Samuel Cahyawijaya, Ade Romadhony, Rahmad Mahendra, Kemal Kurniawan, David Moeljadi, Radityo Eko Prasojo, Timothy Baldwin, et al. 2022. One country, 700+ languages: Nlp challenges for underrepresented languages and dialects in indonesia. In *Proceedings* of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 7226–7249.
Mohamed Al-Badrashiny and Mona Diab. 2016. The george washington university system for the codeswitching workshop shared task 2016. In *Proceedings of The Second Workshop on Computational Approaches to Code Switching*, pages 108–111.
Fahad AlGhamdi, Giovanni Molina, Mona Diab, Thamar Solorio, Abdelati Hawwari, Victor Soto, and Julia Hirschberg. 2016. Part of speech tagging for code switched data. In Proceedings of the Second Workshop on Computational Approaches to Code Switching, pages 98–107.
Ahmed Ali, Shammur Absar Chowdhury, Amir Hussein, and Yasser Hifny. 2021. Arabic code-switching speech recognition using monolingual data. In *Proc.*
Interspeech 2021, pages 3475–3479.
Djegdjiga Amazouz, Martine Adda-Decker, and Lori Lamel. 2017. Addressing code-switching in french/algerian arabic speech. In Proc. Interspeech 2017, pages 62–66.
Saadullah Amin, Noon Pokaratsiri Goldstein, Morgan Wixted, Alejandro Garcia-Rudolph, Catalina Martínez-Costa, and Günter Neumann. 2022. Fewshot cross-lingual transfer for coarse-grained deidentification of code-mixed clinical texts. In *Proceedings of the 21st Workshop on Biomedical Language Processing*, pages 200–211.
Judith Jeyafreeda Andrew. 2021. Judithjeyafreedaandrew@ dravidianlangtech-eacl2021: offensive language detection for dravidian code-mixed youtube comments. In *Proceedings of the First Workshop on* Speech and Language Technologies for Dravidian Languages, pages 169–174.
Jason Angel, Segun Taofeek Aroyehun, Antonio Tamayo, and Alexander Gelbukh. 2020. Nlp-cic at semeval-2020 task 9: Analysing sentiment in codeswitching language using a simple deep-learning classifier. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 957–962.
Ansen Antony, Sumanth Reddy Kota, Akhilesh Lade, Spoorthy V, and Shashidhar G. Koolagudi. 2022.
An improved transformer transducer architecture for hindi-english code switched speech recognition. In Proc. Interspeech 2022, pages 3123–3127.
Lavinia Aparaschivei, Andrei Palihovici, and Daniela Gîfu. 2020. Fii-uaic at semeval-2020 task 9: Sentiment analysis for code-mixed social media text using cnn. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 928–933.
Ramakrishna Appicharla, Kamal Kumar Gupta, Asif Ekbal, and Pushpak Bhattacharyya. 2021. Iitp-mt at calcs2021: English to hinglish neural machine translation using unsupervised synthetic code-mixed parallel corpus. In *Proceedings of the Fifth Workshop* on Computational Approaches to Linguistic CodeSwitching, pages 31–35.
Dian Arianto and Indra Budi. 2020. Aspect-based sentiment analysis on indonesia's tourism destinations based on google maps user code-mixed reviews
(study case: Borobudur and prambanan temples). In Proceedings of the 34th Pacific Asia Conference on Language, Information and Computation, pages 359–
367.
Kavita Asnani and Jyoti Pawar. 2016. Use of semantic knowledge base for enhancement of coherence of code-mixed topic-based aspect clusters. In Proceedings of the 13th International Conference on Natural Language Processing, pages 259–266.
Mohammed Attia, Younes Samih, and Wolfgang Maier.
2018. Ghht at calcs 2018: Named entity recognition for dialectal arabic using neural networks. In Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching, pages 98–
102.
Leonardo Badino, Claudia Barolo, and Silvia Quazza.
2004. A general approach to tts reading of mixedlanguage texts. In *Proc. Interspeech 2004*, pages 849–852.
Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations.
Advances in Neural Information Processing Systems, 33:12449–12460.
Mohamed Balabel, Injy Hamed, Slim Abdennadher, Ngoc Thang Vu, and Özlem Çetinoglu. 2020. Cairo ˘ student code-switch (cscs) corpus: An annotated egyptian arabic-english corpus. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 3973–3977.
Kalika Bali, Jatin Sharma, Monojit Choudhury, and Yogarshi Vyas. 2014. "i am borrowing ya mixing?" an analysis of english-hindi code mixing in facebook. In Proceedings of the first workshop on computational approaches to code switching, pages 116–126.
Kelsey Ball and Dan Garrette. 2018. Part-of-speech tagging for code-switched, transliterated texts without explicit language identification. In *Association for* Computational Linguistics.
Fazlourrahman Balouchzahi, BK Aparna, and HL Shashirekha. 2021. Mucs@ dravidianlangtecheacl2021: Cooli-code-mixing offensive language identification. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages, pages 323–329.
Fazlourrahman Balouchzahi and HL Shashirekha. 2021.
La-saco: A study of learning approaches for sentiments analysis incode-mixing texts. In *Proceedings* of the First Workshop on Speech and Language Technologies for Dravidian Languages, pages 109–118.
Somnath Banerjee, Sahar Ghannay, Sophie Rosset, Anne Vilnat, and Paolo Rosso. 2020. Limsi_upv at semeval-2020 task 9: Recurrent convolutional neural network for code-mixed sentiment analysis. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1281–1287.
Suman Banerjee, Nikita Moghe, Siddhartha Arora, and Mitesh M Khapra. 2018. A dataset for building codemixed goal oriented conversation systems. In *Proceedings of the 27th International Conference on* Computational Linguistics, pages 3766–3780.
Neetika Bansal, Vishal Goyal, and Simpel Rani. 2020a.
Language identification and normalization of code mixed english and punjabi text. In Proceedings of the 17th International Conference on Natural Language Processing (ICON): System Demonstrations, pages 30–31.
Shubham Bansal, Arijit Mukherjee, Sandeepkumar Satpal, and Rupeshkumar Mehta. 2020b. On improving code mixed speech synthesis with mixlingual grapheme-to-phoneme model. In *Proc. Interspeech* 2020, pages 2957–2961.
Srijan Bansal, Vishal Garimella, Ayush Suhane, Jasabanta Patro, and Animesh Mukherjee. 2020c. Codeswitching patterns can be an effective route to improve performance of downstream nlp applications:
A case study of humour, sarcasm and hate speech detection. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 1018–1023.
Ankur Bapna, Colin Cherry, Yu Zhang, Ye Jia, Melvin Johnson, Yong Cheng, Simran Khanuja, Jason Riesa, and Alexis Conneau. 2022. mslam: Massively multilingual joint pre-training for speech and text. arXiv preprint arXiv:2202.01374.
Ankur Bapna, Yu-an Chung, Nan Wu, Anmol Gulati, Ye Jia, Jonathan H Clark, Melvin Johnson, Jason Riesa, Alexis Conneau, and Yu Zhang. 2021. Slam:
A unified encoder for speech and language modeling via speech-text joint pre-training. arXiv preprint arXiv:2110.10329.
Kfir Bar and Nachum Dershowitz. 2014. The tel aviv university system for the code-switching workshop shared task. In Proceedings of the first workshop on computational approaches to code switching, pages 139–143.
Utsab Barman, Amitava Das, Joachim Wagner, and Jennifer Foster. 2014a. Code mixing: A challenge for language identification in the language of social media. In *Proceedings of the first workshop on computational approaches to code switching*, pages 13–23.
Utsab Barman, Joachim Wagner, Grzegorz Chrupała, and Jennifer Foster. 2014b. Dcu-uvt: Word-level language classification with code-mixed data. In Proceedings of the first workshop on computational approaches to code switching, pages 127–132.
Utsab Barman, Joachim Wagner, and Jennifer Foster.
2016. Part-of-speech tagging of code-mixed social media content: Pipeline, stacking and joint modelling.
In *Proceedings of the Second Workshop on Computational Approaches to Code Switching*, pages 30–39.
Subhra Jyoti Baroi, Nivedita Singh, Ringki Das, and Thoudam Doren Singh. 2020. Nits-hinglish-sentimix at semeval-2020 task 9: Sentiment analysis for codemixed social media text using an ensemble model. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1298–1303.
Arup Baruah, Kaushik Das, Ferdous Barbhuiya, and Kuntal Dey. 2020. Iiitg-adbu at semeval-2020 task 12: Comparison of bert and bilstm in detecting offensive language. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1562–1568.
Anshul Bawa, Monojit Choudhury, and Kalika Bali.
2018. Accommodation of conversational codechoice. In Proceedings of the Third Workshop on Computational Approaches to Linguistic CodeSwitching, pages 82–91.
Anshul Bawa, Pranav Khadpe, Pratik Joshi, Kalika Bali, and Monojit Choudhury. 2020. Do multilingual users prefer chat-bots that code-mix? let's nudge and find out! Proceedings of the ACM on Human-Computer Interaction, 4(CSCW1):1–23.
Anne L Beatty-Martínez, Christian A Navarro-Torres, and Paola E Dussias. 2020. Codeswitching: A bilingual toolkit for opportunistic speech planning. *Frontiers in Psychology*, page 1699.
Rafiya Begum, Kalika Bali, Monojit Choudhury, Koustav Rudra, and Niloy Ganguly. 2016. Functions of code-switching in tweets: An annotation framework and some initial experiments. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1644–
1650.
Hedi M Belazi, Edward J Rubin, and Almeida Jacqueline Toribio. 1994. Code switching and x-bar theory:
The functional head constraint. *Linguistic inquiry*,
pages 221–237.
B Bharathi et al. 2021. Ssncse_nlp@ dravidianlangtecheacl2021: Offensive language identification on multilingual code mixing text. In *Proceedings of the First* Workshop on Speech and Language Technologies for Dravidian Languages, pages 313–318.
Irshad Bhat, Riyaz Ahmad Bhat, Manish Shrivastava, and Dipti Misra Sharma. 2017. Joining hands: Exploiting monolingual treebanks for parsing of codemixing data. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 324–330.
Shishir Bhattacharja. 2010. Benglish verbs: A case of code-mixing in bengali. In Proceedings of the 24th Pacific Asia Conference on Language, Information and Computation, pages 75–84.
Shankar Biradar and Sunil Saumya. 2022. Iiitdwd@
tamilnlp-acl2022: Transformer-based approach to classify abusive content in dravidian code-mixed text.
In Proceedings of the second workshop on speech and language technologies for Dravidian languages, pages 100–104.
Astik Biswas, Febe De Wet, Thomas Niesler, et al. 2020.
Semi-supervised acoustic and language model training for english-isizulu code-switched speech recognition. In *Proceedings of the The 4th Workshop on* Computational Approaches to Code Switching, pages 52–56.
Astik Biswas, Febe de Wet, Ewald van der Westhuizen, Emre Yilmaz, and Thomas Niesler. 2018a. Multilingual neural network acoustic modelling for asr
of under-resourced english-isizulu code-switched speech. In *INTERSPEECH*, pages 2603–2607.
Astik Biswas, Ewald van der Westhuizen, Thomas Niesler, and Febe de Wet. 2018b. Improving asr for code-switched speech in under-resourced languages using out-of-domain data. In *SLTU*, pages 122–126.
Astik Biswas, Emre Yılmaz, Febe de Wet, Ewald van der Westhuizen, and Thomas Niesler. 2019.
Semi-supervised acoustic model training for fivelingual code-switched asr. *Proc. Interspeech 2019*,
pages 3745–3749.
Evelyn Bosma and Leticia Pablos. 2020. Switching direction modulates the engagement of cognitive control in bilingual reading comprehension: An erp study.
Journal of Neurolinguistics, 55:100894.
Anouck Braggaar and Rob van der Goot. 2021. Challenges in annotating and parsing spoken, codeswitched, frisian-dutch data. In *Proceedings of the* Second Workshop on Domain Adaptation for NLP,
pages 50–58.
Barbara Bullock, Wally Guzmán, Jacqueline Serigos, Vivek Sharath, and Almeida Jacqueline Toribio.
2018a. Predicting the presence of a matrix language in code-switching. In *Proceedings of the third workshop on computational approaches to linguistic codeswitching*, pages 68–75.
Barbara E. Bullock, Gualberto Guzmán, Jacqueline Serigos, and Almeida Jacqueline Toribio. 2018b. Should code-switching models be asymmetric? In *Proc.*
Interspeech 2018, pages 2534–2538.
Jesús Calvillo, Le Fang, Jeremy Cole, and David Reitter.
2020. Surprisal predicts code-switching in chineseenglish bilingual text. In *Proceedings of the 2020* Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4029–4039.
Marguerite Cameron. 2020. Voice onset time en codeswitching anglais-français: une étude des occlusives sourdes en début de mot (voice onset time in englishfrench code-switching: a study of word-initial voiceless stop consonants). In *Actes de la 6e conférence* conjointe Journées d'Études sur la Parole (JEP, 33e édition), Traitement Automatique des Langues Naturelles (TALN, 27e édition), Rencontre des Étudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (RÉCITAL, 22e édition). Volume 1: Journées d'Études sur la Parole, pages 54–63.
Houwei Cao, P. C. Ching, and Tan Lee. 2009. Effects of language mixing for automatic recognition of cantonese-english code-mixing utterances. In *Proc.*
Interspeech 2009, pages 3011–3014.
Marine Carpuat. 2014. Mixed language and codeswitching in the canadian hansard. In *Proceedings of* the first workshop on computational approaches to code switching, pages 107–115.
Özlem Çetinoglu. 2016. A turkish-german code- ˘
switching corpus. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 4215–4220.
Özlem Çetinoglu and Ça ˘ grı Çöltekin. 2019. Chal- ˘
lenges of annotating a code-switching treebank. In Proceedings of the 18th international workshop on Treebanks and Linguistic Theories (TLT, SyntaxFest 2019), pages 82–90.
Özlem Çetinoglu, Sarah Schulz, and Ngoc Thang Vu. ˘
2016. Challenges of computational processing of code-switching. In *Proceedings of the Second Workshop on Computational Approaches to Code Switching*, pages 1–11.
Bharathi Raja Chakravarthi. 2020. Hopeedi: A multilingual hope speech detection dataset for equality, diversity, and inclusion. In Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Media, pages 41–53.
Bharathi Raja Chakravarthi, Navya Jose, Shardul Suryawanshi, Elizabeth Sherly, and John Philip McCrae. 2020a. A sentiment analysis dataset for codemixed malayalam-english. In *Proceedings of the 1st* Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL), pages 177–184.
Bharathi Raja Chakravarthi, Vigneshwaran Muralidaran, Ruba Priyadharshini, and John Philip McCrae. 2020b.
Corpus creation for sentiment analysis in code-mixed tamil-english text. In Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages
(CCURL), pages 202–210.
Sharanya Chakravarthy, Anjana Umapathy, and Alan W
Black. 2020. Detecting entailment in code-mixed hindi-english conversations. In Proceedings of the Sixth Workshop on Noisy User-generated Text (WNUT 2020), pages 165–170.
Joyce YC Chan, Houwei Cao, PC Ching, and Tan Lee.
2009. Automatic recognition of cantonese-english code-mixing speech. In International Journal of Computational Linguistics & Chinese Language Processing, Volume 14, Number 3, September 2009.
Joyce YC Chan, PC Ching, and Tan Lee. 2005. Development of a cantonese-english code-mixing speech corpus. In Ninth European conference on speech communication and technology.
Joyce YC Chan, PC Ching, Tan Lee, and Houwei Cao.
2006. Automatic speech recognition of cantoneseenglish code-mixing utterances. In *Ninth International Conference on Spoken Language Processing*.
Joyce YC Chan, PC Ching, Tan Lee, and Helen M
Meng. 2004. Detection of language boundary in code-switching utterances by bi-phone probabilities. In 2004 International Symposium on Chinese Spoken Language Processing, pages 293–296. IEEE.
Arunavha Chanda, Dipankar Das, and Chandan Mazumdar. 2016a. Columbia-jadavpur submission for emnlp 2016 code-switching workshop shared task: System description. In *Proceedings of the Second* Workshop on Computational Approaches to Code Switching, pages 112–115.
Arunavha Chanda, Dipankar Das, and Chandan Mazumdar. 2016b. Unraveling the english-bengali codemixing phenomenon. In *Proceedings of the second workshop on computational approaches to code* switching, pages 80–89.
Khyathi Chandu, Ekaterina Loginova, Vishal Gupta, Josef van Genabith, Günter Neumann, Manoj Chinnakotla, Eric Nyberg, and Alan W Black. 2019.
Code-mixed question answering challenge: Crowdsourcing data and techniques. In Third Workshop on Computational Approaches to Linguistic CodeSwitching, pages 29–38. Association for Computational Linguistics (ACL).
Khyathi Chandu, Thomas Manzini, Sumeet Singh, and Alan W Black. 2018. Language informed modeling of code-switched text. In *Proceedings of the Third* Workshop on Computational Approaches to Linguistic Code-Switching, pages 92–97.
Khyathi Raghavi Chandu and Alan W. Black. 2020.
Style variation as a vantage point for code-switching. In *Proc. Interspeech 2020*, pages 4761–4765.
Khyathi Raghavi Chandu, SaiKrishna Rallabandi, Sunayana Sitaram, and Alan W. Black. 2017. Speech synthesis for mixed-language navigation instructions. In *Proc. Interspeech 2017*, pages 57–61.
Ching-Ting Chang, Shun-Po Chuang, and Hung-Yi Lee.
2019. Code-switching sentence generation by generative adversarial networks and its application to data augmentation. *Proc. Interspeech 2019*, pages 554–558.
Arindam Chatterjere, Vineeth Guptha, Parul Chopra, and Amitava Das. 2020. Minority positive sampling for switching points-an anecdote for the codemixing language modeling. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 6228–6236.
Sik Feng Cheong, Hai Leong Chieu, and Jing Lim.
2021. Intrinsic evaluation of language models for code-switching. In *Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)*,
pages 81–86.
Dhivya Chinnappa. 2021. dhivya-hope-detection@ ltedi-eacl2021: multilingual hope speech detection for code-mixed and transliterated texts. In Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion, pages 73–78.
Gokul Chittaranjan, Yogarshi Vyas, Kalika Bali, and Monojit Choudhury. 2014. Word-level language identification using crf: Code-switching shared task report of msr india system. In Proceedings of The First Workshop on Computational Approaches to Code Switching, pages 73–79.
Won Ik Cho, Seok Min Kim, and Nam Soo Kim.
2020. Towards an efficient code-mixed graphemeto-phoneme conversion in an agglutinative language:
A case study on to-korean transliteration. In *Proceedings of the The 4th Workshop on Computational* Approaches to Code Switching, pages 65–70.
Parul Chopra, Sai Krishna Rallabandi, Alan W Black, and Khyathi Raghavi Chandu. 2021. Switch point biased self-training: Re-purposing pretrained models for code-switching. In *Findings of the Association* for Computational Linguistics: EMNLP 2021, pages 4389–4397.
Monojit Choudhury, Kalika Bali, Sunayana Sitaram, and Ashutosh Baheti. 2017. Curriculum design for code-switching: Experiments with language identification and language modeling with deep neural networks. In *Proceedings of the 14th International* Conference on Natural Language Processing (ICON2017), pages 65–74.
Shammur Absar Chowdhury, Amir Hussein, Ahmed Abdelali, and Ahmed Ali. 2021. Towards one model to rule all: Multilingual strategy for dialectal codeswitching arabic asr. In *Proc. Interspeech 2021*,
pages 2466–2470.
Chyng-Leei Chu, Dau-cheng Lyu, and Ren-yuan Lyu.
2007. Language identification on code-switching speech. In *Proceedings of ROCLING*.
Daniel Claeser, Samantha Kent, and Dennis Felske.
2018. Multilingual named entity recognition on spanish-english code-switched tweets using support vector machines. In *Proceedings of the Third Workshop on Computational Approaches to Linguistic* Code-Switching, pages 132–137.
Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2017.
Word translation without parallel data. arXiv preprint arXiv:1710.04087.
Alexis Conneau, Shijie Wu, Haoran Li, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Emerging crosslingual structure in pretrained language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6022–
6034.
Amitava Das and Björn Gambäck. 2014. Identifying languages at the word level in code-mixed indian social media text. In *Proceedings of the 11th International Conference on Natural Language Processing*,
pages 378–387.
Bhargav Dave, Shripad Bhat, and Prasenjit Majumder.
2021. Irnlp_daiict@ dravidianlangtech-eacl2021:
offensive language identification in dravidian languages using tf-idf char n-grams and muril. In *Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages*, pages 266–269.
Frances Adriana Laureano De Leon, Florimond Guéniat, and Harish Tayyar Madabushi. 2020. Cs-embed at semeval-2020 task 9: The effectiveness of codeswitched word embeddings for sentiment analysis. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 922–927.
Anik Dey and Pascale Fung. 2014. A hindi-english code-switching corpus. In *LREC*, pages 2410–2413.
Mona Diab, Mahmoud Ghoneim, Abdelati Hawwari, Fahad AlGhamdi, Nada Almarwani, and Mohamed AlBadrashiny. 2016. Creating a large multi-layered representational repository of linguistic code switched arabic data. In Proceedings of the Tenth International Conference on Language Resources and Evaluation
(LREC'16), pages 4228–4235.
Mona Diab and Ankit Kamboj. 2011. Feasibility of leveraging crowd sourcing for the creation of a large scale annotated resource for hindi english code switched data: A pilot annotation. In *Proceedings* of the 9th Workshop on Asian Language Resources, pages 36–40.
Anuj Diwan, Rakesh Vaideeswaran, Sanket Shah, Ankita Singh, Srinivasa Raghavan, Shreya Khare, Vinit Unni, Saurabh Vyas, Akash Rajpuria, Chiranjeevi Yarra, Ashish Mittal, Prasanta Kumar Ghosh, Preethi Jyothi, Kalika Bali, Vivek Seshadri, Sunayana Sitaram, Samarth Bharadwaj, Jai Nanavati, Raoul Nanavati, and Karthik Sankaranarayanan. 2021. Mucs 2021: Multilingual and code-switching asr challenges for low resource indian languages. In Proc. Interspeech 2021, pages 2446–2450.
Amazouz Djegdjiga, Martine Adda-Decker, and Lori Lamel. 2018. The french-algerian code-switching triggered audio corpus (facst). In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
A Seza Dogruöz, Sunayana Sitaram, Barbara Bullock, ˘
and Almeida Jacqueline Toribio. 2021. A survey of code-switching: Linguistic and social perspectives for language technologies. In *Proceedings of the 59th* Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1:
Long Papers), pages 1654–1666.
Manoel Verissimo dos Santos Neto, Ayrton Amaral, Nádia Silva, and Anderson da Silva Soares. 2020. Deep learning brasil-nlp at semeval-2020 task 9: sentiment analysis of code-mixed tweets using ensemble of language models. In *Proceedings of the Fourteenth* Workshop on Semantic Evaluation, pages 1233–1238.
Suman Dowlagar and Radhika Mamidi. 2021a. Gated convolutional sequence to sequence based learning for english-hingilsh code-switched machine translation. In *Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code-Switching*,
pages 26–30.
Suman Dowlagar and Radhika Mamidi. 2021b. Graph convolutional networks with multi-headed attention for code-mixed sentiment analysis. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages, pages 65–72.
Suman Dowlagar and Radhika Mamidi. 2021c. Offlangone@ dravidianlangtech-eacl2021: Transformers with the class balanced loss for offensive language identification in dravidian code-mixed text. In *Proceedings of the first workshop on speech and language technologies for dravidian languages*, pages 154–159.
Suman Dowlagar and Radhika Mamidi. 2022. Cmnerone at semeval-2022 task 11: Code-mixed named entity recognition by leveraging multilingual data. In Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022), pages 1556–
1561.
Inke Du Bois. 2009. Language attrition and codeswitching among us americans in germany. *Stellenbosch papers in linguistics PLUS*, 39:1–16.
Long Duong, Hadi Afshar, Dominique Estival, Glen Pink, Philip R Cohen, and Mark Johnson. 2017. Multilingual semantic parsing and code-switching. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017),
pages 379–389.
Aparna Dutta. 2022. Word-level language identification using subword embeddings for code-mixed banglaenglish social media data. In *Proceedings of the* Workshop on Dataset Creation for Lower-Resourced Languages within the 13th Language Resources and Evaluation Conference, pages 76–82.
Mahmoud El-Haj, Paul Rayson, and Mariam Aboelezz.
2018. Arabic dialect identification in the context of bivalency and code-switching. In Proceedings of the 11th International Conference on Language Resources and Evaluation, Miyazaki, Japan., pages 3622–3627. European Language Resources Association.
Abdellah El Mekki, Abdelkader El Mahdaouy, Mohammed Akallouch, Ismail Berrada, and Ahmed Khoumsi. 2022. Um6p-cs at semeval-2022 task 11: Enhancing multilingual and code-mixed complex named entity recognition via pseudo labels using multilingual transformer. In Proceedings of the 16th International Workshop on Semantic Evaluation
(SemEval-2022), pages 1511–1517.
Heba Elfardy, Mohamed Al-Badrashiny, and Mona Diab.
2014. Aida: Identifying code switching in informal arabic text. In *Proceedings of The First Workshop on* Computational Approaches to Code Switching, pages 94–101.
Heba Elfardy and Mona Diab. 2012. Token level identification of linguistic code switching. In *Proceedings* of COLING 2012: posters, pages 287–296.
AbdelRahim Elmadany, Muhammad Abdul-Mageed, et al. 2021. Investigating code-mixed modern standard arabic-egyptian to english machine translation.
In *Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code-Switching*,
pages 56–64.
Ramy Eskander, Mohamed Al-Badrashiny, Nizar Habash, and Owen Rambow. 2014. Foreign words and the automatic processing of arabic social media text written in roman script. In Proceedings of The First Workshop on Computational Approaches to Code Switching, pages 1–12.
Guokang Fu and Liqin Shen. 2000. Model distance and it's application on mixed language speech recognition system. *ISCSLP'2000*.
Ruibo Fu, Jianhua Tao, Zhengqi Wen, Jiangyan Yi, Chunyu Qiang, and Tao Wang. 2020. Dynamic soft windowing and language dependent style token for code-switching end-to-end speech synthesis. In Proc.
Interspeech 2020, pages 2937–2941.
Pascale Fung, Xiaohu Liu, and Chi-Shun Cheung. 1999.
Mixed language query disambiguation. In *Proceedings of the 37th Annual Meeting of the Association* for Computational Linguistics, pages 333–340.
Björn Gambäck and Amitava Das. 2016. Comparing the level of code-switching in corpora. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1850–
1855.
Sreeram Ganji and Rohit Sinha. 2018. A novel approach for effective recognition of the code-switched data on monolingual language model. In Proc. Interspeech 2018, pages 1953–1957.
Yingying Gao, Junlan Feng, Ying Liu, Leijing Hou, Xin Pan, and Yong Ma. 2019. Code-switching sentence generation by bert and generative adversarial networks. In *Proc. Interspeech 2019*, pages 3525–
3529.
Avishek Garain, Sainik Mahata, and Dipankar Das.
2020. Junlp at semeval-2020 task 9: Sentiment analysis of hindi-english code mixed data using grid search cross validation. In *Proceedings of the Fourteenth* Workshop on Semantic Evaluation, pages 1276–1280.
Ayush Garg, Sammed Kagi, Vivek Srivastava, and Mayank Singh. 2021. Mipe: A metric independent pipeline for effective code-mixed nlg evaluation. In Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems, pages 123–132.
Saurabh Garg, Tanmay Parekh, and Preethi Jyothi.
2018a. Code-switched language models using dual rnns and same-source pretraining. In *Proceedings* of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3078–3083.
Saurabh Garg, Tanmay Parekh, and Preethi Jyothi.
2018b. Dual language models for code switched speech recognition. In *Proc. Interspeech 2018*, pages 2598–2602.
Akash Kumar Gautam. 2022. Leveraging sub label dependencies in code mixed indian languages for partof-speech tagging using conditional random fields.
In Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference, pages 13–17.
Devansh Gautam, Kshitij Gupta, and Manish Shrivastava. 2021a. Translate and classify: Improving sequence level classification for english-hindi codemixed data. In Proceedings of the Fifth Workshop on Computational Approaches to Linguistic CodeSwitching, pages 15–25.
Devansh Gautam, Prashant Kodali, Kshitij Gupta, Anmol Goel, Manish Shrivastava, and Ponnurangam Kumaraguru. 2021b. Comet: Towards code-mixed translation using parallel monolingual sentences. In Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code-Switching, pages 47–
55.
Parvathy Geetha, Khyathi Chandu, and Alan W Black.
2018. Tackling code-switched ner: Participation of cmu. In *Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching*,
pages 126–131.
Souvick Ghosh, Satanu Ghosh, and Dipankar Das. 2016.
Part-of-speech tagging of code-mixed social media text. In *Proceedings of the second workshop on computational approaches to code switching*, pages 90–
97.
Urmi Ghosh, Dipti Misra Sharma, and Simran Khanuja.
2019. Dependency parser for bengali-english codemixed data enhanced with a synthetic treebank. In Proceedings of the 18th International Workshop on Treebanks and Linguistic Theories (TLT, SyntaxFest 2019), pages 91–99.
Oluwapelumi Giwa and Marelie H. Davel. 2014. Language identification of individual words with joint sequence models. In *Proc. Interspeech 2014*, pages 1400–1404.
Hila Gonen and Yoav Goldberg. 2019. Language modeling for code-switching: Evaluation, integration of monolingual data, and discriminative training. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4175–4185.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2020. Generative adversarial networks. *Communications of the ACM*,
63(11):139–144.
Vinay Gopalan and Mark Hopkins. 2020. Reed at semeval-2020 task 9: Fine-tuning and bag-of-words approaches to code-mixed sentiment analysis. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1304–1309.
Koustava Goswami, Priya Rani, Bharathi Raja Chakravarthi, Theodorus Fransen, and John Philip McCrae. 2020. Uld@ nuig at semeval-2020 task 9: Generative morphemes with an attention model for sentiment analysis in code-mixed text. In *Proceedings of the Fourteenth Workshop on Semantic* Evaluation, pages 968–974.
Wen-Tao Gu, Tan Lee, and P. C. Ching. 2008. Prosodic variation in cantonese-english code-mixed speech. In Proc. International Symposium on Chinese Spoken Language Processing, pages 342–345.
Sunil Gundapu and Radhika Mamidi. 2018. Word level language identification in english telugu code mixed data. In *Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation*.
Sunil Gundapu and Radhika Mamidi. 2020. Gundapusunil at semeval-2020 task 9: Syntactic semantic lstm architecture for sentiment analysis of codemixed data. In *Proceedings of the Fourteenth Workshop on Semantic Evaluation*, pages 1247–1252.
Pengcheng Guo, Haihua Xu, Lei Xie, and Eng Siong Chng. 2018. Study of semi-supervised approaches to improving english-mandarin code-switching speech recognition. In *Proc. Interspeech 2018*, pages 1928–
1932.
Abhirut Gupta, Aditya Vavre, and Sunita Sarawagi.
2021a. Training data augmentation for code-mixed translation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5760–5766.
Akshat Gupta, Sargam Menghani, Sai Krishna Rallabandi, and Alan W Black. 2021b. Unsupervised self-training for sentiment analysis of code-switched data. In *Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code-Switching*,
pages 103–112.
Akshat Gupta, Sai Krishna Rallabandi, and Alan W
Black. 2021c. Task-specific pre-training and cross lingual transfer for sentiment analysis in dravidian code-switched languages. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages, pages 73–79.
Deepak Gupta, Asif Ekbal, and Pushpak Bhattacharyya.
2018a. A deep neural network based approach for entity extraction in code-mixed indian social media
text. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation
(LREC 2018).
Deepak Gupta, Asif Ekbal, and Pushpak Bhattacharyya.
2020. A semi-supervised approach to generate the code-mixed text using pre-trained encoder and transfer learning. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 2267–
2280.
Deepak Gupta, Ankit Lamba, Asif Ekbal, and Pushpak Bhattacharyya. 2016. Opinion mining in a codemixed environment: A case study with government portals. In *Proceedings of the 13th International* Conference on Natural Language Processing, pages 249–258.
Deepak Gupta, Pabitra Lenka, Asif Ekbal, and Pushpak Bhattacharyya. 2018b. Uncovering code-mixed challenges: A framework for linguistically driven question generation and neural based question answering. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 119–130.
Vishal Gupta, Manoj Chinnakotla, and Manish Shrivastava. 2018c. Transliteration better than translation?
answering code-mixed questions over a knowledge base. In *Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching*,
pages 39–50.
Gualberto A Guzmán, Joseph Ricard, Jacqueline Serigos, Barbara E Bullock, and Almeida Jacqueline Toribio. 2017. Metrics for modeling code-switching across corpora. In *INTERSPEECH*, pages 67–71.
Gualberto A Guzman, Jacqueline Serigos, Barbara Bullock, and Almeida Jacqueline Toribio. 2016. Simple tools for exploring variation in code-switching for linguists. In *Proceedings of the second workshop on* computational approaches to code switching, pages 12–20.
Gualberto Guzmán, Joseph Ricard, Jacqueline Serigos, Barbara E. Bullock, and Almeida Jacqueline Toribio.
2017. Metrics for modeling code-switching across corpora. In *Proc. Interspeech 2017*, pages 67–71.
Injy Hamed, Mohamed Elmahdy, and Slim Abdennadher. 2018. Collection and analysis of code-switch egyptian arabic-english speech corpus. In *Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)*.
Injy Hamed, Amir Hussein, Oumnia Chellah, Shammur Chowdhury, Hamdy Mubarak, Sunayana Sitaram, Nizar Habash, and Ahmed Ali. 2022. Benchmarking evaluation metrics for code-switching automatic speech recognition. arXiv preprint arXiv:2211.16319.
Injy Hamed, Ngoc Thang Vu, and Slim Abdennadher.
2020. Arzen: A speech corpus for code-switched egyptian arabic-english. In *Proceedings of the*
Twelfth Language Resources and Evaluation Conference, pages 4237–4246.
Silvana Hartmann, Monojit Choudhury, and Kalika Bali.
2018. An integrated representation of linguistic and social functions of code-switching. In *Proceedings of* the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
Ji He, Yao Qian, Frank K Soong, and Sheng Zhao. 2012.
Turning a monolingual speaker into multilingual for a mixed-language tts. In *Thirteenth Annual Conference of the International Speech Communication* Association.
Asha Hegde, Mudoor Devadas Anusha, Sharal Coelho, Hosahalli Lakshmaiah Shashirekha, and Bharathi Raja Chakravarthi. 2022. Corpus creation for sentiment analysis in code-mixed tulu text. In *Proceedings of the 1st Annual Meeting of the ELRA/ISCA*
Special Interest Group on Under-Resourced Languages, pages 33–40.
Megan Herrera, Ankit Aich, and Natalie Parde. 2022.
Tweettaglish: A dataset for investigating tagalogenglish code-switching. In *Proceedings of the Thirteenth Language Resources and Evaluation Conference*, pages 2090–2097.
Pekka Hirvonen and Timo Lauttamus. 2000. Codeswitching and language attrition: Evidence from american finnish interview speech. *SKY journal of* linguistics, 13:47–74.
Eftekhar Hossain, Omar Sharif, and Mohammed Moshiul Hoque. 2021. Nlp-cuet@
lt-edi-eacl2021: Multilingual code-mixed hope speech detection using cross-lingual representation learner. In Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion, pages 168–174.
Xinhui Hu, Qi Zhang, Lei Yang, Binbin Gu, and Xinkang Xu. 2020. Data augmentation for codeswitch language modeling by fusing multiple text generation methods. In *Proc. Interspeech 2020*,
pages 1062–1066.
Bo Huang and Yang Bai. 2021. hub at semeval-2021 task 7: Fusion of albert and word frequency information detecting and rating humor and offense. In Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), pages 1141–
1145.
Fei Huang and Alexander Yates. 2014. Improving word alignment using linguistic code switching data. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 1–9.
Dana-Maria Iliescu, Rasmus Grand, Sara Qirko, and Rob van der Goot. 2021. Much gracias: Semisupervised code-switch detection for spanish-english:
How far can we get? *NAACL 2021*, page 65.
David Imseng, Hervé Bourlard, and Mathew Magimai Doss. 2010. Towards mixed language speech recognition systems. In *Proc. Interspeech 2010*, pages 278–281.
Aaron Jaech, George Mulcaire, Mari Ostendorf, and Noah A Smith. 2016. A neural model for language identification in code-switched tweets. In *Proceedings of The Second Workshop on Computational Approaches to Code Switching*, pages 60–64.
Devanshu Jain, Maria Kustikova, Mayank Darbari, Rishabh Gupta, and Stephen Mayhew. 2018. Simple features for strong performance on named entity recognition in code-switched twitter data. In *Proceedings of the Third Workshop on Computational* Approaches to Linguistic Code-Switching, pages 103–
109.
Naman Jain and Riyaz Ahmad Bhat. 2014. Language identification in code-switching scenario. In Proceedings of the First Workshop on Computational Approaches to Code Switching, pages 87–93.
Anupam Jamatia, Björn Gambäck, and Amitava Das.
2015. Part-of-speech tagging for code-mixed englishhindi twitter and facebook chat messages. In *Proceedings of the international conference recent advances in natural language processing*, pages 239–
248.
Florian Janke, Tongrui Li, Eric Rincón, Gualberto A
Guzman, Barbara Bullock, and Almeida Jacqueline Toribio. 2018. The university of texas system submission for the code-switching workshop shared task 2018. In *Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching*,
pages 120–125.
Soroush Javdan, Behrouz Minaei-Bidgoli, et al. 2020.
Iust at semeval-2020 task 9: Sentiment analysis for code-mixed social media text using deep neural networks and linear baselines. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1270–1275.
Ganesh Jawahar, Muhammad Abdul-Mageed, VS Laks Lakshmanan, et al. 2021. Exploring text-to-text transformers for english to hinglish machine translation with synthetic code-mixing. In Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code-Switching, pages 36–46.
Sai Muralidhar Jayanthi, Kavya Nerella, Khyathi Raghavi Chandu, and Alan W Black. 2021.
Codemixednlp: An extensible and open nlp toolkit for code-mixing. In Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code-Switching, pages 113–118.
Harsh Jhamtani, Suleep Kumar Bhogi, and Vaskar Raychoudhury. 2014. Word-level language identification in bi-lingual code-switched texts. In *Proceedings of* the 28th Pacific Asia Conference on language, information and computing, pages 348–357.
Lars Johanson. 1999. The dynamics of code-copying in language encounters. *Language encounters across* time and space, 3762.
Navya Jose, Bharathi Raja Chakravarthi, Shardul Suryawanshi, Elizabeth Sherly, and John P McCrae. 2020. A survey of current datasets for codeswitching research. In 2020 6th international conference on advanced computing and communication systems (ICACCS), pages 136–141. IEEE.
Aditya Joshi, Ameya Prabhu, Manish Shrivastava, and Vasudeva Varma. 2016. Towards sub-word level compositions for sentiment analysis of hindi-english code mixed text. In *Proceedings of COLING 2016, the* 26th International Conference on Computational Linguistics: Technical Papers, pages 2482–2491.
Aravind Joshi. 1982. Processing of sentences with intrasentential code-switching. In *Coling 1982: Proceedings of the Ninth International Conference on Computational Linguistics*.
Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the nlp world. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 6282–6293.
Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, and Tomas Mikolov.
2016. Fasttext.zip: Compressing text classification models. *arXiv preprint arXiv:1612.03651*.
David Jurgens, Stefan Dimitrov, and Derek Ruths. 2014.
Twitter users\# codeswitch hashtags!\# moltoimportante\# wow. In *Proceedings of the First Workshop on* Computational Approaches to Code Switching, pages 51–61.
Laurent Kevers. 2022. Coswid, a code switching identification method suitable for under-resourced languages. In *Proceedings of the 1st Annual Meeting* of the ELRA/ISCA Special Interest Group on UnderResourced Languages, pages 112–121.
Humair Raj Khan, Deepak Gupta, and Asif Ekbal. 2021.
Towards developing a multilingual and code-mixed visual question answering system by knowledge distillation. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 1753–1767.
Ankush Khandelwal, Sahil Swami, Syed S Akhtar, and Manish Shrivastava. 2018. Humor detection in english-hindi code-mixed social media content:
Corpus and baseline system. In *Proceedings of the* Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
Simran Khanuja, Sandipan Dandapat, Sunayana Sitaram, and Monojit Choudhury. 2020a. A new dataset for natural language inference from codemixed conversations. In Proceedings of the The 4th Workshop on Computational Approaches to Code Switching, pages 9–16.
Simran Khanuja, Sandipan Dandapat, Anirudh Srinivasan, Sunayana Sitaram, and Monojit Choudhury.
2020b. Gluecos: An evaluation benchmark for codeswitched nlp. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3575–3585.
Yerbolat Khassanov, Haihua Xu, Van Tung Pham, Zhiping Zeng, Eng Siong Chng, Chongjia Ni, and Bin Ma. 2019. Constrained output embeddings for endto-end code-switching speech recognition with only monolingual data. In *Proc. Interspeech 2019*, pages 2160–2164.
Levi King, Eric Baucom, Timur Gilmanov, Sandra Kübler, Daniel Whyatt, Wolfgang Maier, and Paul Rodrigues. 2014. The iucl+ system: Word-level language identification via extended markov models. In Proceedings of the First Workshop on Computational Approaches to Code Switching, pages 102–106.
Ondˇrej Klejch, Electra Wallington, and Peter Bell. 2021.
The cstr system for multilingual and code-switching asr challenges for low resource indian languages. In Proc. Interspeech 2021, pages 2881–2885.
Kate M. Knill, Linlin Wang, Yu Wang, Xixin Wu, and Mark J.F. Gales. 2020. Non-native children's automatic speech recognition: The interspeech 2020 shared task alta systems. In *Proc. Interspeech 2020*,
pages 255–259.
Hiroaki Kojima and Kazuyo Tanaka. 2003. Mixedlingual spoken word recognition by using vq codebook sequences of variable length segments. In Eighth European Conference on Speech Communication and Technology.
Jun Kong, Jin Wang, and Xuejie Zhang. 2020. Hpccynu at semeval-2020 task 9: A bilingual vector gating mechanism for sentiment analysis of code-mixed text.
In *Proceedings of the Fourteenth Workshop on Semantic Evaluation*, pages 940–945.
Ayush Kumar, Harsh Agarwal, Keshav Bansal, and Ashutosh Modi. 2020. Baksa at semeval-2020 task 9: Bolstering cnn with self-attention for sentiment analysis of code mixed text. In *Proceedings of the* Fourteenth Workshop on Semantic Evaluation, pages 1221–1226.
Mari Ganesh Kumar, Jom Kuriakose, Anand Thyagachandran, Arun Kumar A, Ashish Seth, Lodagala V.S.V. Durga Prasad, Saish Jaiswal, Anusha Prakash, and Hema A. Murthy. 2021. Dual script e2e framework for multilingual and code-switching asr. In Proc. Interspeech 2021, pages 2441–2445.
Ritesh Kumar, Aishwarya N Reganti, Akshit Bhatia, and Tushar Maheshwari. 2018. Aggression-annotated corpus of hindi-english code-mixed data. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
Yash Kumar Lal, Vaibhav Kumar, Mrinal Dhar, Manish Shrivastava, and Philipp Koehn. 2019. De-mixing sentiment from code-mixed text. In Proceedings of the 57th annual meeting of the association for computational linguistics: student research workshop, pages 371–377.
Grandee Lee and Haizhou Li. 2020. Modeling codeswitch languages using bilingual parallel corpus. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 860–870.
Grandee Lee, Xianghu Yue, and Haizhou Li. 2019a.
Linguistically motivated parallel data augmentation for code-switch language modeling. In *Interspeech*,
pages 3730–3734.
Grandee Lee, Xianghu Yue, and Haizhou Li. 2019b.
Linguistically motivated parallel data augmentation for code-switch language modeling. In *Proc. Interspeech 2019*, pages 3730–3734.
Chengfei Li, Shuhao Deng, Yaoping Wang, Guangjing Wang, Yaguang Gong, Changbin Chen, and Jinfeng Bai. 2022. Talcs: An open-source mandarin-english code-switching corpus and a speech recognition baseline. In *Proc. Interspeech 2022*, pages 1741–1745.
Chia-Yu Li and Ngoc Thang Vu. 2020. Improving codeswitching language modeling with artificially generated texts using cycle-consistent adversarial networks.
In *Proc. Interspeech 2020*, pages 1057–1061.
Ying Li and Pascale Fung. 2012. Code-switch language model with inversion constraints for mixed language speech recognition. In *Proceedings of COLING 2012*,
pages 1671–1680.
Ying Li and Pascale Fung. 2013. Language modeling for mixed language speech recognition using weighted phrase extraction. In *Interspeech*, pages 2599–2603.
Ying Li and Pascale Fung. 2014. Language modeling with functional head constraint for code switching speech recognition. In *Proceedings of the 2014 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 907–916.
Ying Li, Yue Yu, and Pascale Fung. 2012. A mandarinenglish code-switching corpus. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 2515–
2519.
Zichao Li. 2021. Codewithzichao@ dravidianlangtecheacl2021: Exploring multimodal transformers for meme classification in tamil language. In *Proceedings of the First Workshop on Speech and Language* Technologies for Dravidian Languages, pages 352–
356.
Hui Liang, Yao Qian, and Frank K Soong. 2007. An hmm-based bilingual (mandarin-english) tts. *Proceedings of SSW6*.
Wei-Bin Liang, Chung-Hsien Wu, and Chun-Shan Hsu.
2013. Code-switching event detection based on deltabic using phonetic eigenvoice models. In *Proc. Interspeech 2013*, pages 1487–1491.
Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu, Fenfei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, et al. 2020. Xglue: A new benchmark datasetfor cross-lingual pre-training, understanding and generation. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6008–6018.
Jindˇrich Libovicky, Rudolf Rosa, and Alexander Fraser. `
2020. On the language neutrality of pre-trained multilingual representations. In *Findings of the Association for Computational Linguistics: EMNLP 2020*,
pages 1663–1674.
Chu-Cheng Lin, Waleed Ammar, Lori Levin, and Chris Dyer. 2014. The cmu submission for the shared task on language identification in code-switched data. In Proceedings of the First Workshop on Computational Approaches to Code Switching, pages 80–86.
Hou-An Lin and Chia-Ping Chen. 2021. Exploiting lowresource code-switching data to mandarin-english speech recognition systems. In Proceedings of the 33rd Conference on Computational Linguistics and Speech Processing (ROCLING 2021), pages 81–86.
Wei-Ting Lin and Berlin Chen. 2020. Exploring disparate language model combination strategies for mandarin-english code-switching asr. In *Proceedings of the 32nd Conference on Computational Linguistics and Speech Processing (ROCLING 2020)*,
pages 346–358.
Hexin Liu, Leibny Paola García Perera, Xinyi Zhang, Justin Dauwels, Andy W.H. Khong, Sanjeev Khudanpur, and Suzy J. Styles. 2021. End-to-end language diarization for bilingual code-switching speech. In Proc. Interspeech 2021, pages 1489–1493.
Jiaxiang Liu, Xuyi Chen, Shikun Feng, Shuohuan Wang, Xuan Ouyang, Yu Sun, Zhengjie Huang, and Weiyue Su. 2020. Kk2018 at semeval-2020 task 9: Adversarial training for code-mixing sentiment classification.
In *Proceedings of the Fourteenth Workshop on Semantic Evaluation*, pages 817–823.
Khaled Lounnas, Mourad Abbas, and Mohamed Lichouri. 2021. Towards phone number recognition for code switched algerian dialect. In Proceedings of The Fourth International Conference on Natural Language and Speech Processing (ICNLSP 2021),
pages 290–294.
Holy Lovenia, Samuel Cahyawijaya, Genta Winata, Peng Xu, Yan Xu, Zihan Liu, Rita Frieske, Tiezheng Yu, Wenliang Dai, Elham J Barezi, et al. 2022. Ascend: A spontaneous chinese-english dataset for code-switching in multi-turn conversation. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 7259–7268.
Holy Lovenia, Samuel Cahyawijaya, Genta Indra Winata, Peng Xu, Xu Yan, Zihan Liu, Rita Frieske, Tiezheng Yu, Wenliang Dai, Elham J Barezi, et al.
2021. Ascend: A spontaneous chinese-english dataset for code-switching in multi-turn conversation. arXiv preprint arXiv:2112.06223.
Yizhou Lu, Mingkun Huang, Hao Li, Jiaqi Guo, and Yanmin Qian. 2020. Bi-encoder transformer network for mandarin-english code-switching speech recognition using mixture of experts. In *Proc. Interspeech* 2020, pages 4766–4770.
Dau-Cheng Lyu and Ren-Yuan Lyu. 2008. Language identification on code-switching utterances using multiple cues. In *Proc. Interspeech 2008*, pages 711–
714.
Dau-Cheng Lyu, Tien-Ping Tan, Eng Siong Chng, and Haizhou Li. 2010a. Seame: a mandarin-english code-switching speech corpus in south-east asia. In Eleventh Annual Conference of the International Speech Communication Association.
Dau-Cheng Lyu, Tien-Ping Tan, Eng Siong Chng, and Haizhou Li. 2010b. Seame: a mandarin-english codeswitching speech corpus in south-east asia. In *Proc.*
Interspeech 2010, pages 1986–1989.
Tetyana Lyudovyk and Valeriy Pylypenko. 2014. Codeswitching speech recognition for closely related languages. In Proc. 4th Workshop on Spoken Language Technologies for Under-Resourced Languages (SLTU
2014), pages 188–193.
Yili Ma, Liang Zhao, and Jie Hao. 2020. Xlp at semeval2020 task 9: Cross-lingual models with focal loss for sentiment analysis of code-mixing language. In Proceedings of the fourteenth workshop on semantic evaluation, pages 975–980.
Koena Ronny Mabokela, Madimetja Jonas Manamela, and Mabu Manaileng. 2014. Modeling codeswitching speech on under-resourced languages for language identification. In *Spoken Language Technologies for Under-Resourced Languages*.
Manuel Mager, Özlem Çetinoglu, and Katharina Kann. ˘
2019. Subword-level language identification for intra-word code-switching. In *Proceedings of the* 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2005–2011.
Sainik Mahata, Dipankar Das, and Sivaji Bandyopadhyay. 2021. Sentiment classification of code-mixed tweets using bi-directional rnn and language tags. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages, pages 28–35.
Piyush Makhija, Ankit Srivastava, and Anuj Gupta.
2020. hinglishnorm-a corpus of hindi-english code mixed sentences for text normalization. In *Proceedings of the 28th International Conference on Computational Linguistics: Industry Track*, pages 136–145.
Shervin Malmasi, Anjie Fang, Besnik Fetahu, Sudipta Kar, and Oleg Rokhlenko. 2022a. Multiconer: A large-scale multilingual dataset for complex named entity recognition. In *Proceedings of the 29th International Conference on Computational Linguistics*,
pages 3798–3809.
Shervin Malmasi, Anjie Fang, Besnik Fetahu, Sudipta Kar, and Oleg Rokhlenko. 2022b. Semeval-2022 task 11: Multilingual complex named entity recognition
(multiconer). In *Proceedings of the 16th international workshop on semantic evaluation (SemEval2022)*, pages 1412–1437.
Aditya Malte, Pratik Bhavsar, and Sushant Rathi. 2020.
Team_swift at semeval-2020 task 9: Tiny data specialists through domain-specific pre-training on codemixed data. In *Proceedings of the Fourteenth Workshop on Semantic Evaluation*, pages 1310–1315.
Soumil Mandal and Karthick Nanmaran. 2018. Normalization of transliterated words in code-mixed data using seq2seq model & levenshtein distance. In Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text, pages 49–53.
Soumil Mandal and Anil Kumar Singh. 2018. Language identification in code-mixed data using multichannel neural networks and context capture. In Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text, pages 116–120.
Asrita Venkata Mandalam and Yashvardhan Sharma.
2021. Sentiment analysis of dravidian code mixed data. In *Proceedings of the First Workshop on* Speech and Language Technologies for Dravidian Languages, pages 46–54.
Sreeja Manghat, Sreeram Manghat, and Tanja Schultz.
2020. Malayalam-english code-switched: Grapheme to phoneme system. In *Proc. Interspeech 2020*, pages 4133–4137.
Sreeram Manghat, Sreeja Manghat, and Tanja Schultz.
2022. Normalization of code-switched text for speech synthesis. In *Proc. Interspeech 2022*, pages 4297–4301.
J. C. Marcadet, V. Fischer, and C. Waast-Richard. 2005.
A transformation-based learning approach to language identification for mixed-lingual text-to-speech synthesis. In *Proc. Interspeech 2005*, pages 2249–
2252.
Deepthi Mave, Suraj Maharjan, and Thamar Solorio.
2018. Language identification and analysis of codeswitched social media text. In *Proceedings of the* third workshop on computational approaches to linguistic code-switching, pages 51–61.
Laiba Mehnaz, Debanjan Mahata, Rakesh Gosangi, Uma Sushmitha Gunturi, Riya Jain, Gauri Gupta, Amardeep Kumar, Isabelle G Lee, Anish Acharya,
and Rajiv Shah. 2021. Gupshup: Summarizing opendomain code-switched conversations. In *Proceedings* of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6177–6192.
Elena Álvarez Mellado and Constantine Lignos. 2022.
Borrowing or codeswitching? annotating for finergrained distinctions in language mixing. In *Proceedings of the Thirteenth Language Resources and Evaluation Conference*, pages 3195–3201.
Gideon Mendels, Victor Soto, Aaron Jaech, and Julia Hirschberg. 2018. Collecting code-switched data from social media. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
Giovanni Molina, Fahad AlGhamdi, Mahmoud Ghoneim, Abdelati Hawwari, Nicolas ReyVillamizar, Mona Diab, and Thamar Solorio. 2016.
Overview for the second shared task on language identification in code-switched data. In Proceedings of the Second Workshop on Computational Approaches to Code Switching, pages 40–49.
Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng-Xin Yong, Hailey Schoelkopf, et al. 2022. Crosslingual generalization through multitask finetuning. *arXiv preprint* arXiv:2211.01786.
Siddhartha Mukherjee, Vinuthkumar Prasan, Anish Nediyanchath, Manan Shah, and Nikhil Kumar. 2019.
Robust deep learning based sentiment classification of code-mixed text. In *Proceedings of the 16th International Conference on Natural Language Processing*, pages 124–129.
Saida Mussakhojayeva, Yerbolat Khassanov, and Huseyin Atakan Varol. 2022a. Kazakhtts2: Extending the open-source kazakh tts corpus with more data, speakers, and topics. In *Proceedings of the* Thirteenth Language Resources and Evaluation Conference, pages 5404–5411.
Saida Mussakhojayeva, Yerbolat Khassanov, and Huseyin Atakan Varol. 2022b. Ksc2: An industrialscale open-source kazakh speech corpus. In Proc.
Interspeech 2022, pages 1367–1371.
Carol Myers-Scotton. 1997. *Duelling languages: Grammatical structure in codeswitching*. Oxford University Press.
Carol Myers-Scotton. 2005. *Multiple voices: An introduction to bilingualism*. John Wiley & Sons.
Ravindra Nayak and Raviraj Joshi. 2022. L3cubehingcorpus and hingbert: A code mixed hindi-english dataset and bert language models. In *Proceedings of* the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference, pages 7–12.
Tomáš Nekvinda and Ondˇrej Dušek. 2020. One model, many languages: Meta-learning for multilingual textto-speech. In *Proc. Interspeech 2020*, pages 2972–
2976.
Li Nguyen and Christopher Bryant. 2020. Canvec-the canberra vietnamese-english code-switching natural speech corpus. In *Proceedings of the 12th Language* Resources and Evaluation Conference, pages 4121–
4129.
Thomas Niesler and Febe de Wet. 2008. Accent identification in the presence of code-mixing. In *Proc.*
The Speaker and Language Recognition Workshop
(Odyssey 2008), page paper 27.
Thomas Niesler et al. 2018. A first south african corpus of multilingual code-switched soap opera speech.
In *Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC*
2018).
José Carlos Rosales Núñez and Guillaume Wisniewski.
2018. Analyse morpho-syntaxique en présence d'alternance codique (pos tagging of code switching).
In *Actes de la Conférence TALN. Volume 1-Articles* longs, articles courts de TALN, pages 473–480.
Nathaniel Oco and Rachel Edita Roxas. 2012. Pattern matching refinements to dictionary-based codeswitching point detection. In Proceedings of the 26th Pacific Asia Conference on Language, Information, and Computation, pages 229–236.
Daniela Oria and Akos Vetek. 2004. Multilingual email text processing for speech synthesis. In Proc.
Interspeech 2004, pages 841–844.
Alissa Ostapenko, Shuly Wintner, Melinda Fricke, and Yulia Tsvetkov. 2022. Speaker information can guide models to better inductive biases: A case study on predicting code-switching. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3853–3867.
Billian Khalayi Otundo and Martine Grice. 2022. Intonation in advice-giving in kenyan english and kiswahili. In *Proc. Speech Prosody 2022*, pages 150–
154.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al.
2022. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155.
¸Saziye Özate¸s, Arzucan Özgür, Tunga Güngör, and Özlem Çetinoglu. 2022. Improving code-switching ˘
dependency parsing with semi-supervised auxiliary tasks. In *Findings of the Association for Computational Linguistics: NAACL 2022*, pages 1159–1171.
¸Saziye Betül Özate¸s and Özlem Çetinoglu. 2021. A ˘
language-aware approach to code-switched morphological tagging. In Proceedings of the Fifth Workshop on Computational Approaches to Linguistic CodeSwitching, pages 72–83.
Daniel Palomino and José Ochoa-Luna. 2020.
Palomino-ochoa at semeval-2020 task 9: Robust system based on transformer for code-mixed sentiment classification. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 963–967.
Ayushi Pandey, Brij Mohan Lal Srivastava, and Suryakanth Gangashetty. 2017. Towards developing a phonetically balanced code-mixed speech corpus for hindi-english asr. In Proceedings of the 14th International Conference on Natural Language Processing (ICON-2017), pages 95–101.
Ayushi Pandey, Brij Mohan Lal Srivastava, Rohit Kumar, Bhanu Teja Nellore, Kasi Sai Teja, and Suryakanth V Gangashetty. 2018. Phonetically balanced code-mixed speech corpus for hindi-english automatic speech recognition. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
Kartikey Pant and Tanvi Dadu. 2020. Towards codeswitched classification exploiting constituent language resources. In *Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association* for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: Student Research Workshop, pages 37–43.
Evangelos Papalexakis, Dong Nguyen, and A Seza Dogruöz. 2014. Predicting code-switching in multi- ˘
lingual communication for immigrant communities.
In First Workshop on Computational Approaches to Code Switching (EMNLP 2014), pages 42–50. Association for Computational Linguistics (ACL).
Tanmay Parekh, Emily Ahn, Yulia Tsvetkov, and Alan W Black. 2020. Understanding linguistic accommodation in code-switched human-machine dialogues. In *Proceedings of the 24th Conference on* Computational Natural Language Learning, pages 565–577.
Apurva Parikh, Abhimanyu Singh Bisht, and Prasenjit Majumder. 2020. Irlab_daiict at semeval-2020 task 9: Machine learning and deep learning methods for sentiment analysis of code-mixed tweets. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1265–1269.
Dwija Parikh and Thamar Solorio. 2021. Normalization and back-transliteration for code-switched data. In Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code-Switching, pages 119–
124.
Parth Patwa, Gustavo Aguilar, Sudipta Kar, Suraj Pandey, Srinivas Pykl, Björn Gambäck, Tanmoy Chakraborty, Thamar Solorio, and Amitava Das.
2020. Semeval-2020 task 9: Overview of sentiment analysis of code-mixed tweets. In Proceedings of the fourteenth workshop on semantic evaluation, pages 774–790.
Nanyun Peng, Yiming Wang, and Mark Dredze.
2014. Learning polylingual topic models from codeswitched social media documents. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers),
pages 674–679.
Beat Pfister and Harald Romsdorfer. 2003. Mixedlingual text analysis for polyglot tts synthesis. In Proc. 8th European Conference on Speech Communication and Technology (Eurospeech 2003), pages 2037–2040.
Akshata Phadte and Gaurish Thakkar. 2017. Towards normalising konkani-english code-mixed social media text. In *Proceedings of the 14th International* Conference on Natural Language Processing (ICON2017), pages 85–94.
Page Piccinini and Marc Garellek. 2014. Prosodic cues to monolingual versus code-switching sentences in english and spanish. In *Proc. Speech Prosody 2014*,
pages 885–889.
Mario Piergallini, Rouzbeh Shirvani, Gauri Shankar Gautam, and Mohamed Chouikha. 2016. Word-level language identification and predicting codeswitching points in swahili-english language data. In Proceedings of the second workshop on computational approaches to code switching, pages 21–29.
Telmo Pires, Eva Schlinger, and Dan Garrette. 2019.
How multilingual is multilingual bert? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996–5001.
Shana Poplack. 1978. *Syntactic structure and social* function of code-switching, volume 2. Centro de Estudios Puertorriqueños,[City University of New York].
Shana Poplack. 1980. Sometimes i'll start a sentence in spanish y termino en espanol: toward a typology of code-switching. *Linguistics*, 18:581–618.
Anusha Prakash, Anju Leela Thomas, S. Umesh, and Hema A Murthy. 2019. Building multilingual endto-end speech synthesisers for indian languages. In Proc. 10th ISCA Workshop on Speech Synthesis (SSW
10), pages 194–199.
Archiki Prasad, Mohammad Ali Rehan, Shreya Pathak, and Preethi Jyothi. 2021. The effectiveness of intermediate-task training for code-switched natural language understanding. In *Proceedings of the 1st* Workshop on Multilingual Representation Learning, pages 176–190.
Adithya Pratapa, Gayatri Bhat, Monojit Choudhury, Sunayana Sitaram, Sandipan Dandapat, and Kalika Bali. 2018a. Language modeling for code-mixing:
The role of linguistic theory based synthetic data. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 1543–1553.
Adithya Pratapa and Monojit Choudhury. 2017. Quantitative characterization of code switching patterns in complex multi-party conversations: A case study on hindi movie scripts. In Proceedings of the 14th International Conference on Natural Language Processing (ICON-2017), pages 75–84.
Adithya Pratapa and Monojit Choudhury. 2021. Comparing grammatical theories of code-mixing. In *Proceedings of the Seventh Workshop on Noisy Usergenerated Text (W-NUT 2021)*, pages 158–167.
Adithya Pratapa, Monojit Choudhury, and Sunayana Sitaram. 2018b. Word embeddings for code-mixed language processing. In *Proceedings of the 2018* conference on empirical methods in natural language processing, pages 3067–3072.
Aman Priyanshu, Aleti Vardhan, Sudarshan Sivakumar, Supriti Vijay, and Nipuna Chhabra. 2021. "something something hota hai!" an explainable approach towards sentiment analysis on indian code-mixed data. In Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021), pages 437– 444.
Yao Qian, Houwei Cao, and Frank K Soong. 2008.
Hmm-based mixed-language (mandarin-english)
speech synthesis. In *2008 6th International Symposium on Chinese Spoken Language Processing*, pages 1–4. IEEE.
Zimeng Qiu, Yiyuan Li, Xinjian Li, Florian Metze, and William M. Campbell. 2020. Towards context-aware end-to-end code-switching speech recognition. In Proc. Interspeech 2020, pages 4776–4780.
Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. 2022.
Robust speech recognition via large-scale weak supervision. *arXiv preprint arXiv:2212.04356*.
Tathagata Raha, Sainik Mahata, Dipankar Das, and Sivaji Bandyopadhyay. 2019. Development of pos tagger for english-bengali code-mixed data. In *Proceedings of the 16th International Conference on* Natural Language Processing, pages 143–149.
Ratnavel Rajalakshmi, Yashwant Reddy, and Lokesh Kumar. 2021. Dlrg@ dravidianlangtech-eacl2021:
Transformer based approachfor offensive language identification on code-mixed tamil. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages, pages 357–362.
SaiKrishna Rallabandi and Alan W. Black. 2017. On building mixed lingual speech synthesis systems. In Proc. Interspeech 2017, pages 52–56.
SaiKrishna Rallabandi and Alan W. Black. 2019. Variational attention using articulatory priors for generating code mixed speech using monolingual corpora.
In *Proc. Interspeech 2019*, pages 3735–3739.
SaiKrishna Rallabandi, Sunayana Sitaram, and Alan W
Black. 2018. Automatic detection of code-switching style from acoustics. In Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching, pages 76–81.
Vikram Ramanarayanan and David Suendermann-Oeft.
2017. Jee haan, i'd like both, por favor: Elicitation of a code-switched corpus of hindi–english and spanish–english human–machine dialog. In *Proc.*
Interspeech 2017, pages 47–51.
Banothu Rambabu and Suryakanth V Gangashetty.
2018. Development of iiith hindi-english code mixed speech database. In Proc. 6th Workshop on Spoken Language Technologies for Under-Resourced Languages (SLTU 2018), pages 107–111.
Priya Rani, John Philip McCrae, and Theodorus Fransen.
2022. Mhe: Code-mixed corpora for similar language identification. In *Proceedings of the Thirteenth Language Resources and Evaluation Conference*, pages 3425–3433.
Priya Rani, Shardul Suryawanshi, Koustava Goswami, Bharathi Raja Chakravarthi, Theodorus Fransen, and John Philip McCrae. 2020. A comparative study of different state-of-the-art hate speech detection methods in hindi-english code-mixed data. In Proceedings of the second workshop on trolling, aggression and cyberbullying, pages 42–48.
Preeti Rao, Mugdha Pandya, Kamini Sabu, Kanhaiya Kumar, and Nandini Bondale. 2018. A study of lexical and prosodic cues to segmentation in a hindienglish code-switched discourse. In *Proc. Interspeech 2018*, pages 1918–1922.
Manikandan Ravikiran and Subbiah Annamalai. 2021.
Dosa: Dravidian code-mixed offensive span identification dataset. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages, pages 10–17.
Manikandan Ravikiran and Bharathi Raja Chakravarthi.
2022. Zero-shot code-mixed offensive span identification through rationale extraction. In *Proceedings* of the Second Workshop on Speech and Language Technologies for Dravidian Languages, pages 240–
247.
Manikandan Ravikiran, Bharathi Raja Chakravarthi, S Sangeetha, Ratnavel Rajalakshmi, Sajeetha Thavareesan, Rahul Ponnusamy, Shankar Mahadevan, et al.
2022. Findings of the shared task on offensive span identification fromcode-mixed tamil-english comments. In Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages, pages 261–270.
Xiaolin Ren, Xin He, and Yaxin Zhang. 2005. Mandarin/english mixed-lingual name recognition for mobile phone. In *INTERSPEECH*, pages 3373–3376.
Shruti Rijhwani, Royal Sequiera, Monojit Choudhury, Kalika Bali, and Chandra Shekhar Maddila. 2017.
Estimating code-switching on twitter with a novel generalized word-level language detection technique.
In *Proceedings of the 55th annual meeting of the* association for computational linguistics (volume 1: long papers), pages 1971–1982.
Harald Romsdorfer and Beat Pfister. 2005. Phonetic labeling and segmentation of mixed-lingual prosody databases. In *Proc. Interspeech 2005*, pages 3281–
3284.
Harald Romsdorfer and Beat Pfister. 2006. Character stream parsing of mixed-lingual text. In *Proc. Multilingual Language and Speech Processing (MULTILING 2006)*, page paper 021.
Mike Rosner and Paulseph-John Farrugia. 2007. A
tagging algorithm for mixed language identification in a noisy domain. In *Proc. Interspeech 2007*, pages 190–193.
Sebastian Ruder, Noah Constant, Jan Botha, Aditya Siddhant, Orhan Firat, Jinlan Fu, Pengfei Liu, Junjie Hu, Dan Garrette, Graham Neubig, et al. 2021. Xtreme-r:
Towards more challenging and nuanced multilingual evaluation. In *Proceedings of the 2021 Conference* on Empirical Methods in Natural Language Processing, pages 10215–10245.
Caroline Sabty, Mohamed Islam, and Slim Abdennadher. 2020. Contextual embeddings for arabic-english code-switched data. In *Proceedings of the Fifth Arabic Natural Language Processing Workshop*, pages 215–225.
Younes Samih, Suraj Maharjan, Mohammed Attia, Laura Kallmeyer, and Thamar Solorio. 2016a. Multilingual code-switching identification via lstm recurrent neural networks. In Proceedings of the second workshop on computational approaches to code switching, pages 50–59.
Younes Samih and Wolfgang Maier. 2016. An arabicmoroccan darija code-switched corpus. In *Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)*, pages 4170–4175.
Younes Samih, Wolfgang Maier, and Laura Kallmeyer.
2016b. Sawt: Sequence annotation web tool. In *Proceedings of the second workshop on computational* approaches to code switching, pages 65–70.
David Sankoff. 1998. The production of code-mixed discourse. In COLING 1998 Volume 1: The 17th International Conference on Computational Linguistics.
Sebastin Santy, Anirudh Srinivasan, and Monojit Choudhury. 2021. Bertologicomix: How does code-mixing interact with multilingual bert? In Proceedings of the Second Workshop on Domain Adaptation for NLP,
pages 111–121.
Sunil Saumya, Abhinav Kumar, and Jyoti Prakash Singh.
2021. Offensive language identification in dravidian code mixed social media text. In *Proceedings of the* first workshop on speech and language technologies for Dravidian languages, pages 36–45.
Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman ´
Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. 2022. Bloom: A 176bparameter open-access multilingual language model.
arXiv preprint arXiv:2211.05100.
Royal Sequiera, Monojit Choudhury, and Kalika Bali.
2015. Pos tagging of hindi-english code mixed text from social media: Some machine learning experiments. In Proceedings of the 12th international conference on natural language processing, pages 237–246.
Sanket Shah, Basil Abraham, Sunayana Sitaram, Vikas Joshi, et al. 2020. Learning to recognize code-switched speech without forgetting monolingual speech recognition. arXiv preprint arXiv:2006.00782.
Sanket Shah, Pratik Joshi, Sebastin Santy, and Sunayana Sitaram. 2019. Cossat: Code-switched speech annotation tool. In Proceedings of the First Workshop on Aggregating and Analysing Crowdsourced Annotations for NLP, pages 48–52.
Omar Sharif, Eftekhar Hossain, and Mohammed Moshiul Hoque. 2021. Nlp-cuet@
dravidianlangtech-eacl2021: Offensive language detection from multilingual code-mixed text using transformers. In *Proceedings of the First Workshop* on Speech and Language Technologies for Dravidian Languages, pages 255–261.
Arnav Sharma, Sakshi Gupta, Raveesh Motlani, Piyush Bansal, Manish Shrivastava, Radhika Mamidi, and Dipti Misra Sharma. 2016. Shallow parsing pipelinehindi-english code-mixed social media text. In *Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 1340–1345.
Yash Sharma, Basil Abraham, Karan Taneja, and Preethi Jyothi. 2020. Improving low resource code-switched asr using augmented code-switched tts. In *Proc. Interspeech 2020*, pages 4771–4775.
Zhijie Shen and Wu Guo. 2022. An improved deliberation network with text pre-training for codeswitching automatic speech recognition. In Proc.
Interspeech 2022, pages 3854–3858.
Rouzbeh Shirvani, Mario Piergallini, Gauri Shankar Gautam, and Mohamed Chouikha. 2016. The howard university system submission for the shared task in language identification in spanish-english codeswitching. In *Proceedings of the second workshop on computational approaches to code switching*,
pages 116–120.
Philippa Shoemark, James Kirby, and Sharon Goldwater.
2018. Inducing a lexicon of sociolinguistic variables from code-mixed text. In Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text, pages 1–6.
Prajwol Shrestha. 2014. Incremental n-gram approach for language identification in code-switched text. In Proceedings of the First Workshop on Computational Approaches to Code Switching, pages 133–138.
Prajwol Shrestha. 2016. Codeswitching detection via lexical features in conditional random fields. In Proceedings of the Second Workshop on Computational Approaches to Code Switching, pages 121–126.
Zhiwei Shuang, Shiyin Kang, Yong Qin, Lirong Dai, and Lianhong Cai. 2010. Hmm based tts for mixed language text. In Eleventh Annual Conference of the International Speech Communication Association.
Utpal Kumar Sikdar, Biswanath Barik, and Björn Gambäck. 2018. Named entity recognition on codeswitched data using conditional random fields. In Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching, pages 115–
119.
Utpal Kumar Sikdar and Björn Gambäck. 2016. Language identification in code-switched text using conditional random fields and babelnet. In *Proceedings of the Second Workshop on Computational Approaches to Code Switching*, pages 127–131.
Anand Singh and Tien-Ping Tan. 2018. Evaluating codeswitched malay-english speech using time delay neural networks. In Proc. 6th Workshop on Spoken Language Technologies for Under-Resourced Languages
(SLTU 2018), pages 197–200.
Kushagra Singh, Indira Sen, and Ponnurangam Kumaraguru. 2018. Language identification and named entity recognition in hinglish code mixed tweets. In Proceedings of ACL 2018, Student Research Workshop, pages 52–58.
Pranaydeep Singh and Els Lefever. 2020. Sentiment analysis for hinglish code-mixed tweets by means of cross-lingual word embeddings. In *Proceedings of* the The 4th Workshop on Computational Approaches to Code Switching, pages 45–51.
Sunayana Sitaram and Alan W Black. 2016. Speech synthesis of code-mixed text. In *Proceedings of* the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 3422–
3428.
Sunayana Sitaram, Khyathi Raghavi Chandu, Sai Krishna Rallabandi, and Alan W Black. 2019. A survey of code-switched speech and language processing.
arXiv preprint arXiv:1904.00784.
Sunayana Sitaram, Sai Krishna Rallabandi, Shruti Rijhwani, and Alan W. Black. 2016. Experiments with cross-lingual systems for synthesis of code-mixed text. In *Proc. 9th ISCA Workshop on Speech Synthesis Workshop (SSW 9)*, pages 76–81.
Sunit Sivasankaran, Brij Mohan Lal Srivastava, Sunayana Sitaram, Kalika Bali, and Monojit Choudhury. 2018. Phone merging for code-switched speech recognition. In Third Workshop on Computational Approaches to Linguistic Code-switching.
Thamar Solorio, Elizabeth Blair, Suraj Maharjan, Steven Bethard, Mona Diab, Mahmoud Ghoneim, Abdelati Hawwari, Fahad AlGhamdi, Julia Hirschberg, Alison Chang, et al. 2014. Overview for the first shared task on language identification in code-switched data. In Proceedings of the First Workshop on Computational Approaches to Code Switching, pages 62–72.
Thamar Solorio and Yang Liu. 2008a. Learning to predict code-switching points. In *Proceedings of the* 2008 Conference on Empirical Methods in Natural Language Processing, pages 973–981.
Thamar Solorio and Yang Liu. 2008b. Part-of-speech tagging for english-spanish code-switched text. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 1051–1060.
Tongtong Song, Qiang Xu, Meng Ge, Longbiao Wang, Hao Shi, Yongjie Lv, Yuqin Lin, and Jianwu Dang.
2022. Language-specific characteristic assistance for code-switching speech recognition. In *Proc. Interspeech 2022*, pages 3924–3928.
Sanket Sonu, Rejwanul Haque, Mohammed Hasanuzzaman, Paul Stynes, and Pramod Pathak. 2022. Identifying emotions in code mixed hindi-english tweets.
In *Proceedings of the WILDRE-6 Workshop within* the 13th Language Resources and Evaluation Conference, pages 35–41.
Victor Soto, Nishmar Cestero, and Julia Hirschberg.
2018. The role of cognate words, pos tags and entrainment in code-switching. In *Proc. Interspeech* 2018, pages 1938–1942.
Victor Soto and Julia Hirschberg. 2017. Crowdsourcing universal part-of-speech tags for code-switching. In Proc. Interspeech 2017, pages 77–81.
Victor Soto and Julia Hirschberg. 2018. Joint part-ofspeech and language id tagging for code-switched data. In *Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching*,
pages 1–10.
Victor Soto and Julia Hirschberg. 2019. Improving code-switched language modeling performance using cognate features. In *Proc. Interspeech 2019*, pages 3725–3729.
Mithun Kumar SR, Lov Kumar, and Aruna Malapati.
2022. Sentiment analysis on code-switched dravidian languages with kernel based extreme learning machines. In Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages, pages 184–190.
Dama Sravani, Lalitha Kameswari, and Radhika Mamidi. 2021. Political discourse analysis: a case study of code mixing and code switching in political speeches. In *Proceedings of the Fifth Workshop* on Computational Approaches to Linguistic CodeSwitching, pages 1–5.
Anirudh Srinivasan. 2020. Msr india at semeval-2020 task 9: Multilingual models can do code-mixing too.
In *Proceedings of the Fourteenth Workshop on Semantic Evaluation*, pages 951–956.
Anirudh Srinivasan, Sandipan Dandapat, and Monojit Choudhury. 2020. Code-mixed parse trees and how to find them. In Proceedings of the The 4th Workshop on Computational Approaches to Code Switching, pages 57–64.
Vamshi Krishna Srirangam, Appidi Abhinav Reddy, Vinay Singh, and Manish Shrivastava. 2019. Corpus creation and analysis for named entity recognition in telugu-english code-mixed social media data.
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 183–189.
Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615.
Aditya Srivastava and V Harsha Vardhan. 2020. Hcms at semeval-2020 task 9: A neural approach to sentiment analysis for code-mixed texts. In *Proceedings* of the Fourteenth Workshop on Semantic Evaluation, pages 1253–1258.
Brij Mohan Lal Srivastava and Sunayana Sitaram. 2018.
Homophone identification and merging for codeswitched speech recognition. In Proc. Interspeech 2018, pages 1943–1947.
Vivek Srivastava and Mayank Singh. 2020a. Iit gandhinagar at semeval-2020 task 9: Code-mixed sentiment classification using candidate sentence generation and selection. In *Proceedings of the Fourteenth Workshop on Semantic Evaluation*, pages 1259–1264.
Vivek Srivastava and Mayank Singh. 2020b. Phinc: A
parallel hinglish social media code-mixed corpus for machine translation. In *Proceedings of the Sixth*
Workshop on Noisy User-generated Text (W-NUT
2020), pages 41–49.
Vivek Srivastava and Mayank Singh. 2021a. Challenges and limitations with the metrics measuring the complexity of code-mixed text. In Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code-Switching, pages 6–14.
Vivek Srivastava and Mayank Singh. 2021b. Hinge: A
dataset for generation and evaluation of code-mixed hinglish text. In *Proceedings of the 2nd Workshop on* Evaluation and Comparison of NLP Systems, pages 200–208.
Vivek Srivastava and Mayank Singh. 2021c. Quality evaluation of the low-resource synthetically generated code-mixed hinglish text. In Proceedings of the 14th International Conference on Natural Language Generation, pages 314–319.
Sara Stymne et al. 2020. Evaluating word embeddings for indonesian–english code-mixed text based on synthetic data. In *Proceedings of the The 4th Workshop* on Computational Approaches to Code Switching, pages 26–35.
Ahmed Sultan, Mahmoud Salim, Amina Gaber, and Islam El Hosary. 2020. Wessa at semeval-2020 task 9: Code-mixed sentiment analysis using transformers. In *Proceedings of the Fourteenth Workshop on* Semantic Evaluation, pages 1342–1347.
Charles Sutton, Andrew McCallum, et al. 2012. An introduction to conditional random fields. *Foundations* and Trends® in Machine Learning, 4(4):267–373.
Krithika Swaminathan, K Divyasri, GL Gayathri, Thenmozhi Durairaj, and B Bharathi. 2022. Pandas@
abusive comment detection in tamil code-mixed data using custom embeddings with labse. In *Proceedings of the Second Workshop on Speech and Language Technologies for Dravidian Languages*, pages 112–119.
Chihiro Taguchi, Sei Iwata, and Taro Watanabe. 2022.
Universal dependencies treebank for tatar: Incorporating intra-word code-switching information. In Proceedings of the Workshop on Resources and Technologies for Indigenous, Endangered and Lesserresourced Languages in Eurasia within the 13th Language Resources and Evaluation Conference, pages 95–104.
Chihiro Taguchi, Yusuke Sakai, and Taro Watanabe. 2021. Transliteration for low-resource codeswitching texts: Building an automatic cyrillic-tolatin converter for tatar. In Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code-Switching, pages 133–140.
Karan Taneja, Satarupa Guha, Preethi Jyothi, and Basil Abraham. 2019. Exploiting monolingual speech corpora for code-mixed speech recognition. In *Proc.*
Interspeech 2019, pages 2150–2154.
Ishan Tarunesh, Syamantak Kumar, and Preethi Jyothi.
2021. From machine translation to code-switching:
Generating high-quality code-switched text. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th* International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3154–
3169.
Anju Leela Thomas, Anusha Prakash, Arun Baby, and Hema Murthy. 2018a. Code-switching in indic speech synthesisers. In *Proc. Interspeech 2018*,
pages 1948–1952.
Anju Leela Thomas, Anusha Prakash, Arun Baby, and Hema A Murthy. 2018b. Code-switching in indic speech synthesisers. In *INTERSPEECH*, pages 1948–
1952.
Jinchuan Tian, Jianwei Yu, Chunlei Zhang, Yuexian Zou, and Dong Yu. 2022. Lae: Language-aware encoder for monolingual and multilingual asr. In Proc. Interspeech 2022, pages 3178–3182.
Shashwat Trivedi, Harsh Rangwani, and Anil Kumar Singh. 2018. Iit (bhu) submission for the acl shared task on named entity recognition on code-switched data. In *Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching*,
pages 148–153.
G Richard Tucker. 2001. A global perspective on bilingualism and bilingual education. *GEORGETOWN*
UNIVERSITY ROUND TABLE ON LANGUAGES
AND LINGUISTICS 1999, page 332.
Ewald van der Westhuizen and Thomas Niesler. 2017.
Synthesising isizulu-english code-switch bigrams using word embeddings. In *INTERSPEECH*, pages 72–76.
Charangan Vasantharajan and Uthayasanker Thayasivam. 2021. Hypers@ dravidianlangtech-eacl2021:
Offensive language identification in dravidian codemixed youtube comments and posts. In *Proceedings* of the first workshop on speech and language technologies for dravidian languages, pages 195–202.
Deepanshu Vijay, Aditya Bohra, Vinay Singh, Syed Sarfaraz Akhtar, and Manish Shrivastava. 2018. Corpus creation and emotion prediction for hindi-english code-mixed social media text. In Proceedings of the 2018 conference of the North American chapter of the Association for Computational Linguistics: student research workshop, pages 128–135.
David Vilares, Miguel A Alonso, and Carlos GómezRodríguez. 2016. En-es-cs: An english-spanish codeswitching twitter corpus for multilingual sentiment analysis. In *Proceedings of the Tenth International* Conference on Language Resources and Evaluation
(LREC'16), pages 4149–4153.
Martin Volk and Simon Clematide. 2014. Detecting code-switching in a multilingual alpine heritage corpus. In *Proceedings of the first workshop on computational approaches to code switching*, pages 24–33.
Martin Volk, Lukas Fischer, Patricia Scheurer, Bernard Silvan Schroffenegger, Raphael Schwitter, Phillip Ströbel, and Benjamin Suter. 2022. Nunc profana tractemus. detecting code-switching in a large corpus of 16th century letters. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 2901–2908.
Clare Voss, Stephen Tratz, Jamal Laoudi, and Douglas Briesch. 2014. Finding romanized arabic dialect in code-mixed tweets. In *Proceedings of the Ninth International Conference on Language Resources and* Evaluation (LREC'14), pages 2249–2253.
Ngoc Thang Vu and Tanja Schultz. 2014. Exploration of the impact of maximum entropy in recurrent neural network language models for code-switching speech.
In *Proceedings of The First Workshop on Computational Approaches to Code Switching*, pages 34–41.
Yogarshi Vyas, Spandana Gella, Jatin Sharma, Kalika Bali, and Monojit Choudhury. 2014. Pos tagging of english-hindi code-mixed social media content.
In *Proceedings of the 2014 conference on empirical* methods in natural language processing (EMNLP),
pages 974–979.
Anshul Wadhawan and Akshita Aggarwal. 2021. Towards emotion recognition in hindi-english codemixed data: A transformer based approach. In Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 195–202.
Changhan Wang, Kyunghyun Cho, and Douwe Kiela.
2018. Code-switched named entity recognition with embedding attention. In *Proceedings of the Third* Workshop on Computational Approaches to Linguistic Code-Switching, pages 154–158.
Jisung Wang, Jihwan Kim, Sangki Kim, and Yeha Lee.
2020. Exploring lexicon-free modeling units for endto-end korean and korean-english code-switching speech recognition. In *Proc. Interspeech 2020*, pages 1072–1075.
Qinyi Wang, Emre Yılmaz, Adem Derinel, and Haizhou Li. 2019. Code-switching detection using asrgenerated language posteriors. In Proc. Interspeech 2019, pages 3740–3744.
Zhongqing Wang, Sophia Lee, Shoushan Li, and Guodong Zhou. 2015. Emotion detection in codeswitching texts via bilingual and sentimental information. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers),
pages 763–768.
Zhongqing Wang, Yue Zhang, Sophia Lee, Shoushan Li, and Guodong Zhou. 2016. A bilingual attention network for code-switched emotion prediction. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1624–1634.
Jochen Weiner, Ngoc Thang Vu, Dominic Telaar, Florian Metze, Tanja Schultz, Dau-Cheng Lyu, EngSiong Chng, and Haizhou Li. 2012a. Integration of language identification into a recognition system for spoken conversations containing code-switches. In Spoken Language Technologies for Under-Resourced Languages.
Jochen Weiner, Ngoc Thang Vu, Dominic Telaar, Florian Metze, Tanja Schultz, Dau-Cheng Lyu, EngSiong Chng, and Haizhou Li. 2012b. Integration of language identification into a recognition system for spoken conversations containing code-switches. In Proc. 3rd Workshop on Spoken Language Technologies for Under-Resourced Languages (SLTU 2012),
pages 76–79.
Christopher M White, Sanjeev Khudanpur, and James K
Baker. 2008. An investigation of acoustic models for multilingual code-switching. In Ninth Annual Conference of the International Speech Communication Association.
Matthew Wiesner, Mousmita Sarma, Ashish Arora, Desh Raj, Dongji Gao, Ruizhe Huang, Supreet Preet, Moris Johnson, Zikra Iqbal, Nagendra Goel, Jan Trmal, Leibny Paola García Perera, and Sanjeev Khudanpur. 2021. Training hybrid models on noisy transliterated transcripts for code-switched speech recognition. In *Proc. Interspeech 2021*, pages 2906–
2910.
Nick Wilkinson, Astik Biswas, Emre Yilmaz, Febe De Wet, Thomas Niesler, et al. 2020. Semisupervised acoustic modelling for five-lingual codeswitched asr using automatically-segmented soap opera speech. In Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages
(CCURL), pages 70–78.
Genta Winata, Shijie Wu, Mayank Kulkarni, Thamar Solorio, and Daniel Preo¸tiuc-Pietro. 2022. Crosslingual few-shot learning on unseen languages. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing, pages 777–791.
Genta Indra Winata, Samuel Cahyawijaya, Zhaojiang Lin, Zihan Liu, Peng Xu, and Pascale Fung. 2020.
Meta-transfer learning for code-switched speech recognition. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 3770–3776.
Genta Indra Winata, Samuel Cahyawijaya, Zihan Liu, Zhaojiang Lin, Andrea Madotto, and Pascale Fung.
2021a. Are multilingual models effective in codeswitching? In *Proceedings of the Fifth Workshop* on Computational Approaches to Linguistic CodeSwitching, pages 142–153.
Genta Indra Winata, Zhaojiang Lin, and Pascale Fung.
2019a. Learning multilingual meta-embeddings for code-switching named entity recognition. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 181–
186.
Genta Indra Winata, Zhaojiang Lin, Jamin Shin, Zihan Liu, and Pascale Fung. 2019b. Hierarchical metaembeddings for code-switching named entity recognition. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3541–3547.
Genta Indra Winata, Andrea Madotto, Zhaojiang Lin, Rosanne Liu, Jason Yosinski, and Pascale Fung. 2021b. Language models are few-shot multilingual learners. In *Proceedings of the 1st Workshop on Multilingual Representation Learning*, pages 1–15.
Genta Indra Winata, Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2018a. Code-switching language modeling using syntax-aware multi-task learning. In Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching, pages 62–
67.
Genta Indra Winata, Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2019c. Code-switched language models using neural based synthetic data from parallel sentences. In Proceedings of the 23rd Conference on Computational Natural Language Learning
(CoNLL), pages 271–280.
Genta Indra Winata, Chien-Sheng Wu, Andrea Madotto, and Pascale Fung. 2018b. Bilingual character representation for efficiently addressing out-of-vocabulary words in code-switching named entity recognition.
In *Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching*,
pages 110–114.
Jane Wottawa, Amazouz Djegdjiga, Martine AddaDecker, and Lori Lamel. 2018. Studying vowel variation in french-algerian arabic code-switched speech.
In *Proc. Interspeech 2018*, pages 2753–2757.
Qi Wu, Peng Wang, and Chenghao Huang. 2020. Meistermorxrc at semeval-2020 task 9: Fine-tune bert and multitask learning for sentiment analysis of codemixed tweets. In *Proceedings of the Fourteenth Workshop on Semantic Evaluation*, pages 1294–1297.
Yi-Lun Wu, Chaio-Wen Hsieh, Wei-Hsuan Lin, ChunYi Liu, and Liang-Chih Yu. 2011. Unknown word extraction from multilingual code-switching sentences. In *ROCLING 2011 Poster Papers*, pages 349–360, Taipei, Taiwan. The Association for Computational Linguistics and Chinese Language Processing (ACLCLP).
Meng Xuan Xia. 2016. Codeswitching language identification using subword information enriched word vectors. In *Proceedings of the second workshop on* computational approaches to code switching, pages 132–136.
Meng Xuan Xia and Jackie Chi Kit Cheung. 2016. Accurate pinyin-english codeswitched language identification. In Proceedings of the Second Workshop on Computational Approaches to Code Switching, pages 71–79.
Xinyuan Xia, Lu Xiao, Kun Yang, and Yueyue Wang.
2022. Identifying tension in holocaust survivors' interview: Code-switching/code-mixing as cues. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 1490–1495.
Haihua Xu, Van Tung Pham, Zin Tun Kyaw, Zhi Hao Lim, Eng Siong Chng, and Haizhou Li. 2018.
Mandarin-english code-switching speech recognition.
In *Proc. Interspeech 2018*, pages 554–555.
Jitao Xu and François Yvon. 2021. Can you traducir this? machine translation for code-switched input. In Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code-Switching, pages 84–
94.
Qihui Xu, Magdalena Markowska, Martin Chodorow, and Ping Li. 2021. A network science approach to bilingual code-switching. Proceedings of the Society for Computation in Linguistics, 4(1):18–27.
Liumeng Xue, Wei Song, Guanghui Xu, Lei Xie, and Zhizheng Wu. 2019. Building a mixed-lingual neural tts system with only monolingual data. In Proc.
Interspeech 2019, pages 2060–2064.
Zhen Yang, Bojie Hu, Ambyera Han, Shen Huang, and Qi Ju. 2020. Csp: Code-switching pre-training for neural machine translation. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2624–2636.
Lingxuan Ye, Gaofeng Cheng, Runyan Yang, Zehui Yang, Sanli Tian, Pengyuan Zhang, and Yonghong Yan. 2022. Improving recognition of out-ofvocabulary words in e2e code-switching asr by fusing speech generation methods. In Proc. Interspeech 2022, pages 3163–3167.
Yin-Lai Yeong and Tien-Ping Tan. 2010. Language identification of code switching malay-english words using syllable structure information. In *Proc. Spoken* Language Technologies for Under-Resourced Languages, pages 142–145.
Yin-Lai Yeong and Tien-Ping Tan. 2014. Language identification of code switching sentences and multilingual sentences of under-resourced languages by using multi structural word information. In Proc.
Interspeech 2014, pages 3052–3055.
E Yilmaz, H Heuvel, and DA van Leeuwen. 2018.
Acoustic and textual data augmentation for improved asr of code-switching speech. In Proceedings of Interspeech, pages 1933–1937. Hyderabad, India:
ISCA.
Emre Yilmaz, Maaike Andringa, Sigrid Kingma, Jelske Dijkstra, Frits van der Kuip, Hans Van de Velde, Frederik Kampstra, Jouke Algra, Henk van den Heuvel, and David van Leeuwen. 2016. A longitudinal bilingual frisian-dutch radio broadcast database designed for code-switching research. In *Proceedings* of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 4666–
4669.
Emre Yılmaz, Astik Biswas, Ewald van der Westhuizen, Febe de Wet, and Thomas Niesler. 2018. Building a unified code-switching asr system for south african languages. *Proceedings of Interspeech*.
Emre Yilmaz, Henk Van Den Heuvel, and David Van Leeuwen. 2018. Code-switching detection with dataaugmented acoustic and language models. In Proc.
6th Workshop on Spoken Language Technologies for Under-Resourced Languages (SLTU 2018), pages 127–131.
Zeynep Yirmibe¸soglu and Gül¸sen Eryi ˘ git. 2018. Detect- ˘
ing code-switching between turkish-english language pair. In *Proceedings of the 2018 EMNLP Workshop* W-NUT: The 4th Workshop on Noisy User-generated Text, pages 110–115.
Zheng-Xin Yong, Ruochen Zhang, Jessica Zosa Forde, Skyler Wang, Samuel Cahyawijaya, Holy Lovenia, Genta Indra Winata, Lintang Sutawika, Jan Christian Blaise Cruz, Long Phan, Yin Lin Tan, and Alham Fikri Aji. 2023. Prompting multilingual large language models to generate code-mixed texts: The case of south east asian languages.
Shan-Ruei You, Shih-Chieh Chien, Chih-Hsing Hsu, Ke-Shiu Chen, Jia-Jang Tu, Jeng Shien Lin, and SenChia Chang. 2004. Chinese-english mixed-lingual keyword spotting. In *2004 International Symposium* on Chinese Spoken Language Processing, pages 237–
240. IEEE.
Fu-Hao Yu and Kuan-Yu Chen. 2020. A preliminary study on leveraging meta learning technique for codeswitching speech recognition. In Proceedings of the 32nd Conference on Computational Linguistics and Speech Processing (ROCLING 2020), pages 136–
147.
Liang-Chih Yu, Wei-Cheng He, and Wei-Nan Chien.
2012. A language modeling approach to identifying code-switched sentences and words. In *Proceedings* of the Second CIPS-SIGHAN Joint Conference on Chinese Language Processing, pages 3–8.
Emre Yılmaz, Samuel Cohen, Xianghu Yue, David A.
van Leeuwen, and Haizhou Li. 2019. Multi-graph decoding for code-switching asr. In *Proc. Interspeech* 2019, pages 3750–3754.
Emre Yılmaz, Jelske Dijkstra, Hans Van de Velde, Frederik Kampstra, Jouke Algra, Henk van den Heuvel, and David Van Leeuwen. 2017a. Longitudinal speaker clustering and verification corpus with code-switching frisian-dutch speech. In *Proc. Interspeech 2017*, pages 37–41.
Emre Yılmaz, Henk van den Heuvel, Jelske Dijkstra, Hans Van de Velde, Frederik Kampstra, Jouke Algra, and David Van Leeuwen. 2016. Open source speech and language resources for frisian. In *Proc.*
Interspeech 2016, pages 1536–1540.
Emre Yılmaz, Henk van den Heuvel, and David Van Leeuwen. 2017b. Exploiting untranscribed broadcast data for improved code-switching detection. In Proc.
Interspeech 2017, pages 42–46.
Emre Yılmaz, Henk van den Heuvel, and David van Leeuwen. 2018. Acoustic and textual data augmentation for improved asr of code-switching speech. In Proc. Interspeech 2018, pages 1933–1937.
George-Eduard Zaharia, George-Alexandru Vlad, Dumitru-Clementin Cercel, Traian Rebedea, and Costin Chiru. 2020. Upb at semeval-2020 task 9:
Identifying sentiment in code-mixed social media texts using transformers and multi-task learning. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1322–1330.
Zhiping Zeng, Yerbolat Khassanov, Van Tung Pham, Haihua Xu, Eng Siong Chng, and Haizhou Li. 2019.
On the end-to-end solution to mandarin-english codeswitching speech recognition. In *Proc. Interspeech* 2019, pages 2165–2169.
Haobo Zhang, Haihua Xu, Van Tung Pham, Hao Huang, and Eng Siong Chng. 2020. Monolingual data selection analysis for english-mandarin hybrid codeswitching speech recognition. In *Proc. Interspeech* 2020, pages 2392–2396.
Shiliang Zhang, Yuan Liu, Ming Lei, Bin Ma, and Lei Xie. 2019. Towards language-universal mandarinenglish speech recognition. In Proc. Interspeech 2019, pages 2170–2174.
Shuai Zhang, Jiangyan Yi, Zhengkun Tian, Ye Bai, Jianhua Tao, Xuefei Liu, and Zhengqi Wen. 2021a. Endto-end spelling correction conditioned on acoustic feature for code-switching speech recognition. In Proc. Interspeech 2021, pages 266–270.
Shuai Zhang, Jiangyan Yi, Zhengkun Tian, Jianhua Tao, Yu Ting Yeung, and Liqun Deng. 2022. reducing multilingual context confusion for end-to-end codeswitching automatic speech recognition. In Proc.
Interspeech 2022, pages 3894–3898.
Wenxuan Zhang, Ruidan He, Haiyun Peng, Lidong Bing, and Wai Lam. 2021b. Cross-lingual aspectbased sentiment analysis with aspect term codeswitching. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 9220–9230.
Yi Zhang and Jian-Hua Tao. 2008. Prosody modification on mixed-language speech synthesis. In Proc.
International Symposium on Chinese Spoken Language Processing, pages 253–256.
Shengkui Zhao, Trung Hieu Nguyen, Hao Wang, and Bin Ma. 2020. Towards natural bilingual and codeswitched speech synthesis based on mix of monolingual recordings and cross-lingual voice conversion.
In *Proc. Interspeech 2020*, pages 2927–2931.
Xinyuan Zhou, Emre Yılmaz, Yanhua Long, Yijie Li, and Haizhou Li. 2020. Multi-encoder-decoder transformer for code-switching speech recognition. In Proc. Interspeech 2020, pages 1042–1046.
Yueying Zhu, Xiaobing Zhou, Hongling Li, and Kunjie Dong. 2020. Zyy1510 team at semeval-2020 task 9:
Sentiment analysis for code-mixed social media text with sub-word level representations. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1354–1359.
## A Annotation Catalog
We release the annotation of all papers we use in the survey.
## A.1 *Cl Anthology
Bilingual Table 7 shows the annotation for papers with African-English. Table 8 shows the annotation for papers with East Asian-English languages. Table 9 shows the annotation for papers with European-English languages. Table 10 shows the annotation for papers with Middle EasternEnglish languages. Table 11 and Table 12 show the annotation for papers with South Asian-English languages. Table 13 shows the annotation for papers with South East Asian-English languages. Table 14 shows the annotation for papers with a combination of a language with a dialect. Table 15 shows the annotation for papers with languages in the same family. Table 16 shows the annotation for papers with languages in different families.
Trilingual Table 17 shows the annotation for papers with three languages.
4+ Table 18 shows the annotation for papers with four or more languages.
## A.2 Isca Proceeding
Bilingual Table 19 shows the annotation for papers with African-English. Table 20 shows the annotation for papers with East Asian-English languages. Table 21 shows the annotation for papers with European-English languages. Table 22 shows the annotation for papers with Middle EasternEnglish languages. Table 23 shows the annotation for papers with South Asian-English languages. Table 24 shows the annotation for papers with South East Asian-English languages. Table 25 shows the annotation for papers with a combination of a language with a dialect. Table 26 shows the annotation for papers with languages in the same family.
Table 27 shows the annotation for papers with languages in different families.
Trilingual Table 28 shows the annotation for papers with three languages.
4+ Table 29 shows the annotation for papers with four or more languages.
| Paper | Proceeding | IsiZulu | Swahili | isiXhosa | Setswana | Sesotho |
|----------------------------|----------------|-----------|-----------|------------|------------|-----------|
| 5 | 1 | 3 | 3 | 3 | | |
| (Joshi, 1982) | COLING | ✓ | | | | |
| (Piergallini et al., 2016) | CALCS | ✓ | | | | |
| (Niesler et al., 2018) | LREC | ✓ | ✓ | ✓ | ✓ | |
| (Biswas et al., 2020) | CALCS | ✓ | | | | |
| (Wilkinson et al., 2020) | SLTU and CCURL | ✓ | ✓ | ✓ | ✓ | |
| (Biswas et al., 2020) | LREC | ✓ | ✓ | ✓ | ✓ | |
| Paper | Proceeding | Chinese | Cantonese | Korean |
|-----------------------------|--------------|-----------|-------------|----------|
| 20 | 1 | 1 | | |
| (Fung et al., 1999) | ACL | ✓ | | |
| (Chan et al., 2009) | IJCLCLP | ✓ | | |
| (Li et al., 2012) | LREC | ✓ | | |
| (Peng et al., 2014) | ACL-IJCNLP | ✓ | | |
| (Li and Fung, 2014) | EMNLP | ✓ | | |
| (Solorio et al., 2014) | CALCS | ✓ | | |
| (Chittaranjan et al., 2014) | CALCS | ✓ | | |
| (Lin et al., 2014) | CALCS | ✓ | | |
| (Jain and Bhat, 2014) | CALCS | ✓ | | |
| (King et al., 2014) | CALCS | ✓ | | |
| (Huang and Yates, 2014) | EACL | ✓ | | |
| (Wang et al., 2015) | ACL-IJCNLP | ✓ | | |
| (Gambäck and Das, 2016) | LREC | ✓ | | |
| (Wang et al., 2016) | COLING | ✓ | | |
| (Çetinoglu et al. ˘ , 2016) | CALCS | ✓ | | |
| (Xia and Cheung, 2016) | CALCS | ✓ | | |
| (Yang et al., 2020) | EMNLP | ✓ | | |
| (Calvillo et al., 2020) | EMNLP | ✓ | | |
| (Lin and Chen, 2020) | ROCLING | ✓ | | |
| (Cho et al., 2020) | CALCS | ✓ | | |
| (Lin and Chen, 2021) | ROCLING | ✓ | | |
| (Lovenia et al., 2021) | LREC | ✓ | | |
![31_image_0.png](31_image_0.png)
Paper Proceeding Spanish French Portugese Polish German Dutch Finnish
![32_image_0.png](32_image_0.png)
(Sankoff, 1998) COLING ✓ (Solorio and Liu, 2008a) EMNLP ✓
(Solorio and Liu, 2008b) EMNLP ✓ (Peng et al., 2014) ACL-IJCNLP ✓ (Solorio et al., 2014) CALCS ✓ (Chittaranjan et al., 2014) CALCS ✓ (Lin et al., 2014) CALCS ✓ (Jain and Bhat, 2014) CALCS ✓ (King et al., 2014) CALCS ✓ (Carpuat, 2014) CALCS ✓ (Barman et al., 2014b) CALCS ✓ (Shrestha, 2014) CALCS ✓ (Bar and Dershowitz, 2014) CALCS ✓ (Gambäck and Das, 2016) LREC ✓
(Vilares et al., 2016) LREC ✓
(Çetinoglu et al. ˘ , 2016) CALCS ✓ (Guzman et al., 2016) CALCS ✓ (Molina et al., 2016) CALCS ✓ (Samih et al., 2016a) CALCS ✓
(Jaech et al., 2016) CALCS ✓ (AlGhamdi et al., 2016) CALCS ✓ (Al-Badrashiny and Diab, 2016) CALCS ✓ (Chanda et al., 2016a) CALCS ✓ (Shirvani et al., 2016) CALCS ✓ (Shrestha, 2016) CALCS ✓ (Sikdar and Gambäck, 2016) CALCS ✓ (Xia, 2016) CALCS ✓ (Duong et al., 2017) CoNLL ✓
(Rijhwani et al., 2017) ACL ✓ ✓ ✓ ✓
(Choudhury et al., 2017) ICON ✓ (Núñez and Wisniewski, 2018) TALN PFIA ✓ (Pratapa et al., 2018b) EMNLP ✓ (Mendels et al., 2018) LREC ✓ (Soto and Hirschberg, 2018) CALCS ✓ (Mave et al., 2018) CALCS ✓ (Bullock et al., 2018a) CALCS ✓ (Rallabandi et al., 2018) CALCS ✓ (Bawa et al., 2018) CALCS ✓ (Jain et al., 2018) CALCS ✓ (Winata et al., 2018b) CALCS ✓ (Sikdar et al., 2018) CALCS ✓
(Janke et al., 2018) CALCS ✓ (Geetha et al., 2018) CALCS ✓
(Claeser et al., 2018) CALCS ✓ (Aguilar et al., 2018) CALCS ✓ (Trivedi et al., 2018) CALCS ✓ (Wang et al., 2018) CALCS ✓ (Gonen and Goldberg, 2019) EMNLP ✓ (Yang et al., 2020) EMNLP ✓ ✓ (Khanuja et al., 2020b) ACL ✓ (Aguilar and Solorio, 2020) ACL ✓ (Cameron, 2020) JEP ✓ (Ahn et al., 2020) SCiL ✓ (Srinivasan et al., 2020) CALCS ✓
(Patwa et al., 2020) SemEval ✓
(De Leon et al., 2020) SemEval ✓
(Aparaschivei et al., 2020) SemEval ✓ (Kong et al., 2020) SemEval ✓ (Angel et al., 2020) SemEval ✓ (Palomino and Ochoa-Luna, 2020) SemEval ✓ (Ma et al., 2020) SemEval ✓ (Kumar et al., 2020) SemEval ✓ (Advani et al., 2020) SemEval ✓ (Javdan et al., 2020) SemEval ✓ (Wu et al., 2020) SemEval ✓ (Zaharia et al., 2020) SemEval ✓ (Sultan et al., 2020) SemEval ✓ (Zhu et al., 2020) SemEval ✓ (Parekh et al., 2020) CoNLL ✓
(Gupta et al., 2020) Findings of EMNLP ✓ ✓ ✓
(Aguilar et al., 2020) LREC ✓ (Iliescu et al., 2021) CALCS ✓ (Xu and Yvon, 2021) CALCS ✓ ✓ (Gupta et al., 2021b) CALCS ✓ (Jayanthi et al., 2021) CALCS ✓ (Winata et al., 2021a) CALCS ✓ (Prasad et al., 2021) MRL ✓ (Chopra et al., 2021) Findings of EMNLP ✓ (Santy et al., 2021) AdaptNLP ✓ (Cheong et al., 2021) W-NUT ✓ (Pratapa and Choudhury, 2021) W-NUT ✓ ✓ ✓ ✓ (Xia et al., 2022) LREC ✓ ✓
(Mellado and Lignos, 2022) LREC ✓ (Ostapenko et al., 2022) ACL ✓
| Paper | Proceeding | Egyptian Arabic | Arabic | Turkish |
|-----------------------------------------|--------------|-------------------|----------|-----------|
| 3 | 1 | 2 | | |
| (Rijhwani et al., 2017) | ACL | ✓ | | |
| (Hamed et al., 2018) | LREC | ✓ | | |
| (Yirmibe¸soglu and Eryi ˘ git ˘ , 2018) | W-NUT | ✓ | | |
| (Sabty et al., 2020) | WANLP | ✓ | | |
| (Balabel et al., 2020) | LREC | ✓ | | |
| (Hamed et al., 2020) | LREC | ✓ | | |
Table 10: *CL Catalog in Middle Eastern-English.
| Paper | Proceeding | Hindi Marathi Konkani Bengali | Bengali | Nepali Telugu Bangla Gujarati Punjabi Tamil Malayalam Malayalam Kannada | | | | | | | | | |
|-----------------------------|--------------|---------------------------------|-----------|---------------------------------------------------------------------------|----|----|----|----|----|----|----|----|----|
| intra-word | scripts | | | | | | | | | | | | |
| 111 | 1 | 1 | 12 | 1 | 10 | 7 | 1 | 1 | 2 | 37 | 23 | 1 | 10 |
| (Sankoff, 1998) | COLING | ✓ | | | | | | | | | | | |
| (Solorio et al., 2014) | CALCS | ✓ | | | | | | | | | | | |
| (Chittaranjan et al., 2014) | CALCS | ✓ | | | | | | | | | | | |
| (Lin et al., 2014) | CALCS | ✓ | | | | | | | | | | | |
| (Jain and Bhat, 2014) | CALCS | ✓ | | | | | | | | | | | |
| (King et al., 2014) | CALCS | ✓ | | | | | | | | | | | |
| (Barman et al., 2014b) | CALCS | ✓ | | | | | | | | | | | |
| (Shrestha, 2014) | CALCS | ✓ | | | | | | | | | | | |
| (Gambäck and Das, 2016) | LREC | ✓ | ✓ | | | | | | | | | | |
| (Ghosh et al., 2016) | CALCS | ✓ | ✓ | ✓ | | | | | | | | | |
| (Banerjee et al., 2018) | COLING | ✓ | ✓ | ✓ | ✓ | | | | | | | | |
| (Gundapu and Mamidi, 2018) | PACLIC | ✓ | | | | | | | | | | | |
| (Gupta et al., 2018a) | LREC | ✓ | ✓ | | | | | | | | | | |
| (Chandu et al., 2019) | CALCS | ✓ | ✓ | ✓ | | | | | | | | | |
| (Chandu et al., 2019) | CALCS | ✓ | ✓ | ✓ | | | | | | | | | |
| (Srirangam et al., 2019) | SRW | ✓ | | | | | | | | | | | |
| (Chakravarthi, 2020) | PEOPLES | ✓ | ✓ | | | | | | | | | | |
| (Singh and Lefever, 2020) | ICON | ✓ | | | | | | | | | | | |
| (Bansal et al., 2020a) | ICON | ✓ | | | | | | | | | | | |
| (Aguilar and Solorio, 2020) | ACL | ✓ | ✓ | | | | | | | | | | |
(Wu et al., 2020) SemEval ✓
![35_image_0.png](35_image_0.png) (Baroi et al., 2020) SemEval ✓
(Gopalan and Hopkins, 2020) SemEval ✓
(Malte et al., 2020) SemEval ✓
(Zaharia et al., 2020) SemEval ✓
(Zhu et al., 2020) SemEval ✓ (Parekh et al., 2020) CoNLL ✓ (Chakravarthi et al., 2020a) SLTU & CCURL ✓ (Chakravarthi et al., 2020b) SLTU & CCURL ✓ (Gupta et al., 2020) Findings of EMNLP ✓ ✓ ✓ ✓ ✓ (Makhija et al., 2020) COLING ✓ (Aguilar et al., 2020) LREC ✓ ✓ (Chatterjere et al., 2020) LREC ✓ (Aggarwal et al., 2020) W-NUT ✓
(Srivastava and Singh, 2020b) W-NUT ✓
(Chakravarthy et al., 2020) W-NUT ✓ (Chinnappa, 2021) LTEDI ✓ ✓ (Dave et al., 2021) LTEDI ✓ ✓
(Hossain et al., 2021) LTEDI ✓ ✓
(Balouchzahi et al., 2021) LTEDI ✓ ✓
(Agarwal and Narula, 2021) SRW ✓ (Agarwal et al., 2021) NLP4ConvAI ✓ (Garg et al., 2021) Eval4NLP ✓ (Srivastava and Singh, 2021b) Eval4NLP ✓ (Tarunesh et al., 2021) ACL ✓ (Srivastava and Singh, 2021a) CALCS ✓
(Gautam et al., 2021a) CALCS ✓
(Dowlagar and Mamidi, 2021a) CALCS ✓ (Appicharla et al., 2021) CALCS ✓ (Jawahar et al., 2021) CALCS ✓ (Gautam et al., 2021b) CALCS ✓ (Gupta et al., 2021b) CALCS ✓ ✓ ✓ (Jayanthi et al., 2021) CALCS ✓ ✓
(Parikh and Solorio, 2021) CALCS ✓
(Winata et al., 2021a) CALCS ✓
(Mehnaz et al., 2021) EMNLP ✓
(Prasad et al., 2021) MRL ✓ ✓ ✓
(Gupta et al., 2021a) NAACL ✓
(Ravikiran and Annamalai, 2021) DravidianLangTech ✓ ✓
(Mahata et al., 2021) DravidianLangTech ✓ (Saumya et al., 2021) DravidianLangTech ✓ ✓ ✓ (Mandalam and Sharma, 2021) DravidianLangTech ✓ ✓ (Dowlagar and Mamidi, 2021b) DravidianLangTech ✓ ✓ (Gupta et al., 2021c) DravidianLangTech ✓ ✓ (Balouchzahi and Shashirekha, 2021) DravidianLangTech ✓ ✓
(Dowlagar and Mamidi, 2021c) DravidianLangTech ✓ ✓ ✓ (Li, 2021) DravidianLangTech ✓ ✓ ✓ (Andrew, 2021) DravidianLangTech ✓ ✓ ✓ (Vasantharajan and Thayasivam, 2021) DravidianLangTech ✓ ✓ ✓ (Huang and Bai, 2021) DravidianLangTech ✓ ✓ ✓
(Sharif et al., 2021) DravidianLangTech ✓ ✓ ✓
(Bharathi et al., 2021) DravidianLangTech ✓ ✓ ✓
(Balouchzahi et al., 2021) DravidianLangTech ✓ ✓ ✓
(Rajalakshmi et al., 2021) DravidianLangTech ✓
(Khan et al., 2021) Findings of EMNLP ✓
(Chopra et al., 2021) Findings of EMNLP ✓ ✓ ✓
(Srivastava and Singh, 2021c) INLG ✓
(Santy et al., 2021) AdaptNLP ✓
(Wadhawan and Aggarwal, 2021) WASSA ✓
(Priyanshu et al., 2021) W-NUT ✓
(Dutta, 2022) DCLRL ✓
(Biradar and Saumya, 2022) DravidianLangTech ✓
(Swaminathan et al., 2022) DravidianLangTech ✓
(SR et al., 2022) DravidianLangTech ✓ ✓ ✓ (Ravikiran and Chakravarthi, 2022) DravidianLangTech ✓ (Ravikiran et al., 2022) DravidianLangTech ✓ (Nayak and Joshi, 2022) WILDRE-6 ✓ (Gautam, 2022) WILDRE-6 ✓ ✓ ✓ (Sonu et al., 2022) WILDRE-6 ✓
| Paper | Proceeding | Vietnamese | Tagalog | Indonesian |
|---------------------------|--------------|--------------|-----------|--------------|
| 1 | 2 | 2 | | |
| (Oco and Roxas, 2012) | PACLIC | ✓ | | |
| (Stymne et al., 2020) | CALCS | ✓ | | |
| (Nguyen and Bryant, 2020) | LREC | ✓ | | |
| (Arianto and Budi, 2020) | PACLIC | ✓ | | |
| (Herrera et al., 2022) | LREC | ✓ | | |
![36_image_0.png](36_image_0.png)
Table 13: *CL Catalog in South East Asian-English.
Paper Proceeding Darija-MSA MSA-Egyptian MSA-Other Dialect Chinese-Taiwanese MSA-Levant Arabic MSA-Gulf Mixed-English
![36_image_1.png](36_image_1.png)
(Chu et al., 2007) ✓
(Solorio et al., 2014) CALCS ✓
(Chittaranjan et al., 2014) CALCS ✓
(Lin et al., 2014) CALCS ✓
(Jain and Bhat, 2014) CALCS ✓ (Elfardy et al., 2014) CALCS ✓
(King et al., 2014) CALCS ✓
(Gambäck and Das, 2016) LREC ✓
(Samih and Maier, 2016) LREC ✓
(Diab et al., 2016) LREC ✓ (Molina et al., 2016) CALCS ✓
(Samih et al., 2016a) CALCS ✓
(Jaech et al., 2016) CALCS ✓
(Samih et al., 2016b) CALCS ✓
(AlGhamdi et al., 2016) CALCS ✓ (Al-Badrashiny and Diab, 2016) CALCS ✓
(Shrestha, 2016) CALCS ✓
(Attia et al., 2018) CALCS ✓
(Janke et al., 2018) CALCS ✓
(Geetha et al., 2018) CALCS ✓
(Aguilar et al., 2018) CALCS ✓
(Wang et al., 2018) CALCS ✓ (Aguilar et al., 2020) LREC ✓
(Elmadany et al., 2021) CALCS ✓
(Winata et al., 2021a) CALCS ✓
Table 15: *CL Catalog in Two Languages in the same family.
| Paper | Proceeding | Komi-Zyrian - Russian | Arabizi-Arabic | Spanish-Catalan | Corsican-French | Frisian-Dutch |
|-----------------------------------|-------------------|-------------------------|------------------|-------------------|-------------------|-----------------|
| 1 | 1 | 1 | 1 | 3 | | |
| (Eskander et al., 2014) | CALCS | ✓ | | | | |
| (Yilmaz et al., 2016) | LREC | ✓ | | | | |
| (Braggaar and van der Goot, 2021) | AdaptNLP | ✓ | | | | |
| (Amin et al., 2022) | BioNLP | ✓ | | | | |
| (Özate¸s et al., 2022) | Findings of NAACL | ✓ | ✓ | | | |
| (Kevers, 2022) | SIGUL | ✓ | | | | |
| Paper | Proceeding | Russian-Tatar Russian-Tatar Turkish-German MSA-North African French - Arabic Dialect Dutch-Turkish French-Algerian Basque-Spanish Spanish–Wixarika intra-word intra-word 1 1 7 1 2 2 1 1 1 | |
|--------------------------------------------------------|-------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----|
| (Sankoff, 1998) | COLING | ✓ | |
| (Papalexakis et al., 2014) | CALCS | ✓ | |
| (Gambäck and Das, 2016) | LREC | ✓ | |
| (Çetinoglu ˘ , 2016) | LREC | ✓ | |
| (Çetinoglu et al. ˘ , 2016) | CALCS | ✓ | |
| (Djegdjiga et al., 2018) | LREC | ✓ | |
| (El-Haj et al., 2018) | LREC | ✓ | |
| (Çetinoglu and Çöltekin ˘ , 2019) TLT, SyntaxFest 2019 | ✓ | | |
| (Mager et al., 2019) | NAACL | ✓ | ✓ |
| (Özate¸s and Çetinoglu ˘ , 2021) | CALCS | ✓ | |
| (Taguchi et al., 2021) | CALCS | ✓ | |
| (Lounnas et al., 2021) | ICNLSP | ✓ | |
| (Aguirre et al., 2022) | LREC | ✓ | |
| (Özate¸s et al., 2022) | Findings of NAACL | ✓ | |
| (Taguchi et al., 2022) | EURALI | ✓ | ✓ |
| Paper | Proceeding | Tulu-Kannada-EN | Hindi-Bengali-EN | Greek-German-EN | Magahi-Hindi-EN | Arabic-EN-French | Darija-EN-French |
|-----------------------------|--------------|-------------------|--------------------|-------------------|-------------------|--------------------|--------------------|
| 1 | 1 | 1 | 1 | 1 | 1 | | |
| (Voss et al., 2014) | LREC | ✓ | | | | | |
| (Çetinoglu et al. ˘ , 2016) | CALCS | ✓ | | | | | |
| (Barman et al., 2016) | CALCS | ✓ | | | | | |
| (Abdul-Mageed et al., 2020) | EMNLP | ✓ | | | | | |
| (Taguchi et al., 2021) | CALCS | | | | | | |
| (Rani et al., 2022) | LREC | ✓ | | | | | |
| (Hegde et al., 2022) | ELRA | ✓ | | | | | |
Table 17: *CL Catalog in Trilingual.
![37_image_0.png](37_image_0.png)
![37_image_1.png](37_image_1.png)
| Paper | Proceeding | isiZulu | isiXhosa | Setsawa | Sesotho | Sotho |
|----------------------------------------|--------------|-----------|------------|-----------|-----------|---------|
| 6 | 4 | 3 | 3 | 1 | | |
| (Niesler and de Wet, 2008) | Odyssey | ✓ | ✓ | | | |
| (Mabokela et al., 2014) | SLTU | ✓ | | | | |
| (van der Westhuizen and Niesler, 2017) | Interspeech | ✓ | | | | |
| (Yılmaz et al., 2018) | Interspeech | ✓ | ✓ | ✓ | ✓ | |
| (Biswas et al., 2018a) | Interspeech | ✓ | | | | |
| (Biswas et al., 2018b) | SLTU | ✓ | ✓ | ✓ | ✓ | |
| (Biswas et al., 2019) | Interspeech | ✓ | ✓ | ✓ | ✓ | |
| Paper | Proceeding | Chinese | Cantonese | Korean | Japanese |
|---------------------------|--------------|-----------|-------------|----------|------------|
| 27 | 5 | 1 | 1 | | |
| (Fu and Shen, 2000) | ISCSLP | ✓ | | | |
| (Kojima and Tanaka, 2003) | Eurospeech | ✓ | | | |
| (You et al., 2004) | ISCSLP | ✓ | | | |
| (Chan et al., 2004) | ISCSLP | ✓ | | | |
| (Chan et al., 2005) | Interspeech | ✓ | | | |
| (Ren et al., 2005) | Interspeech | ✓ | | | |
| (Chan et al., 2006) | Interspeech | ✓ | | | |
| (Liang et al., 2007) | SSW | ✓ | | | |
| (White et al., 2008) | Interspeech | ✓ | | | |
| (Qian et al., 2008) | ISCSLP | ✓ | | | |
| (Gu et al., 2008) | ISCSLP | ✓ | | | |
| (Zhang and Tao, 2008) | ISCSLP | ✓ | | | |
| (Cao et al., 2009) | Interspeech | ✓ | | | |
| (Shuang et al., 2010) | Interspeech | ✓ | | | |
| (He et al., 2012) | Interspeech | ✓ | | | |
| (Liang et al., 2013) | Interspeech | ✓ | | | |
| (Li and Fung, 2013) | Interspeech | ✓ | | | |
| (Xue et al., 2019) | Interspeech | ✓ | | | |
| (Gao et al., 2019) | Interspeech | ✓ | | | |
| (Zhang et al., 2019) | Interspeech | ✓ | | | |
| (Lu et al., 2020) | Interspeech | ✓ | | | |
| (Hu et al., 2020) | Interspeech | ✓ | | | |
| (Fu et al., 2020) | Interspeech | ✓ | | | |
| (Wang et al., 2020) | Interspeech | ✓ | | | |
| (Zhang et al., 2020) | Interspeech | ✓ | | | |
| (Chandu and Black, 2020) | Interspeech | ✓ | | | |
| (Zhao et al., 2020) | Interspeech | ✓ | | | |
| (Zhang et al., 2021a) | Interspeech | ✓ | | | |
| (Shen and Guo, 2022) | Interspeech | ✓ | | | |
| (Ye et al., 2022) | Interspeech | ✓ | | | |
| (Tian et al., 2022) | Interspeech | ✓ | | | |
| (Song et al., 2022) | Interspeech | ✓ | | | |
| (Zhang et al., 2022) | Interspeech | ✓ | | | |
| (Li et al., 2022) | Interspeech | ✓ | | | |
| Paper | Proceeding | Spanish | French | German | Maltese |
|--------------------------------------------|---------------|-----------|----------|----------|-----------|
| (Pfister and Romsdorfer, 2003) | Eurospeech | ✓ | | | |
| (Romsdorfer and Pfister, 2005) | Interspeech | ✓ | | | |
| (Rosner and Farrugia, 2007) | Interspeech | ✓ | | | |
| (Piccinini and Garellek, 2014) | SpeechProsody | ✓ | | | |
| (Sitaram et al., 2016) | SSW | ✓ | | | |
| (Soto and Hirschberg, 2017) | Interspeech | ✓ | | | |
| (Ramanarayanan and Suendermann-Oeft, 2017) | Interspeech | ✓ | | | |
| (Guzmán et al., 2017) | Interspeech | ✓ | | | |
| (Bullock et al., 2018b) | Interspeech | ✓ | | | |
| (Soto et al., 2018) | Interspeech | ✓ | | | |
| (Soto and Hirschberg, 2019) | Interspeech | ✓ | | | |
| (Chandu and Black, 2020) | Interspeech | ✓ | | | |
Table 21: ISCA Catalog in European-English.
Paper Proceeding Modern Standard Arabic
(White et al., 2008) Interspeech ✓ (Ali et al., 2021) Interspeech ✓
(Chowdhury et al., 2021) Interspeech ✓
Table 22: ISCA Catalog in Middle Eastern-English.
![39_image_0.png](39_image_0.png)
Paper Proceeding Hindi Marathi Bengali Telugu Gujarati Tamil Malayalam Kannada (Sitaram et al., 2016) SSW ✓
(Ramanarayanan and Suendermann-Oeft, 2017) Interspeech ✓
(Ganji and Sinha, 2018) Interspeech ✓
(Rao et al., 2018) Interspeech ✓ (Thomas et al., 2018a) Interspeech ✓ ✓
(Srivastava and Sitaram, 2018) Interspeech ✓
(Rambabu and Gangashetty, 2018) SLTU ✓
(Taneja et al., 2019) Interspeech ✓
(Rallabandi and Black, 2019) Interspeech ✓ ✓ ✓ (Prakash et al., 2019) SSW ✓
(Sharma et al., 2020) Interspeech ✓
(Manghat et al., 2020) Interspeech ✓
(Bansal et al., 2020b) Interspeech ✓
(Chandu and Black, 2020) Interspeech ✓ (Kumar et al., 2021) Interspeech ✓ ✓ (Liu et al., 2021) Interspeech ✓ ✓ ✓
(Diwan et al., 2021) Interspeech ✓ ✓ (Klejch et al., 2021) Interspeech ✓ ✓ (Wiesner et al., 2021) Interspeech ✓ ✓
(Antony et al., 2022) Interspeech ✓ (Manghat et al., 2022) Interspeech ✓
Table 23: ISCA Catalog in South Asian-English.
Table 24: ISCA Catalog in South East Asian-English.
Table 25: ISCA Catalog in Language with Dialects.
| Paper | Proceeding | Malay |
|-----------------------|--------------|---------|
| (Yeong and Tan, 2010) | SLTU | ✓ |
| (Yeong and Tan, 2014) | Interspeech | ✓ |
| (Singh and Tan, 2018) | Interspeech | ✓ |
| Paper | Proceeding | Chinese-Taiwanese |
|---------------------|--------------|---------------------|
| (Lyu and Lyu, 2008) | Interspeech | ✓ |
Paper Proceeding Frisian-Dutch Russian-Ukrainan
| (Lyudovyk and Pylypenko, 2014) | Interspeech | ✓ |
|----------------------------------|---------------|-----|
| (Yılmaz et al., 2016) | Interspeech | ✓ |
| (Yılmaz et al., 2017b) | Interspeech | ✓ |
| (Yılmaz et al., 2017a) | Interspeech | ✓ |
| (Yılmaz et al., 2018) | Interspeech | ✓ |
| (Yilmaz et al., 2018) | SLTU | ✓ |
| (Wang et al., 2019) | Interspeech | ✓ |
| (Yılmaz et al., 2019) | Interspeech | ✓ |
Table 26: ISCA Catalog in Two Languages in the same family.
![40_image_0.png](40_image_0.png)
![40_image_1.png](40_image_1.png)
| Paper | Proceeding | Kazakh-Russian | Hindi-Tamil | French-Arabic |
|--------------------------------|--------------|------------------|---------------|-----------------|
| 1 | 1 | 4 | | |
| (Amazouz et al., 2017) | Interspeech | ✓ | | |
| (Thomas et al., 2018a) | Interspeech | ✓ | | |
| (Wottawa et al., 2018) | Interspeech | ✓ | | |
| (Chandu and Black, 2020) | Interspeech | ✓ | | |
| (Chowdhury et al., 2021) | Interspeech | ✓ | | |
| (Mussakhojayeva et al., 2022b) | Interspeech | ✓ | | |
Table 27: ISCA Catalog in Two Languages in different families.
| Paper | Proceeding | Italian-German-English | Kiswahili-Shen-English |
|--------------------------|---------------|--------------------------|--------------------------|
| 1 | 1 | | |
| (Knill et al., 2020) | Interspeech | ✓ | |
| (Otundo and Grice, 2022) | SpeechProsody | ✓ | |
Table 28: ISCA Catalog in Trilingual.
![40_image_2.png](40_image_2.png)
(Lyu et al., 2010b) Interspeech ✓
(Weiner et al., 2012b) SLTU ✓
(Adel et al., 2014c) Interspeech ✓ (Adel et al., 2014b) Interspeech ✓ (Giwa and Davel, 2014) Interspeech ✓ (Adel et al., 2014a) SLTU ✓
(Rallabandi and Black, 2017) Interspeech ✓ (Garg et al., 2018b) Interspeech ✓ (Xu et al., 2018) Interspeech ✓ (Guo et al., 2018) Interspeech ✓ (Chang et al., 2019) Interspeech ✓ (Khassanov et al., 2019) Interspeech ✓ (Lee et al., 2019b) Interspeech ✓ (Zeng et al., 2019) Interspeech ✓ (Hu et al., 2020) Interspeech ✓ (Li and Vu, 2020) Interspeech ✓ (Zhou et al., 2020) Interspeech ✓ (Qiu et al., 2020) Interspeech ✓ (Liu et al., 2021) Interspeech ✓
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations Section on page 9 A2. Did you discuss any potential risks of your work?
Not applicable. This is a survey paper. There is no potential negative risk.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1 on page 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✗ **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? No response.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
No response.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
No response.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
No response.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
zhou-etal-2023-learning | Learning to Predict Persona Information for Dialogue Personalization without Explicit Persona Description | https://aclanthology.org/2023.findings-acl.186 | Personalizing dialogue agents is important for dialogue systems to generate more specific,consistent, and engaging responses. However, most current dialogue personalization approaches rely on explicit persona descriptions during inference, which severely restricts its application. In this paper, we propose a novel approach that learns to predict persona information based on the dialogue history to personalize the dialogue agent without relying on any explicit persona descriptions during inference. Experimental results on the PersonaChat dataset show that the proposed method can improve the consistency of generated responses when conditioning on the predicted profile of the dialogue agent (i.e. {``}self persona{''}), and improve the engagingness of the generated responses when conditioning on the predicted persona of the dialogue partner (i.e. {``}their persona{''}). We also find that a trained persona prediction model can be successfully transferred to other datasets and help generate more relevant responses. | # Learning To Predict Persona Information For Dialogue Personalization Without Explicit Persona Description
Wangchunshu Zhou∗ † Qifei Li∗ **Chenle Li**
Beihang University, Beijing, China [email protected]
## Abstract
Personalizing dialogue agents is important for dialogue systems to generate more specific, consistent, and engaging responses. However, most current dialogue personalization approaches rely on explicit persona descriptions during inference, which severely restricts its application. In this paper, we propose a novel approach that learns to predict persona information based on the dialogue history to personalize the dialogue agent without relying on any explicit persona descriptions during inference. Experimental results on the PersonaChat dataset show that the proposed method can improve the consistency of generated responses when conditioning on the predicted profile of the dialogue agent (i.e. "self persona"), and improve the engagingness of the generated responses when conditioning on the predicted persona of the dialogue partner (i.e. "their persona"). We also find that a trained persona prediction model can be successfully transferred to other datasets and help generate more relevant responses.
## 1 Introduction
Recently, end-to-end dialogue response generation models (Sordoni et al., 2015; Serban et al., 2016; Bordes et al., 2017) based on recent advances of neural sequence-to-sequence learning models (Sutskever et al., 2014; Vaswani et al., 2017) have gained increasing popularity as they can generate fluent responses. However, as the dialogue agent is trained with datasets containing dialogues from many different speakers, it can not generate personalized responses for the current speaker, making the generated responses less relevant and engaging (Li et al., 2016b).
To address this problem, recent studies attempt to personalize dialogue systems by generating dialogue responses conditioning on given persona descriptions have been shown to help dialogue agents perform better (Zhang et al., 2018; Mazare´
et al., 2018). However, a major drawback of the current dialogue agent personalization approaches is that they require explicit persona descriptions in both training and inference stages, which severely limits their application in real-world scenarios because detailed persona descriptions for current speakers are not available in most scenarios. Another problem is that current dialogue personalization approaches are not interpretable and the role of additional persona information is unclear.
In this paper, we propose a novel dialogue agent personalization approach that automatically infers the speaker's persona based on the dialogue history which implicitly contains persona information. Our model generates personalized dialogue responses based on the dialogue history and the inferred speaker persona, alleviating the necessity of the persona description during inference.
Specifically, we propose two different approaches to perform persona detection. The first approach learns a "persona approximator" which takes dialogue history as the input and is trained to approximate the output representation of a persona encoder that takes explicit persona description as the input. The second approach instead addresses the persona detection problem as a sequence-tosequence learning problem and learns a "persona generator" which takes the dialogue history as the input and generates the persona description of the speaker. This approach provides a stronger supervision signal compared with the first approach and is more interpretable as the encoded persona information can be decoded to reconstruct the detected persona description.
Our proposed approach can be used to incorporate both "self-persona" which is the persona information of the dialogue agent, and "theirpersona" which is the persona information of the dialogue partner. On one hand, generating dia-
∗Equal contribution. †Corresponding author
![1_image_0.png](1_image_0.png)
logue responses conditioning on the inferred "selfpersona" can help the dialogue agent maintain a consistent persona during the conversation, thus enhancing the consistency of generated responses without the need of a pre-defined persona description for every dialogue agent. On the other hand, generating dialogue responses conditioning on the predicted persona of the dialogue partner helps the dialogue model generate more engaging responses that are relevant to its dialogue partner. The ability to automatically infer the persona information of the dialogue partner is particularly attractive because in many real-world application scenarios, the persona information of the user is hardly available before the dialogue starts. In addition, to facilitate training and tackle the problem of lacking training data, we propose to train the persona detection model with multi-task learning by sharing layers and training jointly with the dialogue context encoder in both approaches.
Our experiments on dialogue datasets with and without the persona description demonstrate the effectiveness of the proposed approach and show that a trained persona detection model can be successfully transferred to datasets without persona description.
## 2 Related Work
Preliminary study on dialogue personalization (Li et al., 2016b) attempts to use a persona-based neural conversation model to capture individual characteristics such as background information and speaking style. However, it requires the current speaker during inference to have sufficient dialogue utterances included in the training set, which is quite restricted by the cold-start problem.
More recently, Zhang et al. (2018) released the PersonaChat dataset which incorporates *persona* of two speakers represented as multiple sentences of profile description to personalize dialogue agents. They propose a profile memory network by considering the dialogue history as input and then performing attention over the persona to be combined with the dialogue history.
Mazare et al. ´ (2018) proposed to train a persona encoder and combine the encoded persona embedding with context representation by concatenation. The combined representation is then fed into the dialogue decoder to generate personalized responses. (Yavuz et al., 2019) designed the DeepCopy model, which leverages copy mechanism to incorporate persona texts and Madotto et al. (2019) propose to use meta-learning to adapt to the current speaker quickly, their approach also requires several dialogues of the speaker to perform dialogue personalization, which is different from our approach. Welleck et al. (2019) propose a dialogue natural language inference dataset and use it to measure and improve the consistency of the dialogue system. More recently, Zheng et al.
(2019) propose personalized dialogue generation with diversified traits. Song et al. (2020) introduce a multi-stage response generation stage to improve the personalization of generated responses. Wu et al. (2020) propose a variational response generator to better exploit persona information. Different from the aforementioned works, our approach does not require persona information during test time, which makes it more generally applicable.
Concurrently, Ma et al. (2021) propose to infer implict persona base on dialogue histories.
## 3 Methodology
The motivation behind the proposed approach is that we can learn to detect the profile (i.e., persona) of dialogue speakers based on the dialogue history, which is demonstrated by experimental results in Zhang et al. (2018) that we can train a model to effectively distinguish the corresponding persona from randomly sampled negative persona based on the dialogue history.
The key idea is to jointly train a persona detection model with a conventional dialogue response generation model. The persona detection model is trained with persona description to infer the persona information based on the dialogue history, which provides persona information for the dialogue model, thus alleviating the necessity of provided persona information during test time. We propose two different persona detection models.
The first model is a "persona approximator" and the second is a "persona generator". An overview of the proposed models is illustrated in Figure 1. We describe them in detail in this section, together with a multi-task learning objective which facilitates the training stage of the model.
## 3.1 Task Definition
Given a dialogue dataset D with personas, an example of the dataset can be represented as a triplet
(*h, p, r*). Specifically, h = {u1, u2*, ..., u*nh},
which represents the dialogue history with nh utterances. p = {p1, p2*, ..., p*np}, which represents a persona with np profile sentences. r represents the ground-truth response. Existing personalized dialogue models learn a dialogue response generation model G which takes h and p as input during inference and generates a personalized response G(*h, p*). Our goal is to learn a persona detection model D which enables the dialogue model to generate personalized response G(*h, D*(h)) without relying on given persona description p during test time. In this way, the persona description in the dataset is used to train the personalized dialogue agent and after training, our model should be able to generate personalized dialogue responses without relying on persona description.
## 3.2 Persona Approximator
The idea of persona approximator is that given a trained personalized dialogue model with persona encoder which takes the persona description as input and outputs the persona embedding, we can train a persona approximator which takes the dialogue history as input and learns to output a persona embedding which is similar with that encoded by the trained persona encoder. Persona embedding approximation is possible as dialogue history is shown to be sufficient for discriminating the corresponding persona (Zhang et al., 2018).
Formally, given dialogue history h and persona description p, the persona encoder E takes p as input and outputs persona embedding emb(p) = E(p). The proposed persona approximator A
takes h as input and outputs the approximated persona embedding a = A(h). The training objective of A is to optimize the embedding similarity (e.g. cosine similarity) between a and emb(p). At the same time, we minimize the cosine similarity between a and the embedding of a randomly sampled persona embedding of another user, which serves as a negative example.
We discuss several pros and cons of the proposed persona approximator here. The advantage of this approach is that it alleviates the requirement of persona description during training and can incorporate several off-the-shelf personalized dialogue models with persona encoder seamlessly. However, as the persona encoder itself is far from perfect and non-interpretable, a persona approximator which is trained to approximate the persona encoder may also be sub-optimal and even less interpretable. Another issue is that the persona approximator can only be trained after training the dialogue model and persona encoder. To alleviate this problem and train an interpretable persona detection model more effectively, we propose another persona detection model which is named "persona generator".
## 3.3 Persona Generator
As dialogue history can be used to predict the corresponding persona, which is demonstrated by Zhang et al. (2018), we hypothesize that dialogue history implicitly contains the persona of dialogue partners. Therefore, we argue that a good persona detection model should be able to reconstruct the dialogue partners' persona descriptions based on the dialogue history. Based on this insight, we propose a "persona generator" model which formulates the persona detection problem as a sequence-to-sequence learning problem and train the persona generator to recover the textual persona description of dialogue partners from the dialogue history.
Formally, the persona generator receives the dialogue history h as input and is trained to generate the persona description p, which is a sequence of tokens pi of length n. The persona generator is trained by maximizing the likelihood of the ground-truth persona descriptions:
$$\mathrm{L}_{p g}=-\sum_{i=1}^{n}\log P(p_{i}|p_{<i},h)\qquad\mathrm{(1)}$$
As illustrated in Figure 1(b), the persona generator consists of a persona encoder and a persona decoder. During training, the persona encoder takes the dialogue history as input and outputs a persona embedding that represents the persona information of either the dialogue model or its dialogue partner. The persona embedding is then concatenated with the context embedding generated by the dialogue encoder and fed into the dialogue decoder to generate the response. In addition, the persona embedding is also fed into the persona decoder to generate the textual persona description of the dialogue partner. During inference, only the encoder of the trained persona generator will be used to provide persona information for the response generation model.
While previous dialogue personalization approaches, as well as the aforementioned persona approximator, generally train the persona encoder to maximize the likelihood of gold responses with MLE and can not ensure that the persona encoder actually captures useful persona information, the persona generator is directly trained to generate persona information from dialogue history, which enforces the persona information to be successfully captured. This approach also enhances the interpretability of the dialogue personalization procedure as the persona embedding encoded from dialogue history can be decoded into persona description with the decoder of trained persona generator.
## 3.4 Multi-Task Learning
Training the proposed persona detection models can be difficult because the available persona description is limited. To alleviate this problem, we propose to adopt multi-task learning (Argyriou et al., 2006) by training the dialogue encoder jointly with the persona detection model. This is possible because both the dialogue encoder and the persona detection model take dialogue history as input and outputs a latent vector. The difference is that the dialogue context encoder is trained to provide direct information for response generation while the persona detection model is trained to predict persona description. These two tasks both require dialogue understanding and commonsense reasoning ability, which can be shared and help each other generalize better. We thus propose to adopt the multi-task learning paradigm to facilitate training. Specifically, we share the parameter of the first layer, which can be viewed as a general-purpose dialogue information encoder, between the dialogue context encoder and the persona detection model.
In addition, we also train the persona detection model to maximize the likelihood of ground-truth responses together with the dialogue model, which ensures that the persona detection model not only encodes persona information but also helps generate more fluent dialogue responses. We control the relative importance between the original MLE objective and the training objectives of the proposed persona detection models by weighting the loss of persona detection objective with a hyperparameter α which is empirically set to 0.1 in our experiments.
## 4 Experiments 4.1 Dataset
We conduct our experiments on PersonaChat dataset (Zhang et al., 2018) which is a multi-turn chit-chat conversation dataset containing conversations between human annotators who are randomly assigned a "persona". We experiment with two settings where the models are trained either with the persona description of themselves (i.e., self persona) or with the persona description of their dialogue partner (i.e., their persona). We present an example of the dataset in the Appendix.
In addition, we also expect our approach to be able to perform personalized dialogue response generation on other datasets (application scenarios) where persona description is not available even in the training set. Therefore, we also conduct experiments on the Dailydialog dataset (Li et al., 2017), which is a multi-turn dialogue dataset in a similar domain with PersonaChat but without persona description, to explore the transferability of our approach.
## 4.2 Evaluation Metrics
For automated evaluation, we employ the following metrics following previous work:
- **Perplexity** Following Zhang et al. (2018), we use perplexity (ppl) to measure the fluency of responses. Lower perplexity means better fluency.
- **Distinct** Following (Li et al., 2016a), we calculate the token ratios of distinct bigrams
(Distinct-2, abbreviated as Dst for convenience). We use this metric to measure the diversity of the responses.
- **Hits@1** Following Zhang et al. (2018),
Hit@1 measures the percentage of correct identification of a gold answer from a set of 19 distractors.
- **Consistency** We also include the Consistency score proposed by Welleck et al.
(2019). It is calculated by subtracting the percentage of generated response entails or contradicts (predicted with a pretrained dialogue NLI model) the persona information.
- **P-Cover** We also include the P-Cover metric proposed by Song et al. (2019), which evaluates how well the generated responses covers the persona information.
As automated metrics generally fail to correlates well with human evaluation (Liu et al., 2016; Zhou and Xu, 2020). We also systematically conduct human evaluation to further evaluate the proposed method. Specifically, we invite 20 human annotators that are all graduate students with good English proficiency to evaluate the quality of the model. Following Zhang et al. (2018), we ask human annotators to interact with compared models and evaluate the fluency, engagingness, and consistency of the model (scored between 1- 5). In addition, the degree of personalization of the model is measured by the ability of human annotators to detect the model's profile after the conversation, which is measured by displaying the real persona description together with a randomly sampled persona description and asking the human annotator to select which is more likely to be the profile of the model. The persona detection metric is only available in PersonaChat where test persona is available.
## 4.3 Compared Models
To explore to what extent our proposed approach is able to personalize dialogue agents, we compare two variants of our model which incorporate the persona approximator method and the persona generator method with the following baseline models:
- **DialogGPT** A Transformer-based dialogue response generation based on the GPT2 architecture and pre-trained on 147M
conversation-like exchanges extracted from Reddit comment chains. It has 345M parameters and fine-tuned on Personachat by prepending all persona descriptions at the begining of the dialogue context.
- **DialogGPT w/o persona** The same DialogGPT model fine-tuned on Personachat dataset without using persona information during training or inference.
- **DialogGPT+PE** A transformer-based dialogue model based on pre-trained DialogGPT model and fine-tuned by training a transformer-based persona encoder to provide persona embedding information.
- **PersonaCVAE** Our re-implementation of the PersonaCVAE model (Song et al., 2019) with the pre-trained DialogGPT as the base model.
- **GPMN** Generative Profile Memory Network (Zhang et al., 2018) is an RNN-based model that encodes persona as memory representations in a memory network.
Both of our models (Persona Approximator and Persona Generator) are based on pre-trained DialogGPT (Zhang et al., 2020) and fine-tuned on Personachat. The model has the same architecture with GPT-2 and has 345M parameters.
Fine-tuning hyperparameters are kept the same with Zhang et al. (2020). To make the model compatible with the encoder-decoder architecture described in the method section, we consider the hidden state of the last token in the transformer model as the context embedding. For the persona encoder, we share all layers except the last layer in the multi-task setting. The RNN-based baselines are trained from scratch and we used their original
Method **Self Persona Their Persona**
ppl Dst Hits@1 Cons P-Cover ppl Dst Hits@1 Cons P-Cover
GPMN 36.11 13.5 54.9 0.15 .018 36.45 14.8 51.4 0.10 .021
DialogGPT 13.62 23.1 83.2 0.35 .052 14.03 23.9 78.9 0.27 .059
DialogGPT+PE 13.57 24.8 **84.5 0.38** .055 13.90 25.1 **79.3** 0.28 **.063**
PersonaCVAE 14.83 **25.7**∗ 84.0 0.37 .061 14.88 25.6 78.3 0.24 .066
DialogGPT w/o persona 15.49 19.6 72.9 0.13 .012 - - - -
Persona Approximator 14.42 24.2 83.3 0.33 .038 14.63 24.9 78.4 0.24 .040 Persona Generator 13.39∗ 25.2 84.2 **0.38** .049 **13.82 25.8** 79.1 **0.29** .057
architecture and training methods in the original paper.
## 4.4 Experimental Results
Results on PersonaChat We first present the experimental results on the PersonaChat dataset where persona description is available during training. In this scenario, the persona detection model is trained in the same domain as the response generation model.
The results of automated evaluation metrics are shown in Table 1. First, we can see that models explicitly incorporate textual persona descriptions, including the dialogue model that incorporate a persona encoder (i.e., **DialogGPT+PE**) or prepend persona descriptions (i.e., **DialogGPT**), outperform the baseline model that does not exploit persona information by a relatively large margin in all automated metrics. Also, dialogue models with a pre-trained Transformer model (i.e., DialogGPT)
substantially outperform RNN-based models.
As for our proposed approaches, we find that both persona detection models substantially improve the performance upon the baseline with the pre-trained DialogGPT model without using persona information. When comparing the proposed two persona detection models, it is clear that the persona generator method performs much better than the persona approximator. Moreover, we find that it outperforms the competitive **DialogGPT**
and **DialogGPT+PE** model on several automated metrics despite not using any persona information at test time. We hypothesis that it is because the persona generator is trained with the reconstruction loss, which is a useful supervision signal that is complementary to the MLE objective.
In contrast, the persona encoder is trained jointly with the dialogue model by simply maximizing the likelihood of gold responses and may not actually capture the persona information. Our approach performs slightly worse than the best model using persona information in some metrics. However, the difference is very marginal even though our model does not take the persona information as input.
When comparing the performance of our proposed approaches trained with either "self persona" and "their persona", we can see that training the persona detection to predict the persona information of the dialogue system itself helps the model to maintain a consistent persona, thus improving the consistency of generated responses. In contrast, training the persona detection model to predict the persona of its dialogue partner helps the model to generate more diverse responses.
Human evaluation results are shown in Table 2.
We can see that dialogue models which explicitly incorporate textual persona descriptions significantly improves all human evaluation metrics.
As for our proposed approaches, we find that both proposed persona detection models can improve the consistency, engagingness, and persona detection accuracy upon the baseline seq2seq model without sacrificing the fluency of generated responses. The persona generator performs better than the persona approximator, which is consistent with the results in the automated evaluation. In addition, the persona generator model performs comparably and even better when compared with the competitive **DialogGPT** baseline. This demonstrates that our proposed method can effectively personalize dialogue agents without relying on pre-defined persona descriptions at test time.
Similarly, we find that when conditioning on
"self persona" as incorporating the persona description helps dialogue agents maintain a consis-
| Model | Persona | Fluency | Engagingness | Consistency | Persona Detection |
|-----------------------|-----------|-----------|----------------|---------------|---------------------|
| DialogGPT | self | 3.56 | 3.57 | 3.63 | 0.88 |
| DialogGPT | their | 3.49 | 3.59 | 3.47 | 0.80 |
| DialogGPT | both | 3.63 | 3.69 | 3.60 | 0.88 |
| DialogGPT+PE | self | 3.62 | 3.49 | 3.61 | 0.87 |
| DialogGPT+PE | their | 3.57 | 3.51 | 3.52 | 0.82 |
| DialogGPT+PE | both | 3.69 | 3.65 | 3.68∗ | 0.90 |
| PersonaCVAE | self | 3.51 | 3.55 | 3.53 | 0.85 |
| PersonaCVAE | their | 3.50 | 3.52 | 3.42 | 0.77 |
| PersonaCVAE | both | 3.57 | 3.59 | 3.51 | 0.83 |
| DialogGPT w/o persona | − | 3.39 | 3.28 | 3.30 | 0.69 |
| Persona Approximator | self | 3.45 | 3.40 | 3.35 | 0.78 |
| Persona Approximator | their | 3.36 | 3.43 | 3.27 | 0.73 |
| Persona Generator | self | 3.67 | 3.61 | 3.58 | 0.89 |
| Persona Generator | their | 3.61 | 3.69 | 3.52 | 0.84 |
| Persona Generator | both | 3.72∗ | 3.74∗ | 3.63 | 0.90 |
| Model | Per | Fluen | Engag | Consis |
|-----------------------|-------|---------|---------|----------|
| DialogGPT w/o persona | − | 3.42 | 3.41 | 3.48 |
| w/ Persona Generator | self | 3.53 | 3.52 | 3.58 |
| w/ Persona Generator | their | 3.48 | 3.57 | 3.56 |
tent profile throughout the conversation. Again, when conditioned on "their persona", the dialogue agent learns to predict the profile of its dialogue partner, which helps generate more engaging and personalized responses. Based on this motivation, we also conduct experiment with both "their" and
"self" persona at the same time. We find this make significant future improvement and enabling dialogue agent to generate dialogue responses that are both engaging and consistent.
On the transferability of persona detection models As persona descriptions are not available in most scenarios and datasets, we aim to enable dialogue agent personalization for dialogue models trained in datasets where no persona description is available with a persona detection model pretrained on PersonaChat. To test the transferability of trained persona detection models, we combine persona detection models pretrained on the PersonaChat dataset with dialogue systems trained on the Dailydialog dataset. The pretrained persona detection models are fine-tuned jointly with the pretrained dialogue model by maximizing the likelihood of ground-truth responses.
The results are shown in Table 3. We can see that transferring pre-trained persona detection models in the target dialogue domain is able to improve the performance of dialogue models. Specifically, predicting self-persona improves the consistency of the dialogue agent while detecting the persona of the dialogue partner improves the engagingness of generated responses. The experimental result also confirms the effectiveness of the proposed persona generator model and the persona reconstruction loss.
## 4.5 Ablation Study
To further understand the proposed models, we conduct an ablation study that focuses on: 1) the effectiveness of the multi-task learning architecture and the multi-task objective of persona detection models, and 2) the effect of available dialogue history length on the performance of persona detection models. We employ the dialogue response generation model with persona generator with self persona as the full model and compare it with the following ablated variants: (1) **first half:**
The variant where only the first half of conversations are used as the test set, which makes the input dialogue history for persona generator shorter. (2)
second half: The counterpart of **first half** where the available dialogue histories for persona generator are longer. (3) **w/o shared layers:** The vari-
Model perplexity Dst Hits@1 Cons
DialogGPT w/o Persona 15.49 19.6 72.9 0.13
- first half 18.31 15.7 66.5 0.05
- second half 13.24 23.8 79.3 0.19
w/ Persona Generator 13.39 25.2 84.2 0.38
- first half 16.24 23.5 79.8 0.32
- second half 12.01 26.9 88.6 0.44 - w/o shared layers 13.92 24.9 83.5 0.35 - w/o joint training 14.05 24.7 83.8 0.36
ant where the persona generator does not share its first layer with the encoder of the dialogue model.
(4) **w/o joint training:** The variant where the persona generator is exclusively trained with the reconstruction loss without jointly training with the MLE objective.
The results of the ablation study are shown in Table 4. We can see that both sharing layers and joint training improve the performance of the persona detection model, which demonstrates the effectiveness of multi-task learning in our task. As for the influence of the length of the dialogue history, we find that the proposed persona generator model performs better when giving longer dialogue history (i.e., the second half of the conversation), which is demonstrated by a larger relative improvement compared with the sequenceto-sequence baseline given the same dialogue history. This is reasonable as longer dialogue history may provide richer information and help detect persona better. It also suggests that our approaches may be more effective for dialogue agents that aim to conduct relatively long dialogues with humans.
This problem is similar to the well-known coldstart problem in the field of recommend systems.
However, this does not suggest that our proposed approach is not useful for most application scenarios where the dialogue agent must start the dialogue from scratch. In contrast, our model will continually track the persona information of both the dialogue agent itself and the dialogue partner, thus maintaining a consistent persona throughout the progress of the dialogue and gradually improve the engagingness of generated responses with the dialogue going on. In addition, the ability to automatically infer the persona information of the dialogue partner is also beneficial for real-world applications, where although we can pre-define a persona for the dialogue agent, the users' persona is not always available.
| No persona | I don't know what you could not do ? |
|----------------------|-------------------------------------------|
| PE w/ self | I am going to the club now. |
| PE w/ their | Do you want to play frisbee or something? |
| PG w/ self | okay I am going to make a cake. |
| - Generated Persona: | ... I craving eating cake... |
| PG w/ their | I prefer that let's watch tv together. |
| - Generated Persona: | ... I like TV show... |
## 4.6 Qualitative Analysis
To better understand the proposed method intuitively, we conduct a case study by feeding different variants of the dialogue model with the dialogue history presented in the Appendix and generate different continuations of the conversation.
The next utterances generated by different model variants are shown in Table 5. We can see that the dialogue model without persona information generates an irrelevant response that is not engaging.
In contrast, both the persona encoder which takes the predefined persona description and the persona generator which infers the persona from dialogue history enables the dialogue agent to generate consistent and relevant responses, which are likely to be more engaging for the dialogue partner. In addition, we present the outputs of the decoder in the persona generator, which demonstrates that the proposed approach is more interpretable.
## 5 Conclusion
In this paper, we propose a dialogue personalization approach that automatically infers the current speakers' persona based on the dialogue history, which enables neural dialogue systems to generate personalized dialogue responses without using persona description at test time. Our experiments on the PersonaChat dataset show that the proposed models can improve the model's consistency and engagingness when conditioning on the inferred persona information of the dialogue agent itself or the dialogue partner. We also conduct experiments on the Dailydialog dataset where persona description is not available and find that pre-trained persona detection models can be successfully transferred to other datasets without annotated persona descriptions. This confirms the potential of our approach for dialogue personalization in domains where persona descriptions are not available or expensive to collect. Nevertheless, our method still requires annotated persona information during training, which can be hard to get for specific domains. We leave this for future work.
## Limitations
One limitation of this work is that while our approach alleviates the requirement of persona description during inference, it still requires persona description for the training corpus. A viable solution is to transfer the pre-trained persona detection models to other datasets without persona description in train set. However, the success of this approach may depend on the degree of similarity between the target dataset and the PersonaChat dataset.
## Ethics Considerations
Our proposed method can generate personalized dialogue responses to users and improve the engaginess of the dialogue systems. It faces several common ethics concerns that a neural dialogue system may generate unexpected responses that make human users uncomfortable. However, it is common for most neural dialogue systems.
Another potential risk is that the persona generator may generate unexpected persona information that makes user uncomfortable. This issue could be addressed by adding constraints on the generated persona information.
## References
Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. 2006. Multi-task feature learning.
In Advances in Neural Information Processing Systems 19, Proceedings of the Twentieth Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 47, 2006, pages 41–48. MIT Press.
Antoine Bordes, Y-Lan Boureau, and Jason Weston.
2017. Learning end-to-end goal-oriented dialog. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 2426, 2017, Conference Track Proceedings. OpenReview.net.
Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119, San Diego, California. Association for Computational Linguistics.
Jiwei Li, Michel Galley, Chris Brockett, Georgios Spithourakis, Jianfeng Gao, and Bill Dolan. 2016b. A
persona-based neural conversation model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 994–1003, Berlin, Germany. Association for Computational Linguistics.
Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. DailyDialog: A manually labelled multi-turn dialogue dataset. In *Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1:*
Long Papers), pages 986–995, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016.
How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In *Proceedings of* the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122–2132, Austin, Texas. Association for Computational Linguistics.
Zhengyi Ma, Zhicheng Dou, Yutao Zhu, Hanxun Zhong, and Ji-Rong Wen. 2021. One chatbot per person: Creating personalized chatbots based on implicit user profiles. In *SIGIR*, pages 555–564. ACM.
Andrea Madotto, Zhaojiang Lin, Chien-Sheng Wu, and Pascale Fung. 2019. Personalizing dialogue agents via meta-learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5454–5459, Florence, Italy. Association for Computational Linguistics.
Pierre-Emmanuel Mazare, Samuel Humeau, Martin ´
Raison, and Antoine Bordes. 2018. Training millions of personalized dialogue agents. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2775–2779, Brussels, Belgium. Association for Computational Linguistics.
Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2016.
Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 3776–3784. AAAI Press.
Haoyu Song, Yan Wang, Wei-Nan Zhang, Xiaojiang Liu, and Ting Liu. 2020. Generate, delete and rewrite: A three-stage framework for improving persona consistency of dialogue generation. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 5821–
5831, Online. Association for Computational Linguistics.
Haoyu Song, Weinan Zhang, Yiming Cui, Dong Wang, and Ting Liu. 2019. Exploiting persona information for diverse generation of conversational responses.
In *IJCAI*, pages 5190–5196. ijcai.org.
Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015.
A neural network approach to context-sensitive generation of conversational responses. In *Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 196–205, Denver, Colorado. Association for Computational Linguistics.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014.
Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 813 2014, Montreal, Quebec, Canada, pages 3104–
3112.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 49, 2017, Long Beach, CA, USA, pages 5998–6008.
Sean Welleck, Jason Weston, Arthur Szlam, and Kyunghyun Cho. 2019. Dialogue natural language inference. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 3731–3741, Florence, Italy. Association for Computational Linguistics.
Bowen Wu, MengYuan Li, Zongsheng Wang, Yifu Chen, Derek F. Wong, Qihang Feng, Junhong Huang, and Baoxun Wang. 2020. Guiding variational response generator to exploit persona. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 53–65, Online. Association for Computational Linguistics.
Semih Yavuz, Abhinav Rastogi, Guan-Lin Chao, and Dilek Hakkani-Tur. 2019. DeepCopy: Grounded response generation with hierarchical pointer networks. In *Proceedings of the 20th Annual SIGdial* Meeting on Discourse and Dialogue, pages 122–
132, Stockholm, Sweden. Association for Computational Linguistics.
Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In *Proceedings of the 56th Annual Meeting of the Association for Computational* Linguistics (Volume 1: Long Papers), pages 2204–
2213, Melbourne, Australia. Association for Computational Linguistics.
Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020. Dialogpt: Largescale generative pre-training for conversational response generation. In *ACL, system demonstration*.
Yinhe Zheng, Guanyi Chen, Minlie Huang, Song Liu, and Xuan Zhu. 2019. Personalized dialogue generation with diversified traits. *arXiv preprint* arXiv:1901.09672.
Wangchunshu Zhou and Ke Xu. 2020. Learning to compare for better training and evaluation of open domain natural language generation models. In AAAI, pages 9717–9724.
## Limitations
One limitation of this work is that while our approach alleviates the requirement of persona description during inference, it still requires persona description for the training corpus. A viable solution is to transfer the pre-trained persona detection models to other datasets without persona description in train set. However, the success of this approach may depend on the degree of similarity between the target dataset and the PersonaChat dataset. Our transfer experiments on the DialyDialog dataset and the additional Reddit dataset confirms the effectiveness of transferring a pre-trained persona detection model.
Another limitation of this work is that adding the persona detection module will increases the model size and slow down the inference. The size issue can be reduced by sharing parameters between the persona detection module and the dialogue model. The inference speed issue only results in approximately 1.03× inference latency compared to the original model because the majority inference time is on decoding which is less affected by the persona detection module.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Left blank.
A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Left blank. |
barale-etal-2023-automated | Automated Refugee Case Analysis: A {NLP} Pipeline for Supporting Legal Practitioners | https://aclanthology.org/2023.findings-acl.187 | In this paper, we introduce an end-to-end pipeline for retrieving, processing, and extracting targeted information from legal cases. We investigate an under-studied legal domain with a case study on refugee law Canada. Searching case law for past similar cases is a key part of legal work for both lawyers and judges, the potential end-users of our prototype. While traditional named-entity recognition labels such as dates are meaningful information in law, we propose to extend existing models and retrieve a total of 19 categories of items from refugee cases. After creating a novel data set of cases, we perform information extraction based on state-of-the-art neural named-entity recognition (NER). We test different architectures including two transformer models, using contextual and non-contextual embeddings, and compare general purpose versus domain-specific pre-training. The results demonstrate that models pre-trained on legal data perform best despite their smaller size, suggesting that domain-matching had a larger effect than network architecture. We achieve a F1- score superior to 90{\%} on five of the targeted categories and superior to 80{\%} on an additional 4 categories. | # Automated Refugee Case Analysis: An Nlp Pipeline For Supporting Legal Practitioners
Claire Barale and **Michael Rovatsos**
School of Informatics The University of Edinburgh
{claire.barale,michael.rovatsos}@ed.ac.uk Nehal Bhuta School of Law The University of Edinburgh [email protected]
## Abstract
In this paper, we introduce an end-to-end pipeline for retrieving, processing, and extracting targeted information from legal cases. We investigate an under-studied legal domain with a case study on refugee law in Canada.
Searching case law for past similar cases is a key part of legal work for both lawyers and judges, the potential end-users of our prototype.
While traditional named-entity recognition labels such as dates provide meaningful information in legal work, we propose to extend existing models and retrieve a total of 19 useful categories of items from refugee cases.
After creating a novel data set of cases, we perform information extraction based on state-ofthe-art neural named-entity recognition (NER).
We test different architectures including two transformer models, using contextual and noncontextual embeddings, and compare general purpose versus domain-specific pre-training.
The results demonstrate that models pre-trained on legal data perform best despite their smaller size, suggesting that domain matching had a larger effect than network architecture. We achieve a F1 score above 90% on five of the targeted categories and over 80% on four further categories.
## 1 Introduction
The retrieval of similar cases and their analysis is a task at the core of legal work. Legal search tools are widely used by lawyers and counsels to write applications and by judges to inform their decision-making process. However, this task poses a series of challenges to legal professionals: (i) it is an expensive and time-consuming task that accounts for 30% of the legal work on average
(Poje, 2014), (ii) databases can be very large, with legal search tools gathering billions of documents, and (iii) selection of cases can be imprecise and may return many irrelevant cases, which creates the need to read more cases than necessary.
In Canada, from the date of the first claim to the final decision outcome, a claimant can expect to wait 24 months for refugee claims and 12 months for refugee appeals1. Long processing times are due to a significant backlog and to the amount of work required from counsels that help claimants file their claims, and who are frequently legal aid or NGO employees.
We find that these challenges are well-suited for NLP-based solutions and investigate the feasibility of automating all steps of the legal search for past similar cases. We construct an end-to-end pipeline that aims at facilitating this multi-step process, thereby supporting and speeding up the work of both lawyers and judges in *Refugee Status Determination (RSD)*. We provide a level of granularity and precision that goes beyond that of existing legal search tools such as Westlaw, *LexisNexis*, or *Refworld*2(Custis et al., 2019), which operate at the document level. *Refworld* is an online database maintained by the United Nations which helps retrieve relevant precedent cases and legislation.
However, the level of precision with which one can search for cases is limited. Moreover, our pipeline guarantees increased transparency, enabling end users to choose the criteria of legal search they find most relevant to their task among the proposed categories that act as filters for a search.
Specific literature studying refugee law and AI
is sparse. Attention has been given to the classification and prediction of asylum cases in the United States (Chen and Eagel, 2017; Dunn et al., 2017).
On Canadian data, research has been conducted to analyze the disparities in refugee decisions using statistical analysis (Rehaag, 2007, 2019; Cameron et al., 2021). However, those studies rely mostly on tabular data. We propose to work directly on
![1_image_0.png](1_image_0.png)
the text of refugee cases. To the best of our knowledge, no previous work implements an end-to-end pipeline and state-of-the-art NLP methods in the field of refugee law.
We provide an NLP-based end-to-end prototype for automating refugee case analysis built on historical (already decided) cases, which are currently available only in unstructured or semi-structured formats, and which represent the input data to our pipeline. The end goal of our approach is to add structure to the database of cases by extracting targeted information described in table 1 from the case documents, and providing the results in a structured format to significantly enrich the search options for cases. Thereby, the input data set of cases is described in a structured manner based on our extracted categories of items, adding extensive capabilities for legal search.
The pipeline described in figure 1 begins by searching and downloading cases (information retrieval, paragraph 4.1), pre-processing them (paragraph 4.2), extracting items previously identified as relevant by legal professionals. It then outputs a structured, precise database of refugee cases (information extraction, paragraph 4.3). In the information extraction step, we test different training and pre-training architectures in order to determine the best methods to apply to the refugee case documents. We construct each step with the aim of minimizing the need for human effort in creating labeled training data, aiming to achieve the best possible accuracy on each extracted information item.
We discuss technical choices and methodologies in section 5. Finally, we evaluate the information extraction step on precision, recall, and F1 score, and present detailed results in section 6.
We demonstrate that annotation can be sped up by the use of a terminology base while incorporating domain knowledge and semi-automated annotation tools. We find that domain matching is important for training to achieve the highest possible scores. We reach satisfactory token classification results on a majority of our chosen categories. The contributions of this paper are as follows:
1. First, we retrieve 59,112 historic decision documents (dated from 1996 to 2022) from online services of the Canadian Legal Information Institute (CanLII) based on context-based indexing and metadata to curate a collection of federal *Refugee Status Determination* (RSD)
cases. Our automated retrieval process is exhaustive and comprises all available cases. It is superior to human-based manual retrieval in terms of error proneness and processing time.
2. Second, we proposed an information extraction pipeline that involves pre-processing, construction of a terminology base, labeling data, and using word vectors and NER models to augment the data with structured information.
We fine-tune state-of-the-art neural network models to the corpus of our retrieved cases by training on newly created gold-standard text annotations specific to our defined categories of interest.
3. Lastly, we extract the targeted category items from the retrieved cases and create a structured database from our results. We introduce structure to the world of unstructured legal RSD cases and thereby increase the transparency of stated legal grounds, judge reasoning, and decision outcomes across all processed cases.
## 2 Background And Motivation
At the core of the ongoing refugee crisis is the legal and administrative procedure of Refugee Status Determination (RSD), which can be summarized in three sub-procedures: (i) a formal claim for refugee protection by a claimant who is commonly supported by a lawyer, (ii) the decision-making
| Case cover Main text |
|------------------------|
Case **cover**
![2_image_0.png](2_image_0.png)
![2_image_1.png](2_image_1.png)
![2_image_2.png](2_image_2.png)
Label A Description **Example**
process of a panel of judges and (iii) the final decision outcome with written justification for granting refugee protection or not.
Refugee protection decisions are high-stakes procedures that target 4.6 million asylum seekers worldwide as of mid-2022. In Canada alone, 48,014 new claims and 10,055 appeals were filed in 20213. As stated in the introduction, processing times of refugee claims vary and range from a few months to several years. One of the reasons for the long processing times is the effort required for similar cases search. Case research is an essential part of the counsel's work in preparation for a new claim file. This search involves retrieving citations and references to previous, ideally successful RSD
cases that exhibit similarities to the case in preparation such as the country of origin or the reason for the claim. Equally, judges rely on researching previous cases to justify their reasoning and ensure coherency across rulings.
While each case exhibits individual characteristics and details, legal practitioners typically search for similarities based on the constitution of the panel, the country of origin and the characteristics of the claimant, the year the claim was made in relation to a particular geopolitical situation, the legal procedures involved, the grounds for the decision, the legislation, as well as other cases or reports that are cited.
Our work aims to support legal practitioners, both lawyers preparing the application file and judges having to reach a decision, by automating 3https://irb.gc.ca/en/statistics/Pages/index.
aspx the time-consuming search for similar legal cases referred to here as *refugee case analysis*. As a case study, we work on first instance and appeal decisions made by the *Immigration and Refugee* Board of Canada. A common approach used by legal practitioners is to manually search and filter past RSD cases on online services such as CanLII
or Refworld by elementary *document text* search, which is a keyword-based *find exact* search, or by date.
Our defined categories of interest are described in table 1. The labels have been defined and decided upon with the help of three experienced refugee lawyers. From the interviews, we curated a list of keywords, grounds, and legal elements determining a decision. Moreover, we analyzed a sample of 50 Canadian refugee cases recommended by the interviewees to be representative over years of the claim and tribunals.
We use the pre-defined labels provided by spaCy's state-of-the-art EntityRecognizer class including DATE, PERSON, GPE, ORG, NORP, LAW and extend this list with new additional labels that we created and trained from scratch.
Each case document comprises a *case cover* page (the first page) and the *main text* which differ in the type and format of their information content.
Therefore, we chose separate labels for the *case* cover. The *case cover* contains general information about the case (cf. example in Appendix A).
While the main text is presented as a full-body text, the *case cover* page consists of semi-structured information which could that could be roughly described as tabular, except it does not follow a clear layout. Based on the *case cover* page we aim to extract meta-information about each claim using four labels (table 1).
For the *main text*, we chose 15 labels that represent characteristics reflective of similarity among different cases. To link cases to each other and later facilitate similar case retrieval, we also extract three categories of citations i.e. LAW for legal texts, LAW_CASES for other mentioned past cases, and LAW_REPORT for external sources of information such as country reports. Additionally, the CREDIBILITY label retrieves mentions made of credibility concerns in the claimant's allegations, which tends to be among the deciding factors for the success of a claim and is hence essential to understand the reasoning that led to the legal determination at hand.
A successful implementation of a system capable of extracting this information reliably would provide several benefits to legal practitioners: (i) facilitating, speeding up, and focusing legal search, (ii)
reducing the time spent on a claim and on providing relevant references, potentially resulting in a file that has more chances of being accepted, and (iii)
for judges, to ensure consistent outcomes across time and different jurisdictions or claimant populations.
## 3 Research Approach
Our approach is guided by investigating the hypothesis that NER methods can be used to extract structured information from legal cases, i.e. we want to determine whether state-of-the-art methods can be used to improve the transparency and processing of refugee cases. Consistency of the decision-making process and thorough assessment of legal procedure steps are crucial aspects ensuring that legal decision outcomes are transparent, high-quality, and well-informed. Consequently, key research questions we need to address include:
Training data requirements How many labeled samples are needed? Can keyword-matching methods or terminology databases be leveraged to reduce the need for human annotation?
Extraction What methods are best suited to identify and extract the target information from legal cases?
Replicability To what extent might our methods generalize to other legal data sets (other legal fields or other jurisdictions)?
Pre-training How important is the pre-training step? How important is *domain matching*: do domain-specific pre-training perform better than general-purpose embeddings, despite their smaller sizes?
Architectures How important is the architecture applied to the information extraction tasks, in terms of F1 score, precision, and recall?
## 4 Pipeline Details And Experimental Setup
In this section, we detail each step of the pipeline as presented in figure 1 and how it compares to the current legal search process. Subsequently, in 5 we describe the training data creation process, and the network architectures tested. The code for our implementation and experiments can be found at https://github.com/clairebarale/ refugee_cases_ner.
## 4.1 Information Retrieval: Case Search
We retrieve 59,112 cases processed by the Immigration and Refugee Board of Canada that range from 1996 to 2022. The case documents have been collected from CanLII in two formats, PDF and HTML. The CanLII web interface serves queries through their web API accessible at the endpoint with URL https://www.canlii.org/ en/search/ajaxSearch.do. For meaningful queries, the web API exposes a number of HTTPGET request parameters and corresponding values which are to be appended to the URL but preceded by a single question mark and then concatenated by a single ampersand each. For instance, in the parameter=value pairs in the following example, the keyword search exactly matches the text REFUGEE, and we retrieve the second page of a paginated list of decisions from March 2004 sorted by descending date, which returns a JSON
object (Full query: https://www.canlii.org/
en/search/ajaxSearch.do?type=decision& ccId=cisr&text=EXACT(REFUGEE)&startDate= 2004-03-01&endDate=2004-03-31&sort=
decisionDateDesc&page=2). Note that CanLII
applies pagination to the search results in order to limit the size of returning objects per request.
## 4.2 Preprocessing
We obtain two sets: (1) a set of *case covers* that consists of semi-structured data and displays metainformation and (2) a set of *main text* that contains the body of each case, in full text.
Generally, the CanLII database renders the decision case documents as HTML pages for display in modern web browsers but also provides PDF
files. We use PyPDF24for parsing the contents of PDF files as text. To parse the contents of HTML
files as text input to our NLP pipeline, we use the BeautifulSoup5 python library.
The choice between PDF and HTML format is based on multiple reasons, as each format has its own advantages and disadvantages. First, depending on the text format PyPDF2 occasionally adds excessive white space between letters of the same word. Also, the PDF document is parsed line-byline from left to right, top to bottom. Therefore, multi-column text is often mistakenly concatenated as a single line of text. However, the available PDF documents are separated by pages and PyPDF2 provides functionality to select individual document pages which we used to select the case cover page that provides case details for each document.
HTML as markup language provides exact anchors with HTML tags, which, in most cases, are denoted by opening and closing tag parts such as <p> and
</p> for enclosing a paragraph.
When processing the *main text* of each case document, we parse the HTML files using BeautilfulSoup, remove the case cover to keep only the full-body text, and tokenize the text by sentence using the NLTK6. Our preference to tokenize by sentence facilitates the annotation process while keeping the majority of the context. We also experimented with splitting by paragraph which yielded relatively large chunks of text, whereas splitting by phrase did not keep enough context during the annotation process. To gather results, we create a pandas Dataframe, create a sentence per row, and save it to a CSV file.
For the *case cover*, we exploit PyPDF2's functionality to extract the text of the first page from the PDF format. In contrast to this, when using BeautifulSoup we could not rely on HTML tags
(neither through generic tag selection nor by CSS
identifier (ID) or CSS class), to retrieve the first page of the document robustly. After extracting this page for each case, we parse the PDF files as plain text. Combined with the metadata from the document retrieval provided by CanLII, we derive the case identifier number and assign it to the corresponding PDF file. As a next step and similar to the procedure for the main body of each document, we create a pandas Dataframe from the extracted data and save it as a CSV file with case identifier numbers and their associated case cover.
For both file formats, we perform basic text cleaning, converting letters to lowercase, and removing excessive white space and random newlines.
## 4.3 Information Extraction
The goal of our pipeline is not only to retrieve the cases but to structure them with a high level of precision and specificity, and to output a tabular file where each column stores specific information of each of our target types for each case. Using such structured information, legal practitioners can find similar cases with ease instead of reading carefully through several cases before finding a few cases similar to their own cases by selecting attributes in one or several of the extracted categories.
We chose to use neural network approaches to perform the information extraction step. After some experimentation, approaches such as simple matching and regular expressions search proved too narrow and unsuitable for our data. Given the diversity of formulations and layouts, phrasing that captures context is quite important. Similarly, we discard unsupervised approaches based on the similarity of the text at the document or paragraph level because we favor transparency to the end user in order to enable leveraging legal practitioners' knowledge and expertise.
Extraction of target information can be done using sequence-labeling classification. NER methods are well-suited to the task of extracting keywords and short phrases from a text. To this end, we create a training set of annotated samples as explained in the next section 5.1. Labeled sentences are collected in jsonlines format, which we convert to the binary spaCy-required format and use as training and validation data for our NER pipeline.
## 5 Methodology 5.1 Training Data Creation
We choose to use a machine learning component for text similarity to reinforce the consistency of the annotations. In line with our previous step of pre-processing, we annotate the case cover section and the main text separately. While we decided to annotate the whole page of the case cover because the semi-structured nature of the text makes tokenization approximate, we perform annotation of the main text as a sentence-based task, preserving some context. We use the Prodigy annotation tool7, which provides semi-automatic annotations and active learning in order to speed up and improve the manual labeling work in terms of consistency and accuracy of annotation. We convert the two pandas Dataframes containing the preprocessed text to jsonlines which is the preferred format for Prodigy. We annotate 346 case covers and 2,436 sentences for the main text, which are chosen from the corpus at random.
To collect annotated samples on traditional NER labels (DATE, ORG, GPE, PERSON, NORP,
LAW), we use suggestions from general purpose pre-trained embeddings8. For the remaining labels (CLAIMANT_INFO, CLAIMANT_EVENT,
PROCEDURE, DOC_EVIDENCE, EXPLANATION,
DETERMINATION, CREDIBILITY), and still with the aim of improving consistency of annotation, we create a terminology base (as shown on pipeline description figure 1). At annotation time, patterns are matched with shown sentences. The human annotator only corrects them, creating a gold standard set of sentences and considerably speeding up the labeling task.
To create a terminology base for each target category, we first extract keywords describing cases from CanLII metadata retrieved during the information retrieval step. To this initial list of tokens, we add a list of tokens that were manually flagged in cases by legal professionals. We delete duplicates and some irrelevant or too general words such as
"claimant" or "refugee", and manually assign the selected keywords to the appropriate label to obtain a list of tokens and short phrases per label. In order to extend our terminology base, we use the sense2vec model9(based on word2vec (Mikolov et al., 2013)) to generate similar words and phrases.
We select every word that is at least 70% similar to the original keyword in terms of cosine similarity and obtain a JSON file that contains 1,001 collected patterns. This method allows us to create a larger number of labeled data compared to fully manual annotation in the same amount of time.
Table 1 describes the breakdown of labels in 7Prodigy: https://prodi.gy/docs 8https://spacy.io/models/en 9Sense2vec:
https://github.com/explosion/sense2vec our annotated data. There is a clear imbalance across categories of items, with some labels being infrequent (NORP, DETERMINATION, PERSON,
LAW_REPORT, LAW_CASE). Some labels are present very few times per case: DETERMINATION occurs only once per case, PERSON does not occur frequently since most cases are anonymized.
## 5.2 **Experimental Conditions And Architectures**
Train, dev, test split We trained the NER models using 80% of the labeled data as our training set (276 case covers and 1,951 sentences for the main text, respectively), 10% of the labeled data as our development set (35 case covers and 244 sentences) and 10% of the labeled data as the test set for evaluation (35 case covers and 244 sentences).
Pre-training static and contextual embeddings As the first layer of the NER network, we add pre-trained character-level embeddings in order to isolate the effect of pre-training from the effect of the architecture and improve the F1 score on target items. We fine-tune GloVe vectors ((Pennington et al., 2014), 6B tokens, 400K vocabulary, uncased, 50 dimensions) on our data using the Mittens10 python package (Dingwall and Potts, 2018) and create 970 static vectors. On top of the generated static vectors, we add dynamic contextualized vectors using pre-training embeddings based on BERT (J. Devlin and Toutanova, 2019), updating weights on our corpus of cases. Because the text of the case cover is presented in a semi-structured format, we consider that it is unnecessary to perform pre-training given the lack of context around the target items.
Architectures We experiment with five different architectures on the case cover and seven different architectures on the main text: five based on convolutional neural networks (CNN) using different word embeddings and two transformer architectures. We train a CNN without added vectors as a baseline. Only the transformer architectures require training on a GPU. We use the spaCy pipelines11 (tokenizer, CNN and transformer) and the HuggingFace datasets12. All CNNs use an Figure 2: Example of an error in tokenization Adam optimizer function. Since the sentencelabeling task is well-suited to the masked language modeling objective, we chose to experiment with roBERTa (Liu et al., 2019) and LegalBERT
(Chalkidis et al., 2020) in order to compare performance between a general content and a legal content model.
We train separately on the case cover, the traditional NER labels (GPE, NORP, ORG, DATE,
PERSON, LAW), and the labels we created from scratch since it was observed that labels trained from scratch benefit from a lower learning rate
(0.0005 versus 0.001 for the traditional labels).
## 6 Results And Evaluation
Our experimental results are presented in table 2 in absolute terms and relative to the baseline in figure 3 below. Our chosen baseline is a CNN
with no additional vectors. We present them per label because of the disparities in the scores. The upper rows contain results on the case cover and the lower rows results on the main text. The evaluation metrics applied serve a duel purpose: for future research, achieving a high F1 score and precisionrecall balance is key, while for our potential legal end users we assume that the recall measure is much more important as it measures how many of the predicted entities are correct.
For the case cover, we obtain satisfactory results on all labels with F1 scores above 90% for three of them and 84.78% for name extraction. Apart from names, CNN architectures perform better, with dates achieving the highest score with randomly initialized embeddings. We explain this with the specific layout of this page (Annex A).
The only gain of using a transformer-based model is to achieve a higher recall compared to the CNNbased architectures.
For the main text, results vary across labels:
we obtain a score above 80% for DATE, GPE,
PERSON, ORG with the best score on roBERTa, but legal-bert-base-uncased scores lower than 60% on EXPLANATION, LAW,
![6_image_0.png](6_image_0.png) LAW_CASE. Overall, when using transformers, we observe a better precision-recall balance.
Results on three labels DETERMINATION,
LAW_REPORT, NORP are unreliable because of the limited sample both for training and testing.
DETERMINATION appears only once per case, and LAW_REPORT appears in a few cases only. Further annotation would require selecting the paragraphs of cases where these items appear to augment the size of the sample. We leave this task to future work.
Explanations for other low scores are partly to be found in the tokenization errors reported during the human-labeling task. Figure 2 shows an example of wrong tokenization on two categories LAW and LAW_CASE for which we believe bad tokenization is the primary explanation for low scores (similarly reported by Sanchez). In the first sentence of the figure, words are not correctly split between "under" and "section" and between the section number and "of". On the lower part of the figure, sentence tokenization does not correctly split the case reference as it is confused by the dot present in the middle. In this example, the case name is displayed as three different sentences, making the labeling task impossible.
The most appropriate pre-training varies across labels: For categories on which CNN performs best such as CREDIBILITY, DOC_EVIDENCE, LAW, we find that fine-tuning static vectors performs better than randomly initialized embeddings or dynamic vectors, which suggests that context was not essential when retrieving spans of text (pre-training relies on tri-grams). This could derive from the methods of annotation that were terminology-based for those labels. While the target items may contain particular vocabulary such as "personal information form" for DOC_EVIDENCE, context is of minimal importance since those phrases would not appear in another context or under another label. On the contrary, context seems much more important for retrieving procedural steps (PROCEDURE), which is the only category where the pre-training layer with contextual embeddings significantly increases the F1 score.
In the majority of categories, we find that the content of the pre-training is important (CLAIMANT_EVENT, CREDIBILITY,
DATE, DOC_EVIDENCE, EXPLANATION, LAW,
PROCEDURE). Results show that domain-specific
![7_image_0.png](7_image_0.png)
PROCEDURE 71.67 69.35 70.49 73.77 72.58 73.17 71.93 66.13 68.91 73.77 72.58 73.17 76.67 74.19 **75.41** 71.01 79.03 74.81 74.58 70.97 72.73
![7_image_2.png](7_image_2.png)
training data has a larger effect than network architecture difference. More precisely, it seems that, on some categories (CREDIBILITY, DOC_EVIDENCE,
LAW, PROCEDURE), pre-training on our own data is more effective than training on a general legal data set as in LegalBERT. This can be explained by the content LegalBERT is pre-trained on, which does not contain any Canadian but only US, European, and UK texts and does not include any refugee cases.
In other categories, roBERTa performs better than LegalBERT and CNNs, suggesting that the size of the pre-trained model is more important than domain matching. While LegalBERT has a size of 12GB, roBERTa is over 160GB and outperforms LegalBERT on traditional NER labels (GPE, ORG, PERSON and also CLAIMANT_INFO, LAW_CASE).
Looking at recall measures only, the superiority of transformer architectures against CNNs is more significant, with only 3 categories (DOC_EVIDENCE,
CLAIMANT_INFO, LAW) achieving their best recall score with a CNN architecture and legal pretraining. Comparing results on recall, we reach the same conclusion as with F1, i.e. that domain matching allows us to achieve higher scores on target categories. Indeed, for seven out of 12 categories analyzed for the main text, the best scores are achieved by two architectures that differ in their pre-training domain. Higher F1 and recall scores, obtained through comparison and observation, enable us to attribute the improved performance primarily to the domain of the training data.
7 Related work
![7_image_1.png](7_image_1.png)
![7_image_3.png](7_image_3.png)
![7_image_4.png](7_image_4.png)
Because of the importance of language and written text, applications of NLP in law hold great promise in supporting legal work, which has been extensively reviewed by Zhong et al.. However, because of the specificity of legal language and the diversity of legal domains, as demonstrated in our work with the results on LegalBERT-based transformer, general approaches aiming at structuring legal text such as *LexNLP* (Bommarito II et al., 2021) or general legal information extraction (Brüninghaus and Ashley, 2001) are unfit for specific domains such as international refugee law and are not able to achieve a high degree of granularity.
Earlier methods of statistical information extraction in law include the use of linear models such as maximum entropy models (Bender et al., 2003; Clark, 2003) and hidden Markov models (Mayfield et al., 2003). However, state-of-the-art results are produced by methods able to capture some context, with an active research community investigating the use of conditional random fields (Benikova et al.,
2015; Faruqui et al., 2010; Finkel et al., 2005) and BiLSTMs (Chiu and Nichols, 2016; Huang et al.,
2015; Lample et al., 2016; Ma and Hovy, 2016; Leitner et al., 2019) for legal applications.
Scope and performance increased with the introduction of new architectures of deep learning using recurrent neural networks (RNN), CNNs, and attention mechanisms as demonstrated by Chalkidis et al., even though we find that transformers do not always perform best on our data. We therefore fo-
![8_image_0.png](8_image_0.png)
![8_image_1.png](8_image_1.png)
cus in this work on statistical NER approaches. Attempts have been made to extract legal elements of the procedure using spaCy CNNs (Pais et al., 2021; Vardhan et al., 2021) with the latter achieving a total F1 score of 59.31% across labels, citations, as well as events, which is below our reported scores.
Similar case matching is a well-known application of NLP methods, especially in common law systems (Trappey et al., 2020) and in domains such as international law. The Competition on Legal Information Extraction/Entailment includes a task of case retrieval, which proves that there is much interest in this area both from researchers and the developers of commercial applications. While research has been conducted to match cases at paragraph level (Tang and Clematide, 2021; Hu et al.,
2022), we find that our approach is more transparent and shifts the decisions regarding which filters to choose to legal practitioners, which we believe is appropriate to enable productive human-machine collaboration in this high-stakes application domain.
## 8 Conclusion And Future Work
Our pipeline identifies and extracts diverse text spans, which may vary in quality across different categories. We acknowledge that certain entities we identify are more valuable than others for legal search purposes. Additionally, due to the complexity of the text, some noise is to be expected.
However, this noise does not hinder the search for relevant items. Users have the flexibility to search and retrieve cases using any combination of our 19 categories of flagged entities. Additionally, work is required for the evaluation of the prototype by legal practitioners beyond traditional machine learning metrics (Barale, 2022). However, we believe the work presented here is an important first step and has the potential to be used for future NLP applications in refugee law. Our approach provides significant contributions with newly collected data, newly created labels for NER, and a structure given to each case based on lawyers' requirements, with nine categories of information being retrieved with an F1 score higher than 80%. Compared to existing case retrieval tools, our pipeline enables endusers to decide what to search for based on defined categories and to answer the question: What are criteria of similarity to my new input case ?
## Limitations
In this section, we enumerate a few limitations of our work:
- We believe that the need to train transformer architectures on GPU is an obstacle to the use of this pipeline, which is destined not to be used in an academic environment but by legal practitioners.
- Because of the specificity of each jurisdiction, generalizing to other countries may not be possible on all labels with the exact same models (for example in extracting the names of tribunals).
- The manual annotation process is a weakness:
while it results in gold-standard annotations, it is very time-consuming. We do acknowledge that the amount of training data presented in this work is low and that collecting more annotations in the future would improve the quality of the results. We think it would be interesting to look at self-supervised methods, weak supervision, and annotation generation. The need for labeled data also prevents easy replication of the pipeline to new data sets, which would also require manually annotating.
- More precisely on the extracted categories, some categories lack precision and would require additional processing steps to achieve satisfactory results. For example, the category PERSON sometimes refers to the claimant or their family, but sometimes refers to the name of the judge.
## References
Claire Barale. 2022. Human-Centered Computing in Legal NLP - An Application to Refugee Status Determination. In Proceedings of the Second Workshop on Bridging Human–Computer Interaction and Natural Language Processing, pages 28–33.
Oliver Bender, Franz Josef Och, and Hermann Ney.
2003. Maximum entropy models for named entity recognition. In *Proceedings of the seventh conference on Natural language learning at HLT-NAACL*
2003, pages 148–151.
Darina Benikova, Seid Muhie, Yimam Prabhakaran, and Santhanam Chris Biemann. 2015. C.: Germaner: Free open german named entity recognition tool. In In: Proc. GSCL-2015. Citeseer.
Michael J Bommarito II, Daniel Martin Katz, and Eric M Detterman. 2021. Lexnlp: Natural language processing and information extraction for legal and regulatory texts. In Research Handbook on Big Data Law, pages 216–227. Edward Elgar Publishing.
Stefanie Brüninghaus and Kevin D Ashley. 2001. Improving the representation of legal case texts with information extraction methods. In Proceedings of the 8th international conference on Artificial Intelligence and Law, pages 42–51.
Hilary Evans Cameron, Avi Goldfarb, and Leah Morris.
2021. Artificial intelligence for a reduction of false denials in refugee claims. *Journal of Refugee Studies*,
page feab054.
Ilias Chalkidis, Emmanouil Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos. 2019. Extreme multi-label legal text classification: A case study in EU legislation. In *Proceedings of the Natural Legal Language Processing Workshop 2019*, pages 78–87, Minneapolis, Minnesota.
Association for Computational Linguistics.
Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos.
2020. LEGAL-BERT: The muppets straight out of law school. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 2898–
2904, Online. Association for Computational Linguistics.
Daniel L. Chen and Jess Eagel. 2017. Can machine learning help predict the outcome of asylum adjudications? In *Proceedings of the 16th edition of* the International Conference on Articial Intelligence and Law, pages 237–240, London United Kingdom.
ACM.
Jason PC Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional lstm-cnns. *Transactions of the association for computational linguistics*,
4:357–370.
Alexander Clark. 2003. Combining distributional and morphological information for part of speech induction. In 10th Conference of the European Chapter of the Association for Computational Linguistics.
Tonya Custis, Frank Schilder, Thomas Vacek, Gayle McElvain, and Hector Martinez Alonso. 2019. Westlaw edge ai features demo: Keycite overruling risk, litigation analytics, and westsearch plus. In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Law, pages 256–257.
Nicholas Dingwall and Christopher Potts. 2018. Mittens: an extension of GloVe for learning domainspecialized representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 212–217, New Orleans, Louisiana. Association for Computational Linguistics.
Matt Dunn, Levent Sagun, Hale ¸Sirin, and Daniel Chen.
2017. Early predictability of asylum court decisions. In Proceedings of the 16th edition of the International Conference on Articial Intelligence and Law, pages 233–236, London United Kingdom. ACM.
Manaal Faruqui, Sebastian Padó, and Maschinelle Sprachverarbeitung. 2010. Training and evaluating a german named entity recognizer with semantic generalization. In *KONVENS*, pages 129–133.
Jenny Rose Finkel, Trond Grenager, and Christopher D
Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. In *Proceedings of the 43rd annual meeting of the association for computational linguistics*
(ACL'05), pages 363–370.
Weifeng Hu, Siwen Zhao, Qiang Zhao, Hao Sun, Xifeng Hu, Rundong Guo, Yujun Li, Yan Cui, and Long Ma.
2022. Bert_lf: A similar case retrieval method based on legal facts. Wireless Communications and Mobile Computing, 2022.
Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991.
Kenton Lee J. Devlin, Ming-Wei Chang and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, pages 4171–4186.
Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016.
Neural architectures for named entity recognition.
In *Proceedings of the 2016 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270, San Diego, California. Association for Computational Linguistics.
Elena Leitner, Georg Rehm, and Julian MorenoSchneider. 2019. Fine-grained named entity recognition in legal documents. In *Semantic Systems. The* Power of AI and Knowledge Graphs, pages 272–287, Cham. Springer International Publishing.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF.
In *Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1:*
Long Papers), pages 1064–1074, Berlin, Germany.
Association for Computational Linguistics.
James Mayfield, Paul McNamee, and Christine Piatko.
2003. Named entity recognition using hundreds of thousands of features. In Proceedings of the seventh conference on Natural language learning at HLTNAACL 2003, pages 184–187.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. *arXiv preprint* arXiv:1301.3781.
Vasile Pais, Maria Mitrofan, Carol Luca Gasan, Vlad Coneschi, and Alexandru Ianov. 2021. Named entity recognition in the Romanian legal domain. In Proceedings of the Natural Legal Language Processing Workshop 2021, pages 9–18, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Jeffrey Pennington, Richard Socher, and Christopher D
Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing
(EMNLP), pages 1532–1543.
Joshua Poje. 2014. Legal research. *American Bar Association Techreport*, 2014.
Sean Rehaag. 2007. Troubling patterns in canadian refugee adjudication. *Ottawa L. Rev.*, 39:335.
Sean Rehaag. 2019. Judicial review of refugee determinations (ii): Revisiting the luck of the draw. Queen's LJ, 45:1.
George Sanchez. 2019. Sentence boundary detection in legal text. In *Proceedings of the Natural Legal* Language Processing Workshop 2019, pages 31–38, Minneapolis, Minnesota. Association for Computational Linguistics.
Li Tang and Simon Clematide. 2021. Searching for legal documents at paragraph level: Automating label generation and use of an extended attention mask for boosting neural models of semantic similarity.
In *Proceedings of the Natural Legal Language Processing Workshop 2021*, pages 114–122, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Charles V. Trappey, Amy J.C. Trappey, and Bo-Hung Liu. 2020. Identify trademark legal case precedents -
Using machine learning to enable semantic analysis of judgments. *World Patent Information*, 62:101980.
Harsh Vardhan, Nitish Surana, and B. K. Tripathy. 2021.
Named-entity recognition for legal documents. In Advanced Machine Learning Technologies and Applications, pages 469–479, Singapore. Springer Singapore.
Haoxi Zhong, Chaojun Xiao, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, and Maosong Sun. 2020. How does NLP benefit legal system: A summary of legal artificial intelligence. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5218–5230, Online. Association for Computational Linguistics.
## A Example Of A **Case Cover**
| RPD File No. / N° de dossier de la SPR : TA8-10977 TA8-10773 TA8-10812 TA8-10930 TA8-10931 Private Proceeding / Huis clos | 2010 CanLII 96289 (CA IRB) | |
|--------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------|------------------------------------------------------|
| Reasons and Decision − Motifs et décision | | |
| Claimant(s) | XXXXX XXXXX XXXXX XXXXX | |
| XXXXX XXXXX XXXXX XXXXX XXXXX XXXXX XXXXX XXXXX XXXXX XXXXX (a.k.a. XXXXX XXXXX XXXXX XXXXX) XXXXX XXXXX XXXXX XXXXX XXXXX XXXXX XXXXX XXXXX XXXXX | Demandeur(e)(s) d'asile | |
| Date(s) of Hearing | FEBRUARY 17, 2010 | Date(s) de l'audience |
| Place of Hearing | TORONTO, ONTARIO | Lieu de l'audience |
| Date of Decision | FEBRUARY 24, 2010 | Date de la décision |
| Panel | W. LIM | Tribunal |
| Counsel for the Claimant(s) | STEPHEN J. SCHMIDT | Conseil(s) du / de la / des demandeur(e)(s) d'asile |
| Tribunal Officer | K. GENJAGA | Agent(e) de tribunal |
| Designated Representative(s) | N/A | Représentant(e)(s) désigné(e)(s) |
| Counsel for the Minister | N/A | Conseil du ministre |
| RPD.15.7 (February 12, 2009) Disponible en français | | |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Not numbered, after the conclusion: 'Limitations' section
✓ A2. Did you discuss any potential risks of your work?
Section 8 and 'Limitations'
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4 And 5
✓ B1. Did you cite the creators of artifacts you used?
Section 4 and 5 (and footnotes)
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
The tools and artifacts used are publicly available. Prodigy is the only tool used that is note freely available and we were granted an academic research license.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
The retrieved documents are to be used for research purposes, which is the use we made of the dataset. The created artifacts (code and NER model) do not give access directly to the retrieve documents.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
This work does not directly give access to the data but to trained models of named-entity recognition.
The data is provided by the Canadian Legal Information Institute and is publicly available online.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 5 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
## C ✓ **Did You Run Computational Experiments?** Section 6
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 5
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 5
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 6
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4 and 5
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Section 5.1
✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
After agreeing on guidelines with refugee lawyers, the first author was the only annotator involved.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. same as above.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. same as above.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. same as above.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. same as above. |
li-etal-2023-recurrent | Recurrent Attention Networks for Long-text Modeling | https://aclanthology.org/2023.findings-acl.188 | Self-attention-based models have achieved remarkable progress in short-text mining. However, the quadratic computational complexities restrict their application in long text processing. Prior works have adopted the chunking strategy to divide long documents into chunks and stack a self-attention backbone with the recurrent structure to extract semantic representation. Such an approach disables parallelization of the attention mechanism, significantly increasing the training cost and raising hardware requirements. Revisiting the self-attention mechanism and the recurrent structure, this paper proposes a novel long-document encoding model, Recurrent Attention Network (RAN), to enable the recurrent operation of self-attention. Combining the advantages from both sides, the well-designed RAN is capable of extracting global semantics in both token-level and document-level representations, making it inherently compatible with both sequential and classification tasks, respectively. Furthermore, RAN is computationally scalable as it supports parallelization on long document processing. Extensive experiments demonstrate the long-text encoding ability of the proposed RAN model on both classification and sequential tasks, showing its potential for a wide range of applications. | # Recurrent Attention Networks For Long-Text Modeling
Xianming Li1†**, Zongxi Li**2†∗
, Xiaotian Luo1, Haoran Xie3**, Xing Lee**1, Yingbin Zhao1, Fu Lee Wang2**, Qing Li**4 1 Ant Group, Shanghai, China 2 School of Science and Technology, Hong Kong Metropolitan University, Hong Kong SAR
3 Department of Computing and Decision Sciences, Lingnan University, Hong Kong SAR
4 Department of Computing, Hong Kong Polytechnic University, Hong Kong SAR
{niming.lxm,lxt267638,lx250976,zyb166123}@antgroup.com
{zoli, pwang}@hkmu.edu.hk, [email protected], [email protected]
## Abstract
Self-attention-based models have achieved remarkable progress in short-text mining. However, the quadratic computational complexities restrict their application in long text processing.
Prior works have adopted the chunking strategy to divide long documents into chunks and stack a self-attention backbone with the recurrent structure to extract semantic representation.
Such an approach disables parallelization of the attention mechanism, significantly increasing the training cost and raising hardware requirements. Revisiting the self-attention mechanism and the recurrent structure, this paper proposes a novel long-document encoding model, Recurrent Attention Network (RAN), to enable the recurrent operation of self-attention. Combining the advantages from both sides, the welldesigned RAN is capable of extracting global semantics in both token-level and documentlevel representations, making it inherently compatible with both sequential and classification tasks, respectively. Furthermore, RAN is computationally scalable as it supports parallelization on long document processing. Extensive experiments demonstrate the long-text encoding ability of the proposed RAN model on both classification and sequential tasks, showing its potential for a wide range of applications.
## 1 Introduction
Recently, self-attention-based neural networks, such as Transformer (Vaswani et al., 2017), GPT
(Radford et al., 2018, 2019; Brown et al., 2020),
and BERT family (Devlin et al., 2019; Liu et al.,
2019; Lan et al., 2020), have demonstrated superior text encoding ability in many natural language processing (NLP) tasks with the help of large-scale pretraining. These models have set state-of-the-art benchmarks in classification tasks like text categorization (Li et al., 2021a) and sentiment analysis
(Naseem et al., 2020; Li et al., 2021c, 2023), and sequential tasks like question answering (Lee et al.,
2019; Karpukhin et al., 2020) and information extraction (Li et al., 2021b; Wu et al., 2022). The time and space complexities of self-attention computation are O(n 2) with respect to the sequence length, making it computationally expensive to encode long texts. Therefore, BERT models adopt an absolute positional encoding strategy to manage computational overhead. However, such a setting makes the BERT models unable to handle texts longer than 512 tokens, restricting their application in realistic scenarios like processing user comments, news articles, scientific reports, and legal documents with arbitrary lengths.
Current works focus on two solutions to enable self-attention-based models for handling longer texts. The first solution reduces the computing complexity of self-attention from quadratic to linear by approximating its softmax operation (Beltagy et al., 2020; Choromanski et al., 2021; Hua et al., 2022). These models can handle relatively long texts within the hardware capacity but also suffer from a performance drop (Schlag et al., 2021; Hutchins et al., 2022). Another solution is to divide the long document into chunks shorter than 512 tokens so that pretrained BERT models can be applied (Pappagari et al., 2019; Hutchins et al.,
2022). However, as the chunks are individually encoded, the resulted representations do not contain the crucial contextual information for sequential tasks. While a special recurrent mechanism can handle sequential tasks (Hutchins et al., 2022), it cannot produce a document-level representation for classification, limiting their generality as none works for both classification and sequential tasks.
Additionally, introducing recurrent modules disables the parallel computing feature, leading to unscalable implementation.
To address the aforementioned issues, this paper proposes the Recurrent Attention Network (RAN)1, 1The code is available at https://github.com/4AI/RAN.
∗Corresponding author; † Equal contribution.
![1_image_0.png](1_image_0.png)
a novel model architecture supporting recurrent self-attention operation over long sequences, enabling global dependency extraction and long-term memory. RAN iterates through the sequence by non-overlapping windows. Unlike token-level recurrent architectures such as LSTM (Hochreiter and Schmidhuber, 1997) and GRU (Chung et al.,
2014), RAN applies positional multi-head selfattention (pMHSA) on a window area to extract local dependency. To propagate the information forward, the RAN model extracts the global perception cell (GPC) vector from the self-attention representation of the current window. The GPC
vector is then concatenated with tokens in the next window as the input of the self-attention layer. The new GPC vector will be passed to the subsequent windows with residual connection to alleviate the gradient vanishing (He et al., 2016) and updated in the same manner. Figure 1 depicts the difference between the recurrent neural network (RNN) and our proposed RAN.
The function of the GPC vector is twofold. First, like the [CLS] token in BERT, the GPC vector is a window-level contextual representation. But unlike the [CLS] token, the GPC vector is only applied to the self-attention layer, and no special token is inserted during text preprocessing. Second, the GPC vector, resembling the state cell in a recurrent architecture, maintains a long-distance memory over the sequence. For each window, the attended GPC vector encodes an aggregated representation of all the previous windows, which enables the window-level self-attention to perceive global semantics. With the help of a well-designed memory review mechanism, the GPC vector from the last window can be used as a document-level representation and serve the classification tasks.
Meanwhile, the memory review mechanism enhances the token representations of RAN in the sequence, encoding both contextual and global information, which can be leveraged for sequential tasks such as language modeling (LM) and named entity recognition (NER).
We pretrain the RAN model using a masked language modeling (MLM) objective from scratch, which outperforms other pretrained baselines in long document classification. The RAN framework also supports auto-regressive LM and achieves the lowest perplexity score compared with state-of-theart language models on the WikiText-103 dataset.
Furthermore, we apply RAN to different downstream tasks via finetuning and observe consistent improvements compared to baseline models.
RAN solely relies on self-attention, and no LSTM-style gate is involved when propagating information via GPC vectors. Therefore, RAN is computationally efficient as it supports parallelized GPU computing. Although the memory complexity is still quadratic, it is regarding the window size W rather than the whole text length L, where W ≪ L. Nevertheless, the window size can be adjusted based on hardware availability to achieve a relatively larger batch size for better training.
In summary, our contribution is to devise the RAN model for long document processing. RAN
allows for parallelization on GPU and provides the interfaces for serving both classification and sequential tasks. With pretraining, RAN can outperform the BERT-based models in various tasks.
## 2 Related Work
This section reviews the relevant works focusing on sequence modeling in NLP, especially long document processing. RNNs are widely used for sequential modeling by recursively updating a state cell to maintain a long-distance memory. Traditional recurrent networks, such as LSTM (Hochreiter and Schmidhuber, 1997) and GRU (Chung et al., 2014),
use the fully-connected layer as the basic encoding unit and apply the gate mechanism to update state memory. The recurrent operation is conducted on the token level, which is inefficient as such a framework cannot compute parallelly on GPU. Besides, it might suffer from gradient vanishing for long sequences during the backpropagation phase
(Hutchins et al., 2022).
Self-attention models are powerful in global representation learning. However, applying selfattention in long document processing is intractable due to the quadratic time and memory complexities. To address this issue, some works (Beltagy et al., 2020; Choromanski et al., 2021; Hua et al.,
2022) attempt to reduce the computing complexity of self-attention from quadratic to approximately linear complexity. Beltagy et al. (2020) propose a drop-in replacement of the softmax operation in self-attention with a sparse attention mechanism.
Similarly, Choromanski et al. (2021) rely on prior knowledge like sparsity and low-rankness to efficiently estimate the full-rank attention. However, these approaches face a trade-off between efficiency and accuracy, as approximations may lead to a performance drop (Schlag et al., 2021; Hutchins et al., 2022).
Other works leverage the power of full-rank selfattention as backbones, such as pretrained BERT
and RoBERTa. These works cope with the tokenlength limitation with different strategies. Ding et al. (2020) propose CogLTX framework to generate a brief summary of the document. The short summary is used for the classification task employing BERT. However, it is inevitable to lose information in the length compression. Pappagari et al.
(2019) segment the long text into smaller chunks so that BERT can be then used. A recurrent layer is employed to obtain the document-level representation upon chunk-level representations. These models can be applied for the classification task but cannot handle sequential tasks because of losing crucial contextual and sequential information.
Hutchins et al. (2022) adopt the chunking strategy and devise a specifically-designed gate mechanism to obtain token-level representations for sequential tasks. Similarly, Didolkar et al. (2022) propose a Transformer-based temporal latent bottleneck for image classification, reinforcement learning, and text classification, in which temporal states are updated using a recurrent function across chunks. In each Transformer block, temporal states update the chunk-level representation by cross-attention layers interleaved with self-attention layers. In general, these models with BERT backbones cannot simultaneously handle classification and sequential tasks for long documents. Meanwhile, the RNN-style gate architecture does not support parallel computing, so the computing efficiency is also impaired.
Our proposed RAN achieves recurrent operation of the self-attention model and hence supports parallelization. Moreover, like the traditional RNN architecture, RAN can produce both token-level and document-level representations, which can be leveraged for both sequential and classification tasks.
## 3 Methodology
![2_image_0.png](2_image_0.png)
This section introduces the proposed RAN
framework in terms of its components. Figure 2 depicts the structure of the basic RAN module.
In RAN, the primary encoder is the pMHSA, encoding the GPC vector and the current input with the rotary positional information carried (Su et al.,
2021). The GPC vector is employed to propagate information through the sequence.
## 3.1 Input Layer
We first employ the padding operation for the input documents to keep a uniform length L. Then we map each word into a D-dimensional continuous space and obtain the word embedding xi ∈ R
D.
The word vectors are concatenated to form the model input: X = [x1, x2*, . . . ,* xL] ∈ R
L×D. To feed the text into the RAN, we chunk the input document into m = ceil( LW ) windows, where W
is the window size. We use Xi ∈ RW×D to denote the i-th window input. In RAN, the GPC vector G0 ∈ R
D is initialized to 0 by default, following the common operation in RNN. We parameterize the GPC vector with the layer normalization (Ba et al., 2016) as follows:
$$\vec{\mathcal{H}}_{0})\in\mathbb{R}^{D}.$$
$\eqref{eq:walpha}$
## G0 = Layernorm(Wgg0) ∈ R D. (1) 3.2 Positional Multi-Head Self-Attention
As the positional space of long documents is prohibitively large, it is not feasible to use absolute positional embedding following Transformer families
(Vaswani et al., 2017) in long document processing.
Hence, we follow Su et al. (2021); Chowdhery et al.
(2022); Black et al. (2022) to incorporate the local positional information of the current window and leverage rotary position information as follows:
![3_image_0.png](3_image_0.png)
$$\mathrm{pMHSA}(\mathbf{X}_{i})=\mathbf{W}[\mathrm{Att}_{1}(\mathbf{X}_{i});...;\mathrm{Att}_{h}(\mathbf{X}_{i})]+b,\tag{2}$$
and
Attj (Xi) = SoftMax( RP(Qj ) · RP(KT j ) √d k + M)Vj Qj = Wq jXi + b q j Kj = Wk j Xi + b k j Vj = Wv j Xi + b v j, (3)
where Attj (·) is the j-th head of pMHSA, h denotes the head size, [; ] means the concatenation operation, M is the attention mask to adapt to different sequential tasks, and RP(·) stands for the rotary position function (Su et al., 2021).
## 3.3 Encoding And Updating Layer
To encode the i-th window, we concatenate the GPC vector from the previous window Gi−1 and the current window input Xi ∈ RW×D to form the model input Xin i = [Gi−1; Xi] ∈ R
(1+W)×D. A
layer normalization layer is applied to normalize the input.
We then apply the pMHSA to encode the concatenated input to obtain the outputs of the current window:
Oi = pMHSA(Xin i). (4)
After encoding, we extract the updated GPC vector G′i and the output corresponding to the tokens in the window:
window: $$\mathbf{G}_{i}^{\prime}=\text{SN}(\mathbf{O}_{i}^{[1:2]})\in\mathbb{R}^{D}\tag{5}$$ $$\mathbf{O}_{i}^{w}=\text{SN}(\mathbf{O}_{i}^{[2:1+W]})\in\mathbb{R}^{W\times D},$$
where [*start*:end]is the tensor slice operation, and SN(X) = X−X*mean* σstands for the standard normalization. To alleviate the gradient vanishing issue in modeling long sequences, we employ residual connection to connect the current GPC vector with the previous one, then pass it to a layer normalization layer to normalize the updated GPC vector,
$$\mathbf{G}_{i}=\mathrm{LayerNorm}(\mathbf{G}_{i}^{\prime}+\mathbf{G}_{i-1}).$$
$\mathbf{a}\cdot\mathbf{a}=\mathbf{a}\cdot\mathbf{a}$.
The updated GPC vector Gi ∈ R
D will be propagated to the next window.
## 3.4 Memory Review And Output Layer
After encoding all windows, we can obtain the sequence output by concatenating all window outputs, as follows:
$$\mathbf{O}^{w}=[\mathbf{O}_{1}^{w};\mathbf{O}_{2}^{w};...;\mathbf{O}_{m}^{w}]\in\mathbb{R}^{L\times D},$$
$$\left(T\right)$$
L×D, (7)
where m is the number of windows. Ow has the same shape as the input X. To prevent history forgetting in handling long sequences, this paper proposes a novel memory review mechanism. Specifically, we first concatenate all updated GPC vectors to produce the history states vector:
$$\mathbf{S}=[\mathbf{G}_{1};\mathbf{G}_{2};...;\mathbf{G}_{m}]\in\mathbb{R}^{m\times D}.$$
m×D. (8)
We compute the cross attention of the concatenated output and the historical memory states to obtain the final output:
$$\begin{array}{l}{{\mathbf{O}=\mathrm{SoftMax}(\frac{\mathbf{Q}\mathbf{K}^{T}}{\sqrt{d^{k}}})\mathbf{V}}}\\ {{\mathbf{Q}=\mathbf{W}^{q}\mathbf{O}^{w}+b^{q}}}\\ {{\mathbf{K}=\mathbf{W}^{k}\mathbf{S}+b^{k}}}\\ {{\mathbf{V}=\mathbf{W}^{v}\mathbf{S}+b^{v}.}}\end{array}\tag{9}$$
This procedure mimics the human behavior of reviewing key points after reading an article, the way that humans naturally consolidate information and reinforce memory.
The sequence output O ≡ Oseq ∈ R
L×D can be used for sequential tasks like NER. Although the GPC vector of the last window, Gm, can serve as the document representation, it may lose crucial semantics and long-term memory during the propagation. Therefore, we also add the memory review mechanism to RAN for classification tasks by generating Oclf :
$${\bf O}_{d f}={\bf W}^{g}{\bf G}_{m}+{\bf W}^{o}{\bf O}^{p}+b^{o},$$
o, (10)
where Opis the pooling of the output O over time sequence. Our empirical results show that the max pooling works better than the average pooling in classification tasks. Therefore, we adopt max pooling to obtain Op:
$$\mathbf{O}^{p}=\mathrm{MaxPooling}(\mathbf{O}).$$
Op = MaxPooling(O). (11)
Figure 3 provides a visual illustration of the implementations for both classification and sequential tasks. It is noticeable that the model parameters of RAN are shared across all windows, allowing for efficient computation and reduced memory usage. Particularly, RAN supports multiple sequential tasks with different attention masks. For instance, it employs a causal attention mask (Vaswani et al., 2017) for LM tasks and a prefix causal attention mask (Dong et al., 2019) for the seq2seq tasks to prevent forward information exposure.
## 4 Experiment 4.1 Datasets And Evaluation Metrics
To comprehensively evaluate the model performance, we conduct experiments on three major tasks: text classification (TC), NER, and LM.
For the *TC task*, we attempt to test the model performance on datasets with various document lengths. Specifically, we extend the benchmarks from Park et al. (2022) by adding the long-text dataset Arxiv and the short-text dataset AGNews.
The extended benchmarks include (1) **AGNews**2,
(2) **20NewsGroups** (Lang, 1995), and (3) **Arxiv**
(He et al., 2019) for multi-class classification; (4)
Book Summary (Park et al., 2022; Bamman and Smith, 2013) (*abbr.* **B.S.**) and (5) **EURLEX-57K**
2http://groups.di.unipi.it/~gulli/AG_corpus_o f_news_articles
(Chalkidis et al., 2019) (*abbr.* **EUR.57K**) for multilabel classification; and **Hyperpartisan** (Kiesel et al., 2019) (*abbr.* **Hyper.**) for binary classification. Figure 4 depicts the text length distribution and the long-text ratio of the benchmark datasets.
For a fair comparison, following Park et al. (2022),
we report micro-F1 for multi-label classification and accuracy for binary and multi-class classification.
For the *LM task*, we adopt the commonly-used dataset **WikiText-103**3(Merity et al., 2017) and report the perplexity score following the baselines.
For the *NER task*, we experiment on two widelyadopted English datasets: **OntoNotesV5.0**4(*abbr.*
OntoV5, average length is 77.5) and **CoNLL2003**
(Tjong Kim Sang and De Meulder, 2003) (average length is 63.4). Noted that both datasets consist of short texts with an average length shorter than 100, as there are no available NER datasets of long documents. Accordingly, we adopt a small window size for the NER task to test the effectiveness of the recurrent architecture. We use *conlleval*5to measure the model performance and report the F1 score following the baselines.
## 4.2 Implementation Details
The primary experiments in Section 4.3 were conducted using the NVIDIA A100 GPU, while the remaining experiments were conducted using the NVIDIA Titan X GPU (12G memory). The code was implemented using TensorFlow and Keras. By default, we used two layers of RAN, with a window size of 256 for the TC and LM tasks and 64 for the NER task. The head number of pMHSA is set to 12, and the head size is 768. We trained the models for different tasks using the Adam optimizer (Kingma and Ba, 2015) by optimizing the corresponding objective function. For pretrained and non-pretrained RAN models, we set the learning rate to 2e−5 and 3e − 4, respectively.
## 4.3 Main Results 4.3.1 Long Text Classification
![5_image_0.png](5_image_0.png)
| Model | AGNews | 20NG | B.S. | Hyper. | EUR.57K | Arxiv | Avg. |
|--------------|----------|-----------|--------|-----------|-----------|---------|--------|
| Acc. | Acc. | F1(micro) | Acc. | F1(micro) | Acc. | | |
| BiLSTM+GloVe | 93.34 | 77.97 | 49.90 | 90.77 | 65.00 | 81.28 | 76.38 |
| BERT | 93.80 | 84.79† | 58.18† | 92.00† | 73.09† | 82.00 | 80.64 |
| Longformer | 93.22 | 83.39† | 56.53† | 95.69† | 54.53† | 84.24 | 77.93 |
| ToBERT | 93.80 | 85.52† | 58.16† | 89.54† | 67.57† | 83.75 | 79.72 |
| CogLTX | 93.68 | 84.63† | 58.27† | 94.77† | 70.13† | 83.56 | 80.84 |
| RAN+Random | 91.70 | 78.88 | 50.52 | 93.85 | 66.59 | 80.08 | 76.94 |
| RAN+GloVe | 93.46 | 79.16 | 51.58 | 95.38 | 67.21 | 83.36 | 78.36 |
| RAN+Pretrain | 93.83 | 85.41 | 58.43 | 96.92 | 73.94 | 85.92 | 82.41 |
models such as **BERT** (Devlin et al., 2019), **Longformer** (Beltagy et al., 2020), **ToBERT** (Pappagari et al., 2019), and **CogLTX** (Ding et al., 2020). For a comprehensive review, we adopt different initialization methods for RAN parameters. **RAN+Random**
indicates the weights of RAN are randomly initialized. **RAN+GloVe** stands for using the GloVe embedding (Pennington et al., 2014) as word representation. **RAN+Pretrain** is the RAN pretrained on the MLM task, following settings in Devlin et al.
(2019); Liu et al. (2019). We pretrained RAN on the BookCorpus (Zhu et al., 2015) (5GB) and C4
(Raffel et al., 2020) (RealNews-like subset, 15GB).
We present the results of long document benchmarks in Table 1. In general, the pretrained RAN
achieves the five best results among the six benchmarks except for the 20NG dataset and outperforms all the baselines regarding the average score. Note that the pretrained RAN has only 96M parameters which are fewer than other pretrained baselines, suggesting that RAN is more efficient and scalable than the baselines. Particularly, the pretrained RAN
achieves a 2.2% improvement compared with ToBERT on the super-long text dataset Arxiv, demonstrating the superiority of RAN in handling long documents.
It is worth noticing that the average performance of RAN is higher than that of the chunking-based ToBERT and the document summarization model CogLTX. These two models drop essential information in the chunking and summarizing processes, while RAN can preserve the sequence information with the help of the well-designed recurrent and memory review mechanisms. Moreover, the pretrained RAN achieves the best result on the shorttext dataset AGNews, indicating that RAN also performs well in short-text tasks.
Remarkably, even without pretraining, RAN can still yield competitive performance. For example, the randomly initialized RAN achieved better results than BiLSTM with pretrained GloVe word embedding. RAN with GloVe embedding outperforms pretrained BERT and ToBERT on the accuracy of the Hyper dataset and Longformer on average score.
Such observations illustrate that RAN is effective for text encoding and flexible in adopting different initialization methods for various scenarios. It also suggests that the recurrent attention-based architecture of RAN is more powerful than the recurrent architecture of LSTM in modeling texts.
| Model | #Params | PPL↓ |
|-------------------------------------|-----------|--------|
| LSTM (Grave et al., 2017) | 150M | 48.70 |
| TransformerXL (Dai et al., 2019) | 150M | 24.00 |
| B.R. Trans. (Hutchins et al., 2022) | 150M | 39.48† |
| Transformer (Zhong et al., 2022) | 150M | 29.14 |
| Com. Trans.(Zhong et al., 2022) | 150M | 24.56 |
| ∞-former (Zhong et al., 2022) | 150M | 24.22 |
| TRIMELM (Zhong et al., 2022) | 150M | 25.60 |
| RAN | 150M | 22.76 |
## 4.3.2 Language Modeling
The self-attention-based RAN can be employed for LM. Extensive experiments are conducted to evaluate RAN on language modeling. To avoid information exposure, we apply the causal attention mask to ensure the prediction for i-th position only depends on the known outputs before i, following Vaswani et al. (2017). We compare RAN with widely-adopted baselines that are shown in Table 2. The compared models have the same vocabulary and parameter sizes, and the parameters are randomly initialized. The experiment settings follow Zhong et al. (2022). Observing the results, we notice that RAN achieves the state-of-the-art result on the WikiText-103 dataset with 22.76 perplexity.
It suggests that RAN is efficient in handling the sequence generation task.
Table 2: Results of the LM task on the WikiText-103 dataset. Note that the parameter size for language modeling is much larger than that for classification tasks
(96M) as we used the same vocabulary for all baselines for a fair comparison. ↓ means the result is the lower the better. † denotes that the result is from our implementation of the official code.
4.3.3 Named Entity Recognition The NER task is a common information extraction task, and we conduct experiments on the NER
task to test RAN for information extraction. As the available NER datasets contain mostly short texts, we set the window size to 64 to test the effectiveness of RAN's recurrent structure. We compare with the following widely-used baselines: **IDCNN** (Strubell et al., 2017), **LSTM** (Langlais et al.,
2018), **LSTM-CNN** (Li et al., 2020), **ELMo** (Peters et al., 2018), and **BERT** (Devlin et al., 2019).
As shown in Table 3, we notice that RAN consistently outperforms LSTM-based baselines. Specifically, RAN without pretraining achieves 0.5% and 0.3% improvement compared with BERT on both datasets, indicating that the well-designed GPC vector is effective in handling information extraction of long sequences. Both the NER and LM tasks are sequential tasks, and the results demonstrate that RAN is effective in sequence modeling.
Table 3: F1 score of the NER task. The results of the baselines are retrieved from the original paper. † denotes results from our implementation by the official code.
| Model | OntoV5 | CoNLL2003 |
|--------------------------------|----------|-------------|
| ID-CNN (Strubell et al., 2017) | 86.84 | 90.54 |
| LSTM (Langlais et al., 2018) | 87.95 | 91.73 |
| LSTM-CNN (Li et al., 2020) | 88.40 | − |
| ELMo (Peters et al., 2018) | − | 92.22 |
| BERT (Devlin et al., 2019) | 88.88† | 92.40 |
| RAN (W = 64) | 89.38 | 92.68 |
## 4.4 Ablation Study
We conducted an ablation study to investigate the significance of each component of our proposed model on the Arxiv dataset. The results are presented in Table 4.
In the first ablation model, we substituted the max pooling layer depicted in Eq. 11 with an average pooling layer, resulting in a 1.12% drop in Accuracy. Moreover, our findings show that the residual connection between two windows is essential to alleviate gradient vanishing. When we removed it from RAN, the performance drops approximately 1.6%. We also ablated the rotary positional encoding from RAN, which leads to a 1.23%
performance drop.
When the memory review mechanism of RAN
was removed in the last ablation model, the result shows the most significant drop compared with other ablation models. RAN without the memory review mechanism suffers a 2.5% performance drop. Such an observation indicates that mitigating information forgetting in processing long documents is crucial, and our proposed memory review mechanism is effective in preserving long-distance memory over the sequence.
In general, the ablation study demonstrates the significance of each component in our proposed RAN model and highlights the importance of the memory review mechanism in processing long documents. Particularly, our observation accentuates the importance of maintaining long-distance memory in long document processing.
| Model | Accuracy (%) | ∆ (%) |
|-------------------------|----------------|---------|
| RAN+GloVe | 83.36 | |
| w/ avg pool | 82.24 | −1.12 |
| w/o residual connection | 81.76 | −1.60 |
| w/o memory review | 80.85 | −2.51 |
| w/o rotary position | 82.13 | −1.23 |
Table 4: Results of ablation models on the Arxiv dataset.
## 4.5 Discussion 4.5.1 Scalability Analysis Of Ran
This section discusses the scalability of RAN in actual implementation. The window size W determines the number of tokens that are encoded by the attention block. In theory, RAN with a larger window size can yield better performance, as we have fewer windows and less information loss when iterating over the windows. However, given the quadratic memory complexity, the hardware capacity limits the maximum batch size that can be used in training and hence imposes a ceiling to the performance improvement. We have conducted additional experiments with RAN+Glove on the Arxiv dataset. Figure 5 depicts the accuracy of the test set and the training time per epoch of RAN
with different window sizes. Results of each configuration are obtained with the maximum batch size runnable on the GPU. As expected, when window size increases, the accuracy also gains continuous improvements, albeit minor. The accuracy curve begins to flatten out when the size exceeds 256, partially due to the decreasing maximum batch size.
Such an observation indicates the performance is approaching the bottleneck caused by the hardware capacities.
On the other hand, the V-shape curve observed in the training time is an intriguing sign, and we
![7_image_0.png](7_image_0.png)
attribute it to the different time complexities of recurrent and self-attention operations of RAN. Although computing self-attention is of quadratic time, it is significantly accelerated owing to the tensor computation on GPU. In contrast, the recurrent component involves tensor manipulations, such as splitting and concatenation, and thus takes more time. Therefore, a smaller window size will lead to a longer training time as more recurrent operations need to perform. When the window size is large enough, the quadratic time of selfattention becomes significant and dominates the overall spent time. Hence, the green curve bounces back when the window size is 1024. Moreover, when the window size is even larger, such as 2048, the model becomes too large to be loaded on the GPU, and training becomes infeasible.
Furthermore, we compare the training time of pretrained RAN with other pretrained and nonpretrained baselines on the Arxiv dataset. The results in Table 5 indicate that the proposed RAN is highly scalable and efficient. Notably, RAN has six times the parameter size of LSTM but has a shorter training time, which indicates our devised recurrent attention structure is more efficient than the LSTM-style gate mechanism.
## 4.5.2 The Number Of Ran Layers
Similar to RNNs, RAN layers can be stacked to build a deep architecture. We adopt a serial manner to pass the previous layer's GPC output as the input to the subsequent hidden RAN layers. The GPC output at the last RAN layer will be con-
| Models | #Params | Time (s/epoch) |
|--------------------------|-----------|------------------|
| LSTM (w/o pretrain) | 15M | 7, 947 |
| LongFormer (w/ pretrain) | 148M | 26, 536 |
| ToBERT (w/ pretrain) | 110M | 6, 568 |
| CogLTX (w/ pretrain) | 110M | 21, 032 |
| RAN (w/ pretrain) | 96M | 5, 393 |
catenated with the following window input. Intuitively, with more RAN layers, the model will contain more parameters and is promising to produce higher performance. We compare the RANs with different depths and list the results in Table 6.
As expected, the accuracy improves as the number of layers increases. However, the average training time will significantly increase due to the serial connection between layers, and the improvements become marginal. Therefore, to balance the performance and the time consumption, we adopt the two-layer RAN in this paper by default. This also implies that the results presented in this paper could be further enhanced by increasing the depth of the RAN.
| Layers | #Params | Time (s/epoch) | Accuracy (%) |
|----------|-----------|------------------|----------------|
| 1 | 15M | 678 | 83.14 |
| 2 | 17M | 1, 311 | 83.36 |
| 3 | 19M | 2, 186 | 83.76 |
| 4 | 21M | 2, 967 | 83.93 |
## 5 Conclusion & Future Work
This paper has presented a novel RAN architecture for long-text modeling that combines the advantages of both recurrent and self-attention networks.
The use of a positional multi-head attention mechanism and GPC vector enhances the model's performance by capturing both local and global dependencies in the input sequence. Our ablation study also highlights the critical role of residual connection and memory review mechanisms in preserving long-distance memory.
With the well-designed recurrent self-attention mechanism, RAN's training can be accelerated by parallel computing on a GPU, making it highly efficient and scalable. We have conducted extensive experiments on TC, NER, and LM tasks. The extensive experiments demonstrate the effectiveness of the proposed RAN model on both classification and sequential tasks.
The flexibility and scalability of our proposed RAN make it a promising choice for future research, with broad potential applications in translation, summarization, conversation generation, and large language models. Additionally, we plan to extend the RAN to tasks involving multi-modality input and output like audio and video, to exploit RAN's long sequence handling capacity in different fields.
## 6 Limitations
The proposed model, Recurrent Attention Network
(RAN), effectively models long sequential data by propagating information window-by-window through the sequence via its well-designed recurrent architecture. However, the multi-head selfattention applied to each window is still limited to local attention, which prevents it from providing a global dependency relationship for the entire sequence. This limitation restricts RAN's application in scenarios where a global dependency relationship is necessary, such as visualizing attention weights for the entire document via a heatmap.
This limitation potentially reduces the interpretability of the model, although it does not affect the model's performance. Hence, exploring ways to incorporate global attention mechanisms into the RAN architecture is a promising research direction to improve its interpretability and expand its range of applications.
## Acknowledgments
Xianming Li, Xiaotian Luo, Xing Lee, and Yingbin Zhao's work has been supported by Ant Group.
Zongxi Li's work has been supported by a grant from Hong Kong Metropolitan University (Project Reference No. CP/2022/02). Haoran Xie's work has been supported by the Direct Grant (DR23B2)
and the Faculty Research Grant (DB23A3) of Lingnan University, Hong Kong. Qing Li's work has been supported by the Hong Kong Research Grants Council through the Collaborative Research Fund
(Project No. C1031-18G). We thank the anonymous reviewers for their careful reading of our manuscript. Their insightful comments and suggestions helped us improve the quality of our manuscript.
## References
Lei Jimmy Ba, Jamie Ryan Kiros, and Geoffrey E.
Hinton. 2016. Layer normalization. *CoRR*,
abs/1607.06450.
David Bamman and Noah A. Smith. 2013. New alignment methods for discriminative book summarization.
CoRR, abs/1305.1319.
Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020.
Longformer: The long-document transformer. *CoRR*, abs/2004.05150.
Sid Black, Stella Biderman, Eric Hallahan, et al. 2022.
Gpt-neox-20b: An open-source autoregressive language model. *CoRR*, abs/2204.06745.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. volume 33, pages 1877–1901.
Ilias Chalkidis, Emmanouil Fergadiotis, Prodromos Malakasiotis, and Ion Androutsopoulos. 2019. Largescale multi-label text classification on EU legislation.
In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 6314–
6322, Florence, Italy. Association for Computational Linguistics.
Krzysztof Marcin Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamás Sarlós, Peter Hawkins, Jared Quincy Davis, Afroz Mohiuddin, Lukasz Kaiser, David Benjamin Belanger, Lucy J. Colwell, and Adrian Weller. 2021.
Rethinking attention with performers. In 9th International Conference on Learning Representations, ICLR 2021.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Jeff Dean, et al. 2022. Palm: Scaling language modeling with pathways. *CoRR*, abs/2204.02311.
Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. *arXiv preprint arXiv:1412.3555*.
Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G. Carbonell, Quoc Viet Le, and Ruslan Salakhutdinov.
2019. Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, pages 2978–2988.
Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171–4186.
Aniket Didolkar, Kshitij Gupta, Anirudh Goyal, Nitesh Bharadwaj Gundavarapu, Alex M Lamb, Nan Rosemary Ke, and Yoshua Bengio. 2022. Temporal latent bottleneck: Synthesis of fast and slow processing mechanisms in sequence learning. *Advances in Neural Information Processing Systems*,
35:10505–10520.
Ming Ding, Chang Zhou, Hongxia Yang, and Jie Tang.
2020. Cogltx: Applying BERT to long texts. In *Advances in Neural Information Processing Systems 33:*
Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020.
Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pretraining for natural language understanding and generation. In *Annual Conference on Neural Information* Processing Systems 2019, pages 13042–13054.
Edouard Grave, Armand Joulin, and Nicolas Usunier.
2017. Improving neural language models with a continuous cache. In 5th International Conference on Learning Representations, ICLR 2017.
Jun He, Liqun Wang, Liu Liu, Jiao Feng, and Hao Wu.
2019. Long document classification from local word glimpses via recurrent attention learning. *IEEE Access*, 7:40707–40718.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, pages 770–
778.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural computation*, 9(8):1735–
1780.
Weizhe Hua, Zihang Dai, Hanxiao Liu, and Quoc Le.
2022. Transformer quality in linear time. In *International Conference on Machine Learning*, pages 9099–9117.
DeLesley Hutchins, Imanol Schlag, Yuhuai Wu, Ethan Dyer, and Behnam Neyshabur. 2022. Block-recurrent transformers. *CoRR*, abs/2203.07852.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick S. H. Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pages 6769–6781. Association for Computational Linguistics.
Johannes Kiesel, Maria Mestre, Rishabh Shukla, Emmanuel Vincent, Payam Adineh, David Corney, Benno Stein, and Martin Potthast. 2019. SemEval2019 task 4: Hyperpartisan news detection. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 829–839, Minneapolis, Minnesota, USA. Association for Computational Linguistics.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut.
2020. ALBERT: A lite BERT for self-supervised learning of language representations. In 8th International Conference on Learning Representations, ICLR 2020.
Ken Lang. 1995. Newsweeder: Learning to filter netnews. In *Proceedings of the Twelfth International* Conference on Machine Learning, pages 331–339.
Morgan Kaufmann.
Abbas Ghaddar Philippe Langlais et al. 2018. Robust lexical features for improved neural network namedentity recognition. In Proceedings of the 27th International Conference on Computational Linguistics, COLING 2018, pages 1896–1907. Association for Computational Linguistics.
Kenton Lee, Ming-Wei Chang, and Kristina Toutanova.
2019. Latent retrieval for weakly supervised open domain question answering. In *Proceedings of the 57th* Conference of the Association for Computational Linguistics, pages 6086–6096.
Peng-Hsuan Li, Tsu-Jui Fu, and Wei-Yun Ma. 2020.
Why attention? Analyze bilstm deficiency and its remedies in the case of NER. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI
2020, pages 8236–8244. AAAI Press.
Xianming Li, Zongxi Li, Haoran Xie, and Qing Li.
2021a. Merging statistical feature via adaptive gate for improved text classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 13288–13296.
Xianming Li, Xiaotian Luo, Chenghao Dong, Daichuan Yang, Beidi Luan, and Zhen He. 2021b. TDEER:
an efficient translating decoding schema for joint extraction of entities and relations. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8055–8064.
Zongxi Li, Xianming Li, Haoran Xie, Fu Lee Wang, Mingming Leng, Qing Li, and Xiaohui Tao. 2023.
A novel dropout mechanism with label extension schema toward text emotion classification. *Information Processing & Management*, 60(2):103173.
Zongxi Li, Haoran Xie, Gary Cheng, and Qing Li. 2021c. Word-level emotion distribution with two schemas for short text emotion classification.
Knowledge-Based Systems, page 107163.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692.
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer sentinel mixture models. In 5th International Conference on Learning Representations, ICLR 2017.
Usman Naseem, Imran Razzak, Katarzyna Musial, and Muhammad Imran. 2020. Transformer based deep intelligent contextual embedding for twitter sentiment analysis. *Future Generation Computer Systems*,
113:58–69.
Raghavendra Pappagari, Piotr Zelasko, Jesús Villalba, Yishay Carmiel, and Najim Dehak. 2019. Hierarchical transformers for long document classification. In IEEE Automatic Speech Recognition and Understanding Workshop, ASRU 2019, pages 838–844. IEEE.
Hyunji Hayley Park, Yogarshi Vyas, and Kashif Shah.
2022. Efficient classification of long documents using transformers. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics, ACL 2022, pages 702–709. Association for Computational Linguistics.
Jeffrey Pennington, Richard Socher, and Christopher D.
Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In *Proceedings of the 2018 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, pages 2227–2237.
Association for Computational Linguistics.
Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language understanding by generative pre-training.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI
blog, 1(8):9.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, pages 140:1–140:67.
Imanol Schlag, Kazuki Irie, and Jürgen Schmidhuber.
2021. Linear transformers are secretly fast weight programmers. In *International Conference on Machine Learning*, pages 9355–9366.
Emma Strubell, Patrick Verga, David Belanger, and Andrew McCallum. 2017. Fast and accurate entity recognition with iterated dilated convolutions. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP
2017, pages 2670–2680. Association for Computational Linguistics.
Jianlin Su, Yu Lu, Shengfeng Pan, Bo Wen, and Yunfeng Liu. 2021. Roformer: Enhanced transformer with rotary position embedding. *CoRR*, abs/2104.09864.
Erik F. Tjong Kim Sang and Fien De Meulder.
2003. Introduction to the CoNLL-2003 shared task:
Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142–
147.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, pages 5998–6008.
Xueqing Wu, Jiacheng Zhang, and Hang Li. 2022. Textto-table: A new way of information extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, pages 2518–
2533. Association for Computational Linguistics.
Zexuan Zhong, Tao Lei, and Danqi Chen. 2022. Training language models with memory augmentation.
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In *The IEEE International Conference on Computer Vision (ICCV)*.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
6
✗ A2. Did you discuss any potential risks of your work?
This paper focuses on fundamental research of long sequence modeling in natural language processing. The proposed model does not contain any component that is potentially malicious or harmful to human beings. The model can only be applied for classification and information retrieval tasks, and in its current form, the model cannot be used for conducting adversarial attacks. The model was tested on language modeling with sentence generation, in which the pretraining corpus are commonly adopted ones for language model pretraining.
The work described does not aim to present any new dataset and hence is free of disseminating any false/biased/unfair/discriminative information. The model and its variants are trained with widely adopted and recognized data for different tasks, thus, to the best of our knowledge, it will not contribute to any fairness issue.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 4.1, 4.3
✓ B1. Did you cite the creators of artifacts you used?
4.1
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Most of the datasets we used are open for research purposes. Rest datasets such as OntoNotesV5.0, we have applied to use in this paper on the corresponding official sites, and we have been granted to use in this paper.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
4.1
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
The datasets we used have been widely adopted in the literature and are considered safe to use.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
4.1
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
4.1
## C ✓ **Did You Run Computational Experiments?** 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
4.2,4.3
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
4.2
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
4.3
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
4.2
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left Blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
gaschi-etal-2023-exploring | Exploring the Relationship between Alignment and Cross-lingual Transfer in Multilingual Transformers | https://aclanthology.org/2023.findings-acl.189 | Without any explicit cross-lingual training data, multilingual language models can achieve cross-lingual transfer. One common way to improve this transfer is to perform realignment steps before fine-tuning, i.e., to train the model to build similar representations for pairs of words from translated sentences. But such realignment methods were found to not always improve results across languages and tasks, which raises the question of whether aligned representations are truly beneficial for cross-lingual transfer. We provide evidence that alignment is actually significantly correlated with cross-lingual transfer across languages, models and random seeds. We show that fine-tuning can have a significant impact on alignment, depending mainly on the downstream task and the model. Finally, we show that realignment can, in some instances, improve cross-lingual transfer, and we identify conditions in which realignment methods provide significant improvements. Namely, we find that realignment works better on tasks for which alignment is correlated with cross-lingual transfer when generalizing to a distant language and with smaller models, as well as when using a bilingual dictionary rather than FastAlign to extract realignment pairs. For example, for POS-tagging, between English and Arabic, realignment can bring a +15.8 accuracy improvement on distilmBERT, even outperforming XLM-R Large by 1.7. We thus advocate for further research on realignment methods for smaller multilingual models as an alternative to scaling. | # Exploring The Relationship Between Alignment And Cross-Lingual Transfer In Multilingual Transformers
Félix Gaschi 1,2, Patricio Cerda 1, Parisa Rastin 2**, Yannick Toussaint** 2 1Posos, 2LORIA
{felix.gaschi,parisa.rastin,yannick.toussaint}@loria.fr [email protected]
## Abstract
Without any explicit cross-lingual training data, multilingual language models can achieve cross-lingual transfer. One common way to improve this transfer is to perform realignment steps before fine-tuning, i.e., to train the model to build similar representations for pairs of words from translated sentences. But such realignment methods were found to not always improve results across languages and tasks, which raises the question of whether aligned representations are truly beneficial for crosslingual transfer. We provide evidence that alignment is actually significantly correlated with cross-lingual transfer across languages, models and random seeds. We show that fine-tuning can have a significant impact on alignment, depending mainly on the downstream task and the model. Finally, we show that realignment can, in some instances, improve cross-lingual transfer, and we identify conditions in which realignment methods provide significant improvements. Namely, we find that realignment works better on tasks for which alignment is correlated with cross-lingual transfer when generalizing to a distant language and with smaller models, as well as when using a bilingual dictionary rather than FastAlign to extract realignment pairs. For example, for POS-tagging, between English and Arabic, realignment can bring a +15.8 accuracy improvement on distilmBERT, even outperforming XLM-R Large by 1.7. We thus advocate for further research on realignment methods for smaller multilingual models as an alternative to scaling.
## 1 Introduction
With the more general aim of improving the understanding of Multilingual Large Language Models
(MLLM), we study the link between the multilingual alignment of their representations and their ability to perform cross-lingual transfer learning, and investigate conditions for realignment methods to improve cross-lingual transfer.
MLLMs, like mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020a), are Transformer
![0_image_0.png](0_image_0.png)
encoders (Vaswani et al., 2017) which show an effective ability to perform Cross-lingual Transfer Learning (CTL). Despite the absence of any explicit cross-lingual training signal, mBERT and XLM-R can be fine-tuned on a specific task in one language and then provide high accuracy when evaluated on another language on the same task
(Pires et al., 2019; Wu and Dredze, 2019). By alleviating the need for training data for a specific task in all languages and for translation data which more than often lacks for non-English languages, CTL with MLLMs could help bridge the gap in NLP between English and other languages.
But the ability of MLLMs to generalize across languages is highly correlated with the similarity between the training language (often English) and the language to which we hope to transfer knowledge (Pires et al., 2019; Wu and Dredze, 2019).
For distant and low-resources languages, CTL with mBERT can give worse results than fine-tuning a Transformer from scratch (Wu and Dredze, 2020a).
Realignment methods (Wu and Dredze, 2020b),
sometimes called adjustment or explicit alignment, aim to improve the cross-lingual properties of an MLLM by trying to make similar words from dif3020 ferent languages have closer representations. Realignment methods typically require a translation dataset and an alignment tool, like FastAlign (Dyer et al., 2013), to extract contextualized pairs of translated words that will be realigned.
Despite some encouraging results on specific tasks, current realignment methods might not consistently improve cross-lingual zero-shot abilities of mBERT and XLM-R (Wu and Dredze, 2020b).
When tested with several seeds on various finetuning tasks, improvements brought by realignment are not always significant and do not compare with the gain brought by scaling the model, e.g., from XLM-R Base to XLM-R Large. However, these realignment methods were not tried on smaller models like distilmBERT as we do here.
The mitigated results of realignment methods raise the question of whether cross-lingual transfer is at all linked with multilingual alignment. If improving alignment does not necessarily improve CTL, then the two might not be correlated. Despite the ability of mBERT and XLM-R to perform CTL,
there lacks consensus on whether they actually hold aligned representations (Gaschi et al., 2022).
We thus investigate the link between alignment and CTL, with three contributions: (1) We find a high correlation between multilingual alignment and cross-lingual transfer for multilingual Transformers, (2) we show that, depending on the downstream task, fine-tuning on English can harm the alignment to different degrees, potentially harming cross-lingual transfer, and (3) we link our findings to realignment methods and identify conditions under which they seem to bring the most significant improvements to zero-shot transfer learning, particularly on smaller models as shown on Fig. 1.
## 2 Related Work
Current realignment methods are applied on a pretrained model before fine-tuning in one language
(typically English). Common tasks are Natural Language Inference (NLI), Named Entity Recognition
(NER), Part-of-speech tagging (POS-tagging) or Question Answering (QA). The model is then expected to generalize better to other languages for the task than without the realignment. Realignment methods rely on pairs of words extracted from translated sentences using a word alignment tool, usually FastAlign (Dyer et al., 2013), but other tools like AWESOME-align (Dou and Neubig, 2021)
could be used. Various realignment objectives are used to bring closer together the contextualized embeddings of words in such pairs: using a linear mapping (Wang et al., 2019), a ℓ2-based loss with regularization to avoid degenerative solutions (Cao et al., 2020; Zhao et al., 2021), or a contrastive loss
(Pan et al., 2021; Wu and Dredze, 2020b).
Existing realignment methods might not significantly improve cross-lingual transfer. Despite improvements on NLI (Cao et al., 2020; Zhao et al.,
2021; Pan et al., 2021) or on dependency parsing (Wang et al., 2019), the results might not hold across tasks and languages. A comparative study by Kulshreshtha et al. (2020) showed that methods based on linear mapping are effective only on "moderately close languages", whereas ℓ2-based loss improves results for "extremely distant languages".
This latter ℓ2-loss was shown to work well on a NLI
task, but not for all languages on a NER task, and to be even detrimental for QA tasks (Efimov et al., 2022). Finally, Wu and Dredze (2020b) compared linear mapping realignment, ℓ2-based realignment and contrastive learning on several tasks, languages and models, performing several runs. They found that existing methods do not bring consistent improvements over no realignment.
Expecting realignment methods to succeed implies a direct link between the multilingual alignment of the representations produced by a model and its ability to perform CTL. However, there isn't any strong consensus on whether multilingual Transformers have well-aligned representations (Gaschi et al., 2022), let alone on whether better-aligned representations lead to better CTL.
Assessing the multilingual alignment of contextualized representations can take many forms. Pairs of words are extracted from translated sentences, usually with FastAlign or a bilingual dictionary
(Gaschi et al., 2022). Then, after building contextualized representations of the words of each pair, the distribution of their similarity can be compared with that of random pairs of words (Cao et al., 2020). But this method can lead to incorrect conclusions (Efimov et al., 2022). A high overlap in the distribution of similarities between related and random pairs means that sometimes random pairs can have higher similarities than related pairs. But since those pairs do not necessarily involve the same words, a high overlap does not mean that any word is closer to an unrelated one than to a related one. An alternative is to compare a related pair to its neighbors (Efimov et al., 2022), which shows that realignment methods indeed improve multilingual alignment and that fine-tuning can harm this alignment. Another similar approach consists in designing a nearest-neighbor search criterion. This was done for sentence-level representations (Pires et al., 2019) and for word-level alignment (Conneau et al., 2020b; Gaschi et al., 2022), showing that MLLMs like mBERT have a multilingual alignment that is competitive with static embeddings
(Bojanowski et al., 2017) explicitly aligned with a supervised alignment method (Joulin et al., 2018).
## 3 Method
To study the link between multilingual alignment and cross-lingual transfer (CTL), we need a way to evaluate alignment and CTL. We use a relative difference to evaluate CTL, we discuss different methods for evaluating alignment, and describe the realignment method used in our experiments.
## 3.1 Evaluating Cross-Lingual Transfer
A model has high CTL abilities when, after finetuning for one language, it can obtain a high evaluation score on other languages. To evaluate it for a given task, we compute the relative difference between the evaluation metric men on the English development set and the evaluation metric mtgt on the target language:
$$\mathrm{cross-lingual~transfer}}={\frac{m_{\mathrm{tgt}}-m_{\mathrm{en}}}{m_{\mathrm{en}}}}$$
(1)
The monolingual metric is a score between 0 and 1, like accuracy or f1-score, where higher is better.
Then our metric gives scores between -1 and +∞.
A negative score is obtained if and only if mtgt <
men, which should always be the case in practice.
Values closer to 0 then indicate better CTL for a specific task and language.
It must be noted that for datasets where the target language test set is a translation of the English one, the normalization in Equation 1 allows the metric to boild down roughly to minus the proportion of correct answers in English that were misclassified when translated, assuming there isn't not to many misclassified English examples that were correctly classified in the target language, which should be the case since there are not that much misclassified English examples in general.
## 3.2 Evaluating Alignment
To evaluate multilingual alignment, we use the same method for extracting pairs of translated words with their context as Gaschi et al. (2022).
Provided a source of related pairs of words from both languages, a fixed number of pairs of words are randomly selected and a nearest-neighbor search with cosine similarity is performed. The top-1 accuracy of the nearest-neighbor search is the score of the alignment evaluation.
To extract contextualized pairs of translated words from a translation dataset, FastAlign is the most widely used word aligner in realignment methods (Wu and Dredze, 2020b; Cao et al., 2020; Zhao et al., 2021; Wang et al., 2019), but it is prone to errors and thus generates noisy training realignment data (Pan et al., 2021; Gaschi et al., 2022).
Following Gaschi et al. (2022), we use a bilingual dictionary to extract matching pairs of words in translated sentences, discarding any ambiguity to obtain the most accurate pairs possible.
It is worth noting that the accuracy of a nearestneighbor search is not symmetric. We use the convention that an A-B alignment means that we look for the translation of each word of language A among its nearest neighbors. Two types of alignment can be evaluated: strong and weak alignment
(Roy et al., 2020). Weak alignment is the expected way to compute alignment: when evaluating A-B
weak alignment, we search a translation for a given word of A only among nearest-neighbors belonging to B. But with such an evaluation, there can be situations with highly measured alignment where representations from both languages are far apart with respect to intra-language similarity. Strong alignment remedies to this by including language A in the search space. With A-B strong alignment, we search a translation for a given word of A among its nearest-neighbors belonging to *both* language B
and A. For a given pair of related words to be considered close enough, the word from language B
must be closer to its translation in A than any other word from B and A. We show in our experiments that strong alignment is more correlated with CTL
than weak alignment.1
## 3.3 Realignment Loss
A realignment task consists in making the representations of related pairs closer to each other. The method used to extract related pairs for alignment evaluation can be used for computing the realignment loss. Following Wu and Dredze (2020b), we 1Although strong alignment can be affected by synonyms, restricting the search space to the words from the sampled extracted pairs reduces the risk of founding a synonym.
![3_image_0.png](3_image_0.png)
![3_image_2.png](3_image_2.png)
minimize a contrastive loss using the framework of Chen et al. (2020), encouraging strong alignment for pairs within a batch. A batch is composed of a set of representations H of all words in a few pairs of translated sentences and a set *P ⊆ H × H*
containing the pairs of translated words extracted with a bilingual dictionary (or a word aligner). The realignment loss can then be written as:
![3_image_4.png](3_image_4.png)
POS NER XNLI
![3_image_1.png](3_image_1.png)
![3_image_3.png](3_image_3.png)
en-train 12,543 20,000 392,703 en-dev 2,002 10,000 2,490 en-test 2,077 10,000 5,010 ar-test 680 10,000 5,010 es-test 426 10,000 5,010 fr-test 416 10,000 5,010 ru-test 601 10,000 5,010 zh-test 500 10,000 5,010
Table 1: Number of examples ing a softmax with a temperature hyper-parameter
(T = 0.1), following Wu and Dredze (2020b),
bringing closer together translated pairs of words with respect to other pairs in the batch.
## 3.4 Experimental Details
We evaluate cross-lingual transfer with three multilingual tasks, the sizes of which are reported in Table 1:
- Part-of-speech tagging (POS-tagging) with the Universal Dependencies dataset (Zeman et al., 2020). Similarly to Wu and Dredze
(2020b), we use the following treebanks:
Arabic-PADT, English-EWT, Spanish-GSD,
French-GSD, Russian-GSD, and ChineseGSD.
- Named Entity Recognition (NER) with the WikiANN dataset (Pan et al., 2017).
- Natural Language Inference (NLI) with the XNLI dataset (Conneau et al., 2018).
It must be noted that XNLI is the only dataset with translated test sets, and thus the only one for
![4_image_0.png](4_image_0.png)
which the cross-lingual transfer metric is strictly comparable across languages. In our experiments, high correlation will nonetheless be observed between CTL and alignment for the two other tasks, suggesting that the CTL metrics is not so much affected by difference in size and domain between the test sets.
Further details about implementation can be found in Appendix B And in the source code2.
## 4 Correlation Between Alignment And Ctl
We measure the correlation between multilingual alignment and cross-lingual transfer (CTL) across models, languages and seeds. We also compare the correlation between alignment before fine-tuning and after fine-tuning with CTL and with different alignment measures.
Spearman's rank correlation is measured between alignment before or after fine-tuning and CTL. The English-target alignment is computed for each target language with the method described in Section 3.2 and is compared with the transfer ability from English to that same target language with the metric described in Section 3.1.
Table 2 shows correlations between CTL and different types of alignment. It is computed separately for each different task (POS, NER, NLI), for the alignment at the last and second to last layer
(last and penult), before and after fine-tuning on the given task, and with weak and strong alignment. Comparing other layers for models of different sizes is less relevant, since the correlation is computed across models with various number of layers. And a model-by-model analysis of the correlation with the alignment in various layers did not reveal contradictory results (cf. Appendix E). Each correlation value is obtained from 100 samples with four different models (distilmBERT,
mBERT, XLM-R Base and Large), five target languages (Arabic, Spanish, French, Russian and Chinese) and five seeds for initialization of the classification head and shuffling of the fine-tuning data. Results show that strong alignment is better correlated to cross-lingual transfer than weak alignment. With the exception of two tasks after finetuning (NER and NLI), strong alignment has a marginally higher correlation with CTL. This is particularly noticeable when looking at alignment before fine-tuning on the last layer, going from a correlation between 0.51 and 0.69 for weak alignment to one ranging from 0.82 to 0.86 for strong alignment.
Tab. 2 also shows that for NLI, the alignment on the penultimate layer seems better correlated to cross-lingual transfer than with the last layer. A
relatively important gap in correlation is measured between the last and the second-before-last layer for all cases except for strong alignment before fine-tuning. The fact that alignment on the penultimate layer would correlate better than the last for NLI can be explained by the sentence-level nature of the task. For sentence classification tasks, the classification head reads only the representation of the first token of the last layer, which is computed from the representations of all the tokens at the previous layer, leading to a pooling of the penultimate layer.
Despite the different values observed, there seems to be no significant difference between correlation for alignment measured before and after fine-tuning, and a careful analysis of confidence interval obtained with bootstrapping (Efron and Tibshirani, 1994) can confirm this (cf. Appendix C
for detailed results).
Fig. 2 shows the relation between CTL and English-target strong alignment measured after fine-tuning measured in four situations to further illustrate the link between alignment and transfer.
Fig. 2b shows one of the cases with higher correlation (0.92). The correlation seems to hold well across models (forms) and languages (colors).
However, for a given model and language, the random seed for fine-tuning seems to be detrimental to the correlation, although at a small scale. Hence, alignment might not be the only factor to affect cross-lingual generalization as the model initialization or the data shuffling seems to play a smaller role.
Fig. 3 shows a case with one of the lowest correlations between strong alignment and CTL (0.70).
It seems that models and initialization seeds have a higher impact on alignment than on CTL. For example, in the case of English-French alignment
(green), CTL is between 0.0 and -0.1, whatever the model and seed, not overlapping with other target-English language pairs, but alignment varies between approximately 0.05 and 0.5, overlapping with all other language pairs. Interestingly, the penultimate layer has a higher accuracy (0.82), suggesting that for NER the last layer is not necessarily the one for which alignment correlates the most with CTL.
For two of the three tested tasks (NER and POStagging), it must be noted that the CTL metric is not strictly comparable across languages since the test sets for each language are of different domains and sizes (cf. Section 3.4). However, for the third task
(NLI), each test set is a translation of the English one, and thus the CTL metric is strictly comparable in that case. This might explain why correlations are higher for the NLI task than the others. Nevertheless, the observed correlation for the two other tasks is still significantly high, which suggests that the general tendency might not be affected by the differences in domains and sizes in the test sets.
## 5 The Impact Of Fine-Tuning On Alignment
To study the link between alignment and crosslingual transfer (CTL), we also look at the impact of fine-tuning over alignment. We've already shown that strong alignment is highly correlated with CTL. However, we weren't able to conclude whether alignment measured before or after finetuning was better correlated to CTL abilities. To understand the difference between both measures, we study in this section the impact of fine-tuning on the alignment of MLLMs representations. We use the same fine-tuning runs as in the previous section (4).
Tab. 3 shows the relative variation in alignment before and after fine-tuning for all tasks and models tested and for three languages for clarity (complete results in Appendix D). The relative difference is built in the same way as the cross-lingual transfer evaluation (Eq. 1). Negative values indicate a drop in alignment. Alignment is measured at the last layer. Fig. 4 and 5 show a breakdown by layer for
| task | model | en-ar | en-es | en-ru |
|-------------|---------|---------|---------|---------|
| distilmBERT | -0.74 | -0.86 | -0.87 | |
| mBERT | -0.90 | -0.86 | -0.95 | |
| XLM-R Base | -0.43 | -0.46 | -0.70 | |
| XLM-R Large | -0.30 | 0.23 | -0.44 | |
| distilmBERT | 0.00 | -0.61 | -0.33 | |
| mBERT | -0.28 | -0.36 | -0.27 | |
| XLM-R Base | 5.88 | 0.22 | 1.32 | |
| XLM-R Large | 16.34 | 2.22 | 3.10 | |
| distilmBERT | 5.49 | 0.30 | 2.28 | |
| mBERT | 5.65 | 0.99 | 1.45 | |
| XLM-R Base | 11.17 | 1.01 | 2.67 | |
| XLM-R Large | 25.36 | 1.78 | 2.99 | |
## A Few Cases.
For certain combinations of models and tasks, fine-tuning is detrimental to multilingual alignment. distilmBERT and mBERT mainly show a decrease in alignment for POS-tagging and NER,
and smaller improvements than other models on NLI. However, POS-tagging is the only of the three tasks which shows dramatic drops where alignment can be reduced by as much as 96%.
The drop in alignment can be explained by catastrophic forgetting. If the model is only trained on a monolingual task, it might not retain information about other languages or about the link between English and other languages.
What is more surprising is the increase in alignment obtained in other cases. XLM-R Base and Large, which are larger models than mBERT and distilmBERT, have a relative increase that can go as high as 25.36 on the NLI task for distant languages. And although these increases are from a small alignment measure, we still observe a large increase for middle layers where the initial alignment is already quite high (cf. Fig. 5).
The alignment of larger models being less harmed by fine-tuning is coherent with the fact that those same larger models have been shown to have better CTL abilities. Fig. 4 shows that more layers seem to mitigate the potentially negative impact of fine-tuning on alignment, as it affects mainly the layers closest to the last one and as the initial alignment measure is globally higher for XLM-R than distilmBERT (before fine-tuning:
≈0.25 against ≈0.008).
Giving a definitive answer as to why different tasks have different impacts on alignment might need further research. But one could already argue
![6_image_0.png](6_image_0.png)
![6_image_1.png](6_image_1.png)
that each task corresponds to different levels of abstraction in NLP. Tasks with a low level of abstraction like POS-tagging might rely on the word form itself and thus on more language-specific components of the representations, which when enhanced, decreases alignment. On the other hand, NLI has a higher level of abstraction, requiring the meaning rather than the word form, which might be encoded in deeper layers (Tenney et al., 2019) which are more aligned.
Fine-tuning MLLMs on a downstream task has an impact on the multilingual alignment of the representations produced by the model. For "smaller" language models, it is systematically detrimental, as well as for certain tasks like POS-tagging.
This might explain why some realignment methods might not work for all models nor all tasks (Wu and Dredze, 2020b).
## 6 Impact Of Realignment On Cross-Lingual Transfer
We have already shown that the correlation between multilingual alignment and cross-lingual transfer
(CTL) is high (Section 4). But we do not know whether they are more directly linked. In this section, we try to identify the conditions under which improving alignment in multilingual models leads to improvement in CTL.
Sequential realignment is the usual way to perform realignment: realignment steps are performed on the pre-trained model before fine-tuning. We propose to compare it with joint alignment, where we optimize simultaneously for the realignment and the downstream task (more details in Appendix A), to try and identify whether alignment before or after fine-tuning is more strongly related to CTL.
In the same settings as the previous experiments (tasks, models and languages, and number of seeds), we fine-tune models in English with different realignment methods and evaluate CTL on different languages. Following a similar setting as
(Wu and Dredze, 2020b), realignment data from the five pairs of languages (English-target) is interleaved to form a single multilingual realignment dataset. Models are fine-tuned on POS-tagging or NER for five epochs and 2 epochs for NLI because its training data is larger. We use the opus100 translation dataset (Zhang et al., 2020) from which we extract pairs of words using bilingual dictionaries. We also tested with the multiUN translation data (Ziemski et al., 2016), which conditioned our choice of languages, and with other ways to extract alignment pairs: FastAlign (Dyer et al., 2013)
POS NER NLI
distilmBERT 73.1 57.4 *64.2*
+ before **78.0** 58.9 64.3
+ joint 77.9 59.6 **65.0**
mBERT 74.3 62.2 *69.7*
+ before 78.2 62.7 68.7
+ joint 78.3 64.8 **70.0**
XLM-R base 78.8 60.4 *74.0*
+ before 79.9 **63.9** 72.7 + joint 79.4 63.0 **74.6**
XLM-R large 79.6 65.0 *80.0*
POS ar es fr ru zh
distilmBERT 51.0 84.1 85.3 81.2 *64.1*
+ before 65.5 86.5 85.8 84.7 **67.4** + joint 66.8 **85.8** 86.5 84.1 66.4
XLM-R base 62.5 86.6 86.9 **86.9** *70.9* + before 67.3 86.9 **87.3** 86.8 **71.2**
+ joint 66.6 86.6 87.2 86.0 70.6
XLM-R large 65.1 87.0 87.5 87.0 *71.5*
and AWESOME-align (Dou and Neubig, 2021).
Changing the translation dataset does not fundamentally change the results, and using probabilistic alignment tools made realignment methods less effective. The results presented in this section were handpicked for the sake of clarity, but the reader can refer to Appendix F.
Condensed results are reported on Tab. 4, averaged on the five languages. A breakdown by languages for the POS-tagging task and two models is shown on Tab. 5. It shows that realignment methods improve performance only on certain tasks, models and language pairs.
Realignment methods, either sequential or joint, provide significant improvement for all models for the POS-tagging task, but less significant ones for NER, and no significant improvement for NLI. The positive impact of realignment on cross-lingual transfer seems to be mirrored by the negative impact of fine-tuning over alignment. Indeed, POStagging is also the task for which fine-tuning is the most detrimental to multilingual alignment, as shown in the previous section.
The same parallel can be drawn for models. distilmBERT is the model that benefits the most from realignment. It is also the one whose alignment suffers the most from fine-tuning. Smaller multilingual models seem to benefit more from realignment, as well as they see their multilingual alignment reduced after fine-tuning. In the same way that fine-tuning mainly affects the deeper layers, it is possible that realignment might affect only those deeper layers. This would mean that most layers would have their alignment significantly improved for small models like distilmBERT (6 layers), while larger models might be only superficially realigned.
Finally, besides tasks and models, it can also be observed that the impact of realignment varies across language pairs (Tab. 5). Although we did not test on many language pairs, results are coherent with the idea that realignment methods tend to work better on distant pairs of languages (Kulshreshtha et al., 2020).
On a side note, our controlled experiment does not allow us to conclude whether it is more important to improve alignment before fine-tuning or after. It seems that alignment measured before and after fine-tuning are equally important to crosslingual transfer.
Realignment methods unsurprisingly provide better results when the alignment is lower, be it before or after fine-tuning. Distant languages and small models have lower alignment, and POS-tagging is a task where alignment decreases after fine-tuning. Realignment helps only up to a certain point where representations are already well aligned, and CTL gives already good results.
For distilmBERT on POS-tagging for transfer from English to Arabic, it provides a +15.8 improvement over baseline, even outperforming XLM-R Large by 1.7 points. In such conditions, realignment is an interesting alternative to scaling for multilingual models.
If realignment succeeds in some favorable conditions, then how can we explain that realignment methods were shown to not be significantly improving CTL on several tasks, including POS-tagging
(Wu and Dredze, 2020b)? Firstly, to the best of our knowledge, realignment was never tried on distilmBERT or other models of equivalent size.
Secondly, Tab. 6 shows that it might be partly due to an element of the realignment methods that was overlooked: the source of related pairs of words.
The way pairs are extracted seems to be crucial to the success of realignment methods. Tab. 6 shows the effect of different types of pairs extraction in realignment methods. Realignment methods using pairs extracted with FastAlign or
| POS | NER | |
|--------------------|-------|------|
| XLM-R base | 78.8 | 60.4 |
| + before fastalign | 78.6 | 61.4 |
| + before awesome | 78.6 | 62.0 |
| + before dico | 79.9 | 63.9 |
| + joint fastalign | 78.0 | 62.1 |
| + joint awesome | 77.8 | 62.3 |
| + joint dico | 79.4 | 63.0 |
AWESOME-align do not provide significant improvements over the baseline, whereas using a bilingual dictionary does. Using a bilingual dictionary might be more accurate for extracting translated pairs (Gaschi et al., 2022). Another explanation could be that the type of words contained in a dictionary might help since it might contain more lexical words holding meaning and fewer grammatical words.
## 7 Conclusion
We have shown that multilingual alignment, measured using a nearest-neighbor search among translated pairs of contextualized words, is highly correlated with the cross-lingual transfer abilities of multilingual models (or at least multilingual Transformers). Strong alignment was also revealed to be better correlated to cross-lingual transfer than weak alignment.
Then we investigated the impact of fine-tuning
(necessary for cross-lingual transfer) on alignment as well as the impact of realignment methods on cross-lingual transfer. Fine-tuning was revealed to have a very different impact on alignment depending on the downstream task and the model.
Where lower-level tasks seemed to have the most impact and smaller models seemed to be the most affected. Conversely, realignment methods were shown to work better on those same tasks and models. Ultimately, realignment works unsurprisingly better when the baseline alignment (before or after fine-tuning) is lower.
We also showed that using a bilingual dictionary for extracting pairs for realignment methods improves over the commonly used FastAlign and over a more precise neural aligner (AWESOME-align).
It's worth noting that realignment works particularly well for a small model like distilmBERT
(66M parameters), allowing it in some cases to obtain competitive results with XLM-R Large (354M
parameters). This advocates for further research on realignment for small Transformers to build more compute-efficient multilingual models.
Finally, further research is needed to investigate additional questions, like whether cross-lingual transfer is more directly linked to alignment before or after fine-tuning, or to alignment at certain layers for certain tasks. To answer these questions, more large-scale experiments could be performed on more tasks and especially on more languages to obtain correlation values with smaller confidence intervals.
## 8 Limitations
We worked with only five language pairs, all involving English and another language: Arabic, Spanish, French, Russian and Chinese. This is due to using the multiUN dataset (Ziemski et al., 2016) for evaluating alignment and performing realignment. We also used the opus100 dataset (Zhang et al., 2020),
which contains more pairs and is the dataset that eventually figured in our paper, but we stuck to the same language pairs for a fair comparison with multiUN in Appendix F. This narrow choice of language limits our ability to understand why realignment methods work well for some languages and not others. And we believe that making a similar analysis with many language pairs, not necessarily involving English, would be a good lead for further research investigating the link between the success of the realignment method and how two languages relate to each other.
We chose a strong alignment objective with contrastive learning for our realignment task. Several other objectives could have been tried, like learning an orthogonal mapping between representations
(Wang et al., 2019) or simply using a ℓ2-loss to collapse representations together (Cao et al., 2020),
but both methods require an extra regularization step (Wu and Dredze, 2020b) since they do not leverage any negative samples. For the sake of simplicity, we focused on a contrastive loss, as trying different methods would have led to an explosion in the number of runs for the controlled experiment.
This also explains why we used the same hyperparameters and pre-processing steps of Wu and Dredze (2020b). A more thorough search for the optimal parameters, and realignment loss, might lead to better results.
## 9 Acknowledgements
We would like to thank the anonymous reviewers for their comments, as well as Shijie Wu, who kindly explained some details in the implementation of his paper Wu and Dredze (2020b). We are also grateful for discussion and proof-reading brought by our colleagues at Posos: François Plesse, Xavier Fontaine and Baptiste Charnier.
Experiments presented in this paper were carried out using the Grid'5000 testbed, supported by a scientific interest group hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations3.
## References
Steven Bird, Ewan Klein, and Edward Loper. 2009. *Natural Language Processing with Python*, 1st edition.
O'Reilly Media, Inc.
Anthony J. Bishara and James B. Hittner. 2017. Confidence intervals for correlations when data are not normal. *Behavior Research Methods*, 49(1):294–309.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. *Transactions of the Association for Computational Linguistics*, 5:135–146.
Steven Cao, Nikita Kitaev, and Dan Klein. 2020. Multilingual alignment of contextual word representations.
In *International Conference on Learning Representations*.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. 2020. A simple framework for contrastive learning of visual representations. *CoRR*, abs/2002.05709.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020a. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440–
8451, Online. Association for Computational Linguistics.
Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating crosslingual sentence representations. In *Proceedings of* the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475–2485, Brussels, Belgium. Association for Computational Linguistics.
Alexis Conneau, Shijie Wu, Haoran Li, Luke Zettlemoyer, and Veselin Stoyanov. 2020b. Emerging cross-lingual structure in pretrained language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6022–6034, Online. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Zi-Yi Dou and Graham Neubig. 2021. Word alignment by fine-tuning embeddings on parallel corpora. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2112–2128, Online.
Association for Computational Linguistics.
Yunshu Du, Wojciech M. Czarnecki, Siddhant M.
Jayakumar, Mehrdad Farajtabar, Razvan Pascanu, and Balaji Lakshminarayanan. 2018. Adapting auxiliary losses using gradient similarity.
Chris Dyer, Victor Chahuneau, and Noah A. Smith.
2013. A simple, fast, and effective reparameterization of IBM model 2. In *Proceedings of the 2013* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644–648, Atlanta, Georgia. Association for Computational Linguistics.
Pavel Efimov, Leonid Boytsov, Elena Arslanova, and Pavel Braslavski. 2022. The impact of cross-lingual adjustment of contextual word representations on zero-shot transfer.
Bradley Efron and Robert Tibshirani. 1994. An introduction to the bootstrap.
Félix Gaschi, François Plesse, Parisa Rastin, and Yannick Toussaint. 2022. Multilingual transformer encoders: a word-level task-agnostic evaluation.
Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, Robert Kern, Matti Picus, Stephan Hoyer, Marten H. van Kerkwijk, Matthew Brett, Allan Haldane, Jaime Fernández del Río, Mark Wiebe, Pearu Peterson, Pierre Gérard-Marchant, Kevin Sheppard, Tyler Reddy, Warren Weckesser, Hameer Abbasi, Christoph Gohlke, and Travis E.
Oliphant. 2020. Array programming with NumPy.
Nature, 585(7825):357–362.
Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Hervé Jégou, and Edouard Grave. 2018. Loss in translation: Learning bilingual word mapping with a 3see https://www.grid5000.fr
retrieval criterion. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language* Processing, pages 2979–2984, Brussels, Belgium.
Association for Computational Linguistics.
Saurabh Kulshreshtha, Jose Luis Redondo Garcia, and Ching-Yun Chang. 2020. Cross-lingual alignment methods for multilingual BERT: A comparative study. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 933–942, Online. Association for Computational Linguistics.
Guillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018.
Word translation without parallel data. In *International Conference on Learning Representations*.
Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil, Julien Chaumond, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario Šaško, Gunjan Chhablani, Bhavitvya Malik, Simon Brandeis, Teven Le Scao, Victor Sanh, Canwen Xu, Nicolas Patry, Angelina McMillan-Major, Philipp Schmid, Sylvain Gugger, Clément Delangue, Théo Matussière, Lysandre Debut, Stas Bekman, Pierric Cistac, Thibault Goehringer, Victor Mustar, François Lagunas, Alexander Rush, and Thomas Wolf. 2021. Datasets: A community library for natural language processing. In *Proceedings of the 2021 Conference* on Empirical Methods in Natural Language Processing: System Demonstrations, pages 175–184, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Lukas Liebel and Marco Körner. 2018. Auxiliary tasks in multi-task learning. *CoRR*, abs/1805.06334.
Fenglin Liu, Meng Gao, Yuanxin Liu, and Kai Lei. 2019.
Self-adaptive scaling for learnable residual structure.
In *Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)*, pages 862–870, Hong Kong, China. Association for Computational Linguistics.
Hiroki Nakayama. 2018. seqeval: A python framework for sequence labeling evaluation. Software available from https://github.com/chakki-works/seqeval.
Lin Pan, Chung-Wei Hang, Haode Qi, Abhishek Shah, Saloni Potdar, and Mo Yu. 2021. Multilingual BERT
post-pretraining alignment. In *Proceedings of the* 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 210–219, Online.
Association for Computational Linguistics.
Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017. Cross-lingual name tagging and linking for 282 languages. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1946–1958, Vancouver, Canada. Association for Computational Linguistics.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch:
An imperative style, high-performance deep learning library. In *Advances in Neural Information Processing Systems 32*, pages 8024–8035. Curran Associates, Inc.
Telmo Pires, Eva Schlinger, and Dan Garrette. 2019.
How multilingual is multilingual BERT? In *Proceedings of the 57th Annual Meeting of the Association for* Computational Linguistics, pages 4996–5001, Florence, Italy. Association for Computational Linguistics.
Uma Roy, Noah Constant, Rami Al-Rfou, Aditya Barua, Aaron Phillips, and Yinfei Yang. 2020. LAReQA:
Language-agnostic answer retrieval from a multilingual pool. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing
(EMNLP), pages 5919–5930, Online. Association for Computational Linguistics.
John Ruscio. 2008. Constructing confidence intervals for spearman's rank correlation with ordinal data:
A simulation study comparing analytic and bootstrap methods. *Journal of Modern Applied Statistical* Methods, 7:7.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. *ArXiv*,
abs/1910.01108.
Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019.
BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593–
4601, Florence, Italy. Association for Computational Linguistics.
Huihsin Tseng, Pichuan Chang, Galen Andrew, Daniel Jurafsky, and Christopher Manning. 2005. A conditional random field word segmenter for sighan bakeoff 2005. In Proceedings of the Fourth SIGHAN
Workshop on Chinese Language Processing.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *CoRR*, abs/1706.03762.
Yuxuan Wang, Wanxiang Che, Jiang Guo, Yijia Liu, and Ting Liu. 2019. Cross-lingual BERT transformation for zero-shot dependency parsing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 5721–5727, Hong Kong, China. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas:
The surprising cross-lingual effectiveness of BERT.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833–844, Hong Kong, China. Association for Computational Linguistics.
Shijie Wu and Mark Dredze. 2020a. Are all languages created equal in multilingual BERT? In *Proceedings* of the 5th Workshop on Representation Learning for NLP, pages 120–130, Online. Association for Computational Linguistics.
Shijie Wu and Mark Dredze. 2020b. Do explicit alignments robustly improve multilingual encoders? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 4471–4482, Online. Association for Computational Linguistics.
Daniel Zeman, Joakim Nivre, Mitchell Abrams, Elia Ackermann, Noëmi Aepli, Željko Agic, Lars Ahren- ´
berg, Chika Kennedy Ajede, Gabriele Aleksan- ˙
draviciˇ ut¯ e, Lene Antonsen, Katya Aplonova, An- ˙
gelina Aquino, Maria Jesus Aranzabe, Gashaw Arutie, Masayuki Asahara, Luma Ateyah, Furkan Atmaca, Mohammed Attia, Aitziber Atutxa, Liesbeth Augustinus, Elena Badmaeva, Miguel Ballesteros, Esha Banerjee, Sebastian Bank, Verginica Barbu Mititelu, Victoria Basmov, Colin Batchelor, John Bauer, Kepa Bengoetxea, Yevgeni Berzak, Irshad Ahmad Bhat, Riyaz Ahmad Bhat, Erica Biagetti, Eckhard Bick, Agne Bielinskien ˙ e, Ro- ˙
gier Blokland, Victoria Bobicev, Loïc Boizou, Emanuel Borges Völker, Carl Börstell, Cristina Bosco, Gosse Bouma, Sam Bowman, Adriane Boyd, Kristina Brokaite, Aljoscha Burchardt, Marie Can- ˙
dito, Bernard Caron, Gauthier Caron, Tatiana Cavalcanti, Gül¸sen Cebiroglu Eryi ˘ git, Flavio Massimil- ˘
iano Cecchini, Giuseppe G. A. Celano, Slavomír Cé- ˇ
plö, Savas Cetin, Fabricio Chalub, Ethan Chi, Jinho Choi, Yongseok Cho, Jayeol Chun, Alessandra T.
Cignarella, Silvie Cinková, Aurélie Collomb, Çagrı ˘
Çöltekin, Miriam Connor, Marine Courtin, Elizabeth Davidson, Marie-Catherine de Marneffe, Valeria de Paiva, Elvis de Souza, Arantza Diaz de Ilarraza, Carly Dickerson, Bamba Dione, Peter Dirix, Kaja Dobrovoljc, Timothy Dozat, Kira Droganova, Puneet Dwivedi, Hanne Eckhoff, Marhaba Eli, Ali Elkahky, Binyam Ephrem, Olga Erina, Tomaž Erjavec, Aline Etienne, Wograine Evelyn, Richárd Farkas, Hector Fernandez Alcalde, Jennifer Foster, Cláudia Freitas, Kazunori Fujita, Katarína Gajdošová, Daniel Galbraith, Marcos Garcia, Moa Gärdenfors, Sebastian Garza, Kim Gerdes, Filip Ginter, Iakes Goenaga, Koldo Gojenola, Memduh Gökırmak, Yoav Goldberg, Xavier Gómez Guinovart, Berta González Saavedra, Bernadeta Griciut¯ e,˙
Matias Grioni, Loïc Grobol, Normunds Gruz¯ ¯ıtis, Bruno Guillaume, Céline Guillot-Barbance, Tunga Güngör, Nizar Habash, Jan Hajic, Jan Haji ˇ c jr., Mika ˇ
Hämäläinen, Linh Hà My, Na-Rae Han, Kim Har- ˜ ris, Dag Haug, Johannes Heinecke, Oliver Hellwig, Felix Hennig, Barbora Hladká, Jaroslava Hlavácová, ˇ
Florinel Hociung, Petter Hohle, Jena Hwang, Takumi Ikeda, Radu Ion, Elena Irimia, O.lájídé Ishola, Tomáš Jelínek, Anders Johannsen, Hildur Jónsdóttir, Fredrik Jørgensen, Markus Juutinen, Hüner Ka¸sıkara, Andre Kaasen, Nadezhda Kabaeva, Sylvain Kahane, Hiroshi Kanayama, Jenna Kanerva, Boris Katz, Tolga Kayadelen, Jessica Kenney, Václava Kettnerová, Jesse Kirchner, Elena Klementieva, Arne Köhn, Abdullatif Köksal, Kamil Kopacewicz, Timo Korkiakangas, Natalia Kotsyba, Jolanta Kovalevskaite, Si- ˙
mon Krek, Sookyoung Kwak, Veronika Laippala, Lorenzo Lambertino, Lucia Lam, Tatiana Lando, Septina Dian Larasati, Alexei Lavrentiev, John Lee, Phuong Lê Hông, Alessandro Lenci, Saran Lert- `
pradit, Herman Leung, Maria Levina, Cheuk Ying Li, Josie Li, Keying Li, KyungTae Lim, Yuan Li, Nikola Ljubešic, Olga Loginova, Olga Lyashevskaya, ´
Teresa Lynn, Vivien Macketanz, Aibek Makazhanov, Michael Mandl, Christopher Manning, Ruli Manurung, Cat˘ alina M ˘ ar˘ anduc, David Mare ˘ cek, Katrin ˇ Marheinecke, Héctor Martínez Alonso, André Martins, Jan Mašek, Hiroshi Matsuda, Yuji Matsumoto, Ryan McDonald, Sarah McGuinness, Gustavo Mendonça, Niko Miekka, Margarita Misirpashayeva, Anna Missilä, Cat˘ alin Mititelu, Maria Mitrofan, ˘
Yusuke Miyao, Simonetta Montemagni, Amir More, Laura Moreno Romero, Keiko Sophie Mori, Tomohiko Morioka, Shinsuke Mori, Shigeki Moro, Bjartur Mortensen, Bohdan Moskalevskyi, Kadri Muischnek, Robert Munro, Yugo Murawaki, Kaili Müürisep, Pinkey Nainwani, Juan Ignacio Navarro Horñiacek, Anna Nedoluzhko, Gunta Nešpore-Berzkalne, ¯
Luong Nguy˜ên Thi
., Huy`ên Nguy˜ên Thi
. Minh, Yoshihiro Nikaido, Vitaly Nikolaev, Rattima Nitisaroj, Hanna Nurmi, Stina Ojala, Atul Kr. Ojha, Adédayo. Olúòkun, Mai Omura, Emeka Onwuegbuzia, Petya Osenova, Robert Östling, Lilja Øvrelid, ¸Saziye Betül Özate¸s, Arzucan Özgür, Balkız Öztürk Ba¸saran, Niko Partanen, Elena Pascual, Marco Passarotti, Agnieszka Patejuk, Guilherme Paulino-Passos, Angelika Peljak-Łapinska, Siyao ´
Peng, Cenel-Augusto Perez, Guy Perrier, Daria Petrova, Slav Petrov, Jason Phelan, Jussi Piitulainen, Tommi A Pirinen, Emily Pitler, Barbara Plank, Thierry Poibeau, Larisa Ponomareva, Martin Popel, Lauma Pretkalnin, a, Sophie Prévost, Prokopis Prokopidis, Adam Przepiórkowski, Tiina Puolakainen, Sampo Pyysalo, Peng Qi, Andriela Rääbis, Alexandre Rademaker, Loganathan Ramasamy, Taraka Rama, Carlos Ramisch, Vinit Ravishankar, Livy Real, Petru Rebeja, Siva Reddy, Georg Rehm, Ivan Riabov, Michael Rießler, Erika Rimkute, Larissa Rinaldi, ˙
Laura Rituma, Luisa Rocha, Mykhailo Romanenko, Rudolf Rosa, Valentin Ros, ca, Davide Rovati, Olga Rudina, Jack Rueter, Shoval Sadde, Benoît Sagot, Shadi Saleh, Alessio Salomoni, Tanja Samardžic,´
Stephanie Samson, Manuela Sanguinetti, Dage Särg, Baiba Saul¯ıte, Yanin Sawanakunanon, Salvatore Scarlata, Nathan Schneider, Sebastian Schuster, Djamé Seddah, Wolfgang Seeker, Mojgan Seraji, Mo Shen, Atsuko Shimada, Hiroyuki Shirasu, Muh Shohibussirri, Dmitry Sichinava, Aline Silveira, Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simkó, Mária Šimková, Kiril Simov, Maria Skachedubova, Aaron Smith, Isabela Soares-Bastos, Carolyn Spadine, Antonio Stella, Milan Straka, Jana Strnadová, Alane Suhr, Umut Sulubacak, Shingo Suzuki, Zsolt Szántó, Dima Taji, Yuta Takahashi, Fabio Tamburini, Takaaki Tanaka, Samson Tella, Isabelle Tellier, Guillaume Thomas, Liisi Torga, Marsida Toska, Trond Trosterud, Anna Trukhina, Reut Tsarfaty, Utku Türk, Francis Tyers, Sumire Uematsu, Roman Untilov, Zdenka Urešová, Larraitz Uria, Hans Uszko- ˇ
reit, Andrius Utka, Sowmya Vajjala, Daniel van Niekerk, Gertjan van Noord, Viktor Varga, Eric Villemonte de la Clergerie, Veronika Vincze, Aya Wakasa, Lars Wallin, Abigail Walsh, Jing Xian Wang, Jonathan North Washington, Maximilan Wendt, Paul Widmer, Seyi Williams, Mats Wirén, Christian Wittern, Tsegay Woldemariam, Tak-sum Wong, Alina Wróblewska, Mary Yako, Kayo Yamashita, Naoki Yamazaki, Chunxiao Yan, Koichi Yasuoka, Marat M.
Yavrumyan, Zhuoran Yu, Zdenek Žabokrtský, Amir ˇ
Zeldes, Hanzhi Zhu, and Anna Zhuravleva. 2020.
Universal dependencies 2.6. LINDAT/CLARIAHCZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University.
Biao Zhang, Philip Williams, Ivan Titov, and Rico Sennrich. 2020. Improving massively multilingual neural machine translation and zero-shot translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1628–
1639, Online. Association for Computational Linguistics.
Sheng Zhang, Kevin Duh, and Benjamin Van Durme.
2018. Fine-grained entity typing through increased discourse context and adaptive classification thresholds. In *Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics*, pages 173–179, New Orleans, Louisiana. Association for Computational Linguistics.
Wei Zhao, Steffen Eger, Johannes Bjerva, and Isabelle Augenstein. 2021. Inducing language-agnostic multilingual representations. In *Proceedings of *SEM*
2021: The Tenth Joint Conference on Lexical and Computational Semantics, pages 229–240, Online.
Association for Computational Linguistics.
Michał Ziemski, Marcin Junczys-Dowmunt, and Bruno Pouliquen. 2016. The United Nations parallel corpus v1.0. In *Proceedings of the Tenth International* Conference on Language Resources and Evaluation
(LREC'16), pages 3530–3534, Portorož, Slovenia.
European Language Resources Association (ELRA).
## A Joint Realignment
Existing realignment methods proceed in a sequential manner. The pre-trained model is first optimized for the realignment loss, before any finetuning. This assumes that the alignment before fine-tuning is positively linked to the cross-lingual transfer abilities of the model and that improving alignment before fine-tuning will improve transfer.
However, fine-tuning itself might have an impact on alignment (Efimov et al., 2022).
To compare the importance of alignment before and after fine-tuning for CTL, we introduce a new realignment method where realignment and finetuning are performed jointly. We optimize simultaneously for a realignment loss and the fine-tuning loss. In practice, for each optimization step, we compute the loss Ltask for a batch of the fine-tuning task and the loss Lrealign for a batch of the alignment data. The total loss for each backward pass is then written as:
L = Ltask + Lrealign (3)
This joint realignment can be framed as multi-task learning. The fine-tuning task would be the main task and the realignment task an auxiliary one.
There are more elaborate methods for training a model with an auxiliary task (Liebel and Körner, 2018; Du et al., 2018; Zhang et al., 2018; Liu et al., 2019) but our aim is to propose the simplest method possible to compare joint and sequential alignment in a controlled setting.
## B Experimental Details B.1 Scientific Artifacts Used
We relied on the following scientific Python packages for our experiments: the HuggingFace's libraries transformers (Wolf et al., 2020),
datasets (Lhoest et al., 2021) and evaluate4, PyTorch (Paszke et al., 2019), NLTK (Bird et al.,
2009) and its implementation of the Stanford Chinese Segmenter (Tseng et al., 2005), seqeval
(Nakayama, 2018) for evaluating NER, NumPy (Harris et al., 2020), and AWESOME-align (Dou and Neubig, 2021), FastAlign (Dyer et al., 2013), and MUSE dictionaries (Lample et al., 2018) for extracting alignment pairs.
We used the following datasets: two translation datasets for building evaluating alignment and realigning, multiUN (Ziemski et al., 2016) and 4https://huggingface.co/docs/evaluate/index
| model | # parameters |
|-------------|----------------|
| distilmBERT | 66M |
| mBERT | 110M |
| XLM-R Base | 125M |
| XLM-R Large | 345M |
Table 7: Number of parameters opus100 (Zhang et al., 2020), XNLI (Conneau et al., 2018), the Universal Dependencies dataset
(Zeman et al., 2020) for POS-tagging, and the WikiANN dataset for NER (Pan et al., 2017).
Finally, we worked with four different models:
distilmBERT, which was released with distilBERT
(Sanh et al., 2019), mBERT, which was released with BERT (Devlin et al., 2019) and XLM-R Base and Large (Conneau et al., 2020a).
## B.2 Multilingual Alignment Data
From a translation dataset, pairs were extracted either using a bilingual dictionary, following
(Gaschi et al., 2022), with FastAlign (Dyer et al., 2013) or AWESOME-align (Dou and Neubig, 2021). For FastAlign, we produced alignments in both direction and symmetrize with the grow-diag-final-and heuristic provided with FastAlign, following the setting of Wu and Dredze
(2020b). For all methods of extraction, we kept only one-to-one alignment and discard trivial cases where both words are identical, again following Wu and Dredze (2020b).
## B.3 Experimental Setup We Performed Two Experiments:
1. Fine-tuning all models on all tasks for 5 epochs and measuring alignment before and after fine-tuning. This experiment provided the results for Section 4 and 5.
2. Performing different realignment methods before fine-tuning for 5 epochs (or 2 for XNLI),
providing results for section 6.
For both experiments, we reused the experimental setup from Wu and Dredze (2020b). Fine-tuning on a downstream task is done with Adam, with a learning rate of 2e-5 with a linear decay and warmup for 10% of the steps. Fine-tuning is performed on 5 epochs, and 32 batch size, except for XNLI in the second experiment, where we trained for 2 epochs, which still leads to more fine-tuning steps than any of the two other tasks (cf. Table 1).
For the realignment methods, still following Wu and Dredze (2020b), we train in a multilingual fashion, where each batch contains examples from all target languages. However, we use the same learning rate and schedule as for fine-tuning for a fair comparison between joint and sequential realignment, since the same optimizer is used for fine-tuning and realignment when performing joint realignment. We use a maximum length of 96 like Wu and Dredze (2020b) but a batch size of 16 instead of 128 because of limited computing resources.
## B.4 Discussion On The Number Of Realignment Samples
It is worth noting that our method uses fewer realignment samples. Since we alternate batches of 16 realignment samples and batches of 32 finetuning samples for joint realignment, this fixes the number of realignment samples we will use for a specific downstream task, for a fair comparison.
This gives 31,358 sentence pairs for POS-tagging, 50,000 for NER, and 392,703 for NLI. For comparison, Wu and Dredze (2020b) used 100k steps of batches of size 128. The number of realignment samples used could have been a factor explaining why realignment works well for POS-tagging and less for NER and NLI, and why Wu and Dredze
(2020b) do not find that realignment methods improve results significantly on any task. It could be argued that training on too many realignment samples might hurt performances. However, when testing on the POS-tagging task, we found that the number of realignment samples did not have significant impact on performances.
## B.5 Computational Budget
The first experiment was performed on Nvidia A40 GPUs for an equivalent of 3 days for a single GPU
(including all models, tasks and seeds). For the second experiment, training (fine-tuning and/or realignment) was performed on various smaller GPUs (RTX 2080 Ti, GTX 1080 Ti, Tesla T4)
for distilmBERT, mBERT and XLM-R Base, and on a Nvidia A40 for XLM-R Large. The experiment took more than 10 GPU-days on the smaller GPUs, combining all models, realignment methods (including baseline), random seeds, translation datasets and pairs extraction methods. For XLM-R
Large, for which we only trained the baseline, it still required 30 GPU-hours on Nvidia A40.
## C Confidence Intervals For Correlation
In Section 4 we compared correlation for different tasks, before and after fine-tuning, for Englishtarget and target-English alignment and for the last and penultimate layer. These correlations where computed across several models, languages and seeds. From this correlation statistics, we have drawn three conclusions:
1. Strong alignment is better correlated with cross-lingual transfer than weak alignment.
2. The NLI task, because of its sentence-level nature, have a cross-lingual transfer that correlates better with the penultimate layer than the last one.
3. The results do not significantly attribute higher correlation of cross-lingual transfer with alignment before or after fine-tuning neither with English-target compared to target-English alignment.
We verify here that these conclusions hold when looking at the confidence intervals (Tab. 8 and Tab. 9). Confidence intervals are obtained using the Bias-Corrected and Accelerated (BCA) bootstrap method, where several subsets (2000) subsets of our 100 points for each measure of the correlation coefficient are sampled to obtain an empirical distribution of the correlation from which the confidence interval can be deduced (Efron and Tibshirani, 1994). Since we are dealing with ordinal data
(the rank in Spearman's rank correlation), bootstrap confidence intervals are expected to have better properties than methods based on assumptions about the distribution (Ruscio, 2008; Bishara and Hittner, 2017)
Is strong alignment significantly better correlated with cross-lingual transfer than weak alignment? comparing both tables cell-by-cell reveals that confidence intervals for the last layer before fine-tuning hardly never overlap, and when they do it's with a small overlap. So in the case of alignment of the last layer before fine-tuning, strong alignment is significantly better correlated with cross-lingual transfer than weak alignment. For other situations, confidence interval overlap. But the fact that strong alignment has almost systematically a higher correlation makes our correlation still relevant.
Does the penultimate layer correlate better than the last one for NLI? For this task, we observe
| task | layer | en-X | X-en | | |
|---------|--------------------|--------------------|--------------------|--------------------|--------------------|
| before | after | before | after | | |
| POS | last | 0.58 (0.43 - 0.70) | 0.84 (0.77 - 0.89) | 0.63 (0.48 - 0.74) | 0.83 (0.74 - 0.89) |
| penult. | 0.78 (0.68 - 0.85) | 0.84 (0.76 - 0.89) | 0.80 (0.71 - 0.87) | 0.85 (0.79 - 0.90) | |
| NER | last | 0.69 (0.55 - 0.79) | 0.72 (0.59 - 0.81) | 0.75 (0.64 - 0.83) | 0.84 (0.73 - 0.89) |
| penult. | 0.82 (0.73 - 0.88) | 0.71 (0.58 - 0.81) | 0.88 (0.83 - 0.92) | 0.72 (0.58 - 0.82) | |
| NLI | last | 0.51 (0.32 - 0.67) | 0.75 (0.61 - 0.85) | 0.54 (0.36 - 0.68) | 0.73 (0.59 - 0.83) |
| penult. | 0.74 (0.59 - 0.84) | 0.95 (0.90 - 0.97) | 0.79 (0.66 - 0.87) | 0.94 (0.90 - 0.97) | |
Table 8: 95% confidence interval for Spearman rank correlation between weak alignment and CTL, obtained with BCA bootstraping with 2000 resamples.
| task | layer | en-X | X-en | | |
|---------|--------------------|--------------------|--------------------|--------------------|--------------------|
| before | after | before | after | | |
| POS | last | 0.80 (0.73 - 0.86) | 0.85 (0.81 - 0.88) | 0.83 (0.77 - 0.87) | 0.87 (0.83 - 0.91) |
| penult. | 0.86 (0.79 - 0.89) | 0.85 (0.79 - 0.89) | 0.87 (0.82 - 0.91) | 0.86 (0.82 - 0.90) | |
| NER | last | 0.85 (0.78 - 0.90) | 0.66 (0.53 - 0.77) | 0.86 (0.82 - 0.90) | 0.74 (0.65 - 0.82) |
| penult. | 0.87 (0.83 - 0.91) | 0.75 (0.65 - 0.84) | 0.88 (0.84 - 0.92) | 0.76 (0.66 - 0.84) | |
| NLI | last | 0.74 (0.63 - 0.82) | 0.81 (0.72 - 0.87) | 0.89 (0.83 - 0.93) | 0.84 (0.77 - 0.90) |
| penult. | 0.84 (0.79 - 0.88) | 0.92 (0.87 - 0.95) | 0.90 (0.86 - 0.93) | 0.94 (0.90 - 0.96) | |
that the confidence intervals of the penultimate and last layer do not overlap when the alignment is measured after fine-tuning. Otherwise, before finetuning, we can still observe that the measured correlation for the penultimate layer is systematically above the confidence interval for the last layer, except for target-English strong alignment.
We can see that confidence intervals overlap too much when comparing before and after fine-tuning, except in two cases. When looking at POS-tagging for the last layer, weak alignment after fine-tuning gives a significantly better correlation than before, but this does not translate to strong alignment which correlates better with cross-lingual transfer overall. The same observation can be made about NLI for the penultimate layer. On the other hand, for the NER task, strong alignment after-fine tuning gives a significantly worse correlation than before.
It is thus difficult to conclude on whether alignment before or after fine-tuning is better correlated to cross-lingual transfer.
Finally, comparing target-English and Englishtarget alignment does not give significant results.
If all other parameters are kept identical, every situation leads to an overlap between confidence intervals except for the last layer before fine-tuning for NLI, which might just be fortuitous since it's the second before last layer that correlates better
## With Cross-Lingual Transfer For This Task. D Detailed Results For Alignment Drop
Tab. 10 contains the detailed results when measuring the relative drop in strong alignment after fine-tuning. This is a detailed version of Tab. 3 in Section 5, with standard deviation measured over 5 different seeds for model initialization and data shuffling for fine-tuning, and all tested languages.
This confirms that the observed increases and decreases in alignment are significant. It also seems to show that alignment for distant languages (en-ar, en-zh) is more affected by fine-tuning than other pairs.
## E Breaking Down Correlation By Models And Layers
Tab. 12 shows a breakdown of the correlation between strong alignment and CTL across layers and models. These results tend to show that smaller models (distilmBERT and mBERT) have a better correlation at the last layer than larger models. It is also interesting to note that several correlation values are identical for alignment before fine-tuning, this might be explained by the fact that the seed of fine-tuning has unsurprisingly no effect on alignment measured before fine-tuning and by the possibility that alignment measured at one layer might be almost perfectly correlated with alignment at another, especially when the correlation is measured across few languages.
| before | after | |
|----------|--------------------|--------------------|
| last | 0.89 (0.64 - 0.96) | 0.83 (0.68 - 0.94) |
| -1 | 0.79 (0.63 - 0.89) | 0.79 (0.64 - 0.88) |
| -2 | 0.79 (0.65 - 0.89) | 0.78 (0.65 - 0.89) |
| -3 | 0.79 (0.64 - 0.89) | 0.82 (0.69 - 0.91) |
| -4 | 0.79 (0.65 - 0.89) | 0.76 (0.62 - 0.86) |
| -5 | 0.79 (0.64 - 0.89) | 0.79 (0.64 - 0.91) |
| -6 | 0.79 (0.66 - 0.90) | 0.77 (0.62 - 0.87) |
| task | model | en-ar | en-es | en-fr | en-ru | en-zh |
|-------------|-------------|------------|------------|------------|------------|---------|
| distilmBERT | -0.74±0.11 | -0.86±0.04 | -0.87±0.03 | -0.87±0.01 | -0.96±0.04 | |
| mBERT | -0.90±0.04 | -0.86±0.04 | -0.93±0.02 | -0.95±0.01 | -0.96±0.02 | |
| XLM-R Base | -0.43±0.18 | -0.46±0.10 | -0.46±0.17 | -0.70±0.05 | 0.69±0.40 | |
| XLM-R Large | -0.30±0.17 | 0.23±0.28 | 0.44±0.30 | -0.44±0.14 | 0.26±0.25 | |
| distilmBERT | 0.00±0.37 | -0.61±0.05 | -0.60±0.05 | -0.33±0.09 | 0.00±0.22 | |
| mBERT | -0.28±0.19 | -0.36±0.12 | -0.49±0.11 | -0.27±0.20 | -0.25±0.13 | |
| XLM-R Base | 5.88±2.69 | 0.22±0.29 | 0.62±0.47 | 1.32±0.50 | 21.99±6.44 | |
| XLM-R Large | 16.34±8.76 | 2.22±0.44 | 3.17±0.89 | 3.10±0.72 | 12.67±3.97 | |
| distilmBERT | 5.49±0.69 | 0.30±0.11 | 0.88±0.14 | 2.28±0.36 | 9.78±0.90 | |
| mBERT | 5.65±1.45 | 0.99±0.70 | 1.08±0.63 | 1.45±0.63 | 5.85±0.62 | |
| XLM-R Base | 11.17±0.95 | 1.01±0.12 | 1.55±0.28 | 2.67±0.24 | 27.58±3.33 | |
| XLM-R Large | 25.36±10.33 | 1.78±0.77 | 2.96±1.18 | 2.99±1.15 | 13.57±5.86 | |
However, drawing any conclusion from those figures might be irrelevant. By breaking down results by model, we measure correlation only from 25 samples, with five languages and five seeds. Furthermore, those latter seeds have no effect on alignment measured before. Tab. 11 shows a focus on distilmBERT for the same results with confidence intervals obtained with BCA bootstrapping.
It demonstrates that the measured correlation is not precise enough to draw any conclusion on which layer has an alignment that is better correlated with CTL, or to determine whether alignment before or after fine-tuning is more relevant to CTL abilities.
As a matter of fact, the results are so inconclusive that almost all correlation values in Tab. 12 lie in any of the confidence intervals in Tab. 11.
| POS NER NLI |
|---------------|
## F Detailed Results Of The Controlled Experiment
This section provides detailed results of realignment methods for POS-tagging and NER, for all tested models, languages, translation datasets, and methods of extraction for realignment data. It also contains results for XNLI, for which only one translation dataset (opus100) and one extraction method
(bilingual dictionaries) were tested. Results are shown on Tab. 13 (POS, opus100), 14 (POS, multiUN), 15 (NER, opus100), 16 (NER, multi-UN), 17
(NLI, opus100).
A light gray cell indicates that the realignment method obtained an average score that is closer to the baseline with the same model than the standard deviation of the said baseline. A dark gray cell indicates that the realignment method provokes a decrease w.r.t. the baseline that is bigger than the standard deviation.
Those detailed results emphasize on the conclusions of Section 6. Using bilingual dictionaries seems to provide significant improvements more often than other methods to extract realignment pairs of words. This is particularly visible for the POS-tagging tasks, where realigning with a bilingual dictionary, with joint or sequential realignment, provides the best results. For the NER task, this is less visible, but we've already seen that, on average, bilingual dictionaries give better results
(Tab. 5).
The detailed results also confirm that for smaller models and certain tasks like POS-tagging, realignment methods work better. Realignment methods for POS-tagging on distilmBERT bring significant
distilmBERT mBERT XLM-R Base XLM-R Large
before after before after before after before after
last 0.89 0.83 0.89 0.82 0.59 0.67 0.75 0.78
-1 0.79 0.79 0.80 0.88 0.59 0.68 0.75 0.75
-2 0.79 0.78 0.80 0.84 0.59 0.68 0.75 0.76
-3 0.79 0.82 0.89 0.86 0.59 0.69 0.75 0.76
-4 0.79 0.76 0.89 0.83 0.59 0.68 0.75 0.74 -5 0.79 0.79 0.80 0.81 0.59 0.65 0.65 0.72
-6 0.79 0.77 0.80 0.80 0.69 0.64 0.65 0.70 -7 - - 0.80 0.77 0.69 0.66 0.65 0.65 -8 - - 0.80 0.77 0.69 0.66 0.65 0.62 -9 - - 0.80 0.77 0.69 0.66 0.65 0.65
-10 - - 0.80 0.79 0.69 0.68 0.65 0.69 -11 - - 0.80 0.80 0.69 0.66 0.65 0.67
-12 - - 0.89 0.89 0.69 0.68 0.65 0.67 -13 - - - - - - 0.75 0.68
-14 - - - - - - 0.75 0.76 -15 - - - - - - 0.75 0.76
-16 - - - - - - 0.75 0.76 -17 - - - - - - 0.75 0.71
-18 - - - - - - 0.75 0.72 -19 - - - - - - 0.75 0.72 -20 - - - - - - 0.75 0.73 -21 - - - - - - 0.75 0.70
-22 - - - - - - 0.75 0.70 -23 - - - - - - 0.75 0.71
-24 - - - - - - 0.75 0.75
improvement for all languages. When using a bilingual dictionary, it also brings a systematically significant improvement over the baseline for mBERT
on POS-tagging. For NER, the improvement is less often significant, but realignment methods still obtain some significant improvements for some languages like Arabic. For NLI, the only model on which there are some significant improvements for some languages is distilmBERT.
Using a supposedly higher quality translation dataset like multi-UN does not provide improvement over using opus100, which is said to be better reflecting the average quality of translation datasets
(Wu and Dredze, 2020b). It might even seem that using multi-UN provide slightly worse results than opus100. There are more cases of unsignificant increase of results for multi-UN for POS-tagging and NER and also more cases of apparently significant degradation of results with respect to the baseline.
This might be explained by the fact that multi-UN
is a corpus obtained from translation of documents in the United Nations, which might lack diversity in their content.
Finally, we observe that realignment methods, at least with the small number of realignment steps we performed here, do not impact the evaluation on the fine-tuning language (English). Indeed, even if they sometimes provoke a decrease, namely on POS-tagging, this decrease is small, rarely of more than 0.1 points.
en ar es fr ru zh
distilmBERT **96.1**±0.1 51.0±1.3 84.1±0.8 85.3±0.2 81.2±0.7 *64.1*±1.5
+ before fastalign 96.1±0.0 63.4±0.5 85.6±0.1 86.5±0.1 83.7±0.5 66.3±0.5
+ before awesome 96.1±0.1 63.3±0.9 85.4±0.2 86.3±0.1 82.9±0.4 66.1±0.5 + before dico 96.1±0.1 65.5±0.5 85.8±0.2 86.5±0.2 84.7±0.3 **67.4**±0.7 + joint fastalign 96.0±0.1 62.9±0.9 85.5±0.2 86.2±0.2 82.0±0.5 65.0±0.6 + joint awesome 96.1±0.1 63.4±0.3 85.4±0.1 86.4±0.1 82.1±0.5 64.8±0.5 + joint dico 96.1±0.0 66.8±0.6 **85.8**±0.2 86.5±0.2 84.1±0.6 66.4±0.7 mBERT **96.7**±0.0 51.7±1.0 85.6±0.3 86.0±0.5 82.1±0.7 *66.0*±0.8 + before fastalign 96.6±0.1 64.2±1.2 85.8±0.2 86.5±0.3 83.9±0.8 67.3±0.9 + before awesome 96.6±0.1 64.0±1.2 85.9±0.3 86.5±0.4 83.4±1.0 66.7±0.9 + before dico 96.6±0.1 65.1±0.6 86.2±0.2 86.9±0.3 84.4±0.3 **68.3**±0.6 + joint fastalign 96.6±0.0 62.8±1.7 85.3±0.3 86.5±0.3 81.4±0.4 65.2±0.4
+ joint awesome 96.6±0.1 63.3±1.3 85.3±0.2 86.4±0.5 81.6±0.5 65.7±0.7 + joint dico 96.6±0.1 **66.5**±0.8 86.1±0.2 86.9±0.3 83.9±0.6 68.1±0.8
XLM-R base 95.9±0.1 62.5±1.3 86.6±0.3 86.9±0.1 **86.9**±0.6 *70.9*±0.6
+ before fastalign 95.9±0.1 64.2±0.7 86.7±0.1 87.3±0.1 86.2±0.6 68.5±0.7
+ before awesome **96.0**±0.1 64.9±1.5 86.8±0.1 87.2±0.1 86.3±0.7 68.0±0.7 + before dico 96.0±0.1 67.3±1.5 86.9±0.2 **87.3**±0.1 86.8±0.7 **71.2**±0.6
+ joint fastalign 95.9±0.1 63.5±0.4 86.0±0.2 86.6±0.2 84.6±0.2 69.1±0.5 + joint awesome 96.0±0.1 63.1±0.7 85.8±0.1 86.4±0.1 84.5±0.3 69.2±0.2 + joint dico 95.9±0.1 66.6±0.4 86.6±0.2 87.2±0.1 86.0±0.3 70.6±0.2 XLM-R large 97.7±0.0 65.1±0.6 87.0±0.6 87.5±0.6 87.0±0.9 *71.5*±0.2
Table 13: Controlled experiment with realignment for POS-tagging with opus100 translation dataset.
en ar es fr ru zh
distilmBERT 96.1±0.1 51.0±1.3 84.1±0.8 85.3±0.2 81.2±0.7 *64.1*±1.5
+ before fastalign 96.0±0.1 61.8±0.6 85.2±0.2 86.1±0.2 82.0±0.4 65.7±0.7
+ before awesome 96.0±0.1 62.1±0.4 85.3±0.2 86.1±0.4 82.2±0.5 65.3±0.5 + before dico 96.1±0.0 64.1±0.9 **85.8**±0.1 86.5±0.1 84.6±0.6 **67.4**±1.0 + joint fastalign 96.1±0.1 62.8±0.6 85.2±0.1 86.1±0.1 81.2±0.4 64.7±0.6 + joint awesome **96.1**±0.1 61.8±0.8 85.2±0.2 86.0±0.2 81.2±0.4 64.6±0.5 + joint dico 96.1±0.0 **65.5**±0.4 85.8±0.2 **86.7**±0.1 83.7±0.6 65.9±0.8 mBERT 96.7±0.0 51.7±1.0 85.6±0.3 86.0±0.5 82.1±0.7 *66.0*±0.8 + before fastalign 96.6±0.0 63.1±0.6 85.6±0.1 86.4±0.2 82.7±0.6 66.9±0.9 + before awesome **96.7**±0.1 61.8±1.0 85.6±0.3 86.5±0.3 82.1±0.8 66.6±0.8
+ before dico 96.6±0.1 64.2±1.4 **86.0**±0.3 86.9±0.4 84.1±0.8 **69.0**±0.8 + joint fastalign 96.7±0.0 62.6±1.0 85.4±0.3 86.3±0.5 80.7±0.8 65.0±0.6
+ joint awesome 96.7±0.0 61.9±0.8 85.2±0.3 86.1±0.3 80.9±0.4 64.9±0.6 + joint dico 96.6±0.1 **64.5**±1.4 86.0±0.4 **86.9**±0.5 83.9±0.9 67.3±0.7
XLM-R base 95.9±0.1 62.5±1.3 86.6±0.3 86.9±0.1 **86.9**±0.6 *70.9*±0.6
+ before fastalign 95.9±0.1 64.0±1.0 86.3±0.1 87.0±0.3 85.8±0.7 68.6±0.9 + before awesome **96.0**±0.1 64.7±0.9 86.4±0.1 86.8±0.2 85.8±0.4 66.8±0.0 + before dico 96.0±0.1 66.5±1.2 86.8±0.2 **87.2**±0.2 86.3±0.6 **71.1**±0.6 + joint fastalign 95.9±0.1 62.5±1.1 85.7±0.2 86.3±0.2 84.1±0.4 69.1±0.3 + joint awesome 95.9±0.1 62.1±0.9 85.5±0.1 86.2±0.2 83.9±0.3 69.0±0.4 + joint dico 95.9±0.1 65.8±0.7 86.4±0.3 87.1±0.1 85.3±0.6 70.5±0.2 XLM-R large 97.7±0.0 65.1±0.6 87.0±0.6 87.5±0.6 87.0±0.9 *71.5*±0.2
en ar es fr ru zh
distilmBERT 82.9±0.4 34.5±1.6 69.2±3.1 76.1±0.7 60.2±0.9 *46.8*±1.9
+ before fastalign 82.9±0.3 39.1±2.0 70.1±2.7 75.9±0.3 60.1±0.8 46.5±2.5 + before awesome 82.7±0.3 39.3±3.7 72.5±2.5 75.6±0.5 60.3±0.3 **49.7**±1.1
+ before dico 82.9±0.4 41.6±1.7 67.9±2.7 76.4±0.9 60.3±1.2 48.3±1.6
+ joint fastalign 83.0±0.2 41.5±3.1 **73.5**±2.1 76.6±0.7 **61.4**±0.8 48.3±1.5 + joint awesome **83.1**±0.1 41.4±0.4 72.3±0.1 **77.6**±0.7 60.9±0.4 49.5±0.4 + joint dico 83.0±0.5 **42.2**±2.7 69.5±2.3 76.6±1.0 61.2±0.8 48.8±1.7 mBERT 84.4±0.4 40.7±2.9 74.3±1.4 79.9±1.3 63.9±2.0 *52.1*±1.7 + before fastalign 84.3±0.4 42.0±2.9 70.5±3.0 79.3±0.6 **65.5**±1.6 51.7±1.1 + before awesome **84.8**±0.2 40.2±2.6 72.3±2.8 79.4±0.7 63.0±1.8 51.2±0.6 + before dico 84.3±0.6 42.1±1.7 73.4±2.8 80.1±1.4 64.9±1.7 52.8±1.2 + joint fastalign 84.3±0.3 42.7±1.9 75.6±2.0 80.5±1.0 65.4±1.5 54.3±1.2
+ joint awesome 84.1±0.4 44.2±2.2 75.6±1.6 80.2±0.2 64.8±2.4 54.6±1.1 + joint dico 84.2±0.3 46.0±3.2 76.6±1.9 **81.1**±0.9 65.5±0.9 **54.9**±0.9
XLM-R base 80.0±0.3 46.4±2.7 71.8±3.7 75.0±1.4 61.6±0.8 *47.4*±2.1
+ before fastalign **80.2**±0.4 51.5±3.1 71.7±1.4 75.9±1.0 62.1±1.2 45.7±1.3
+ before awesome 80.1±0.3 52.2±3.4 74.2±1.3 76.1±0.8 61.0±1.7 46.6±1.0 + before dico 80.0±0.2 55.8±3.6 76.9±1.6 **77.3**±0.7 62.0±0.4 47.5±0.7
+ joint fastalign 79.8±0.2 47.7±5.0 74.2±1.2 75.6±0.7 63.0±0.8 50.2±1.5 + joint awesome 79.7±0.3 47.6±3.1 73.9±1.2 75.4±0.4 63.5±0.8 **51.2**±1.1
+ joint dico 79.9±0.3 50.3±3.2 75.2±1.1 75.9±0.8 **63.6**±0.6 50.1±1.3
XLM-R large 83.8±1.0 45.1±1.4 75.6±3.7 80.7±0.8 70.5±3.4 *53.0*±2.1
Table 15: Controlled experiment with realignment for NER with opus100 translation dataset.
same ar es fr ru zh
distilmBERT 82.9±0.4 34.5±1.6 69.2±3.1 76.1±0.7 60.2±0.9 *46.8*±1.9 + before fastalign 82.9±0.3 37.8±0.6 71.1±3.8 76.2±1.3 59.8±1.7 46.9±2.0 + before awesome 83.0±0.4 39.4±1.1 **71.7**±3.6 75.7±0.3 61.2±1.7 46.3±1.3
+ before dico 83.0±0.1 **43.3**±2.9 69.0±3.5 **77.6**±1.0 59.9±0.9 47.1±1.5 + joint fastalign 83.0±0.2 39.5±0.4 70.5±2.3 76.6±0.5 61.3±1.0 48.4±1.7
+ joint awesome 82.9±0.3 38.8±1.8 69.6±1.0 76.7±0.8 60.6±1.4 **49.2**±1.1 + joint dico 83.0±0.3 41.9±1.5 71.3±1.7 77.5±1.5 **61.9**±1.4 48.4±1.1
mBERT 84.4±0.4 40.7±2.9 74.3±1.4 79.9±1.3 63.9±2.0 *52.1*±1.7
+ before fastalign 84.3±0.4 **46.2**±3.6 69.4±4.2 78.4±0.8 64.3±1.1 51.2±1.9
+ before dico **84.6**±0.4 42.5±2.0 71.9±2.6 79.9±0.8 64.0±1.1 52.1±1.1 + joint fastalign 84.1±0.3 46.0±1.0 74.4±2.6 80.7±1.0 66.6±2.4 **54.8**±0.4 + joint awesome 84.1±0.5 42.2±2.2 **74.9**±0.5 80.3±0.6 65.5±2.5 54.5±0.9 + joint dico 84.3±0.4 43.6±3.2 74.5±1.2 80.4±0.8 65.6±3.3 53.4±1.5
XLM-R base 80.0±0.3 46.4±2.7 71.8±3.7 75.0±1.4 61.6±0.8 *47.4*±2.1
+ before fastalign **80.2**±0.3 53.8±3.4 72.0±2.8 76.0±1.4 61.8±1.5 45.8±1.3 + before awesome 80.2±0.3 **54.8**±1.3 70.4±2.0 76.5±1.4 61.9±0.0 45.5±0.7 + before dico 80.0±0.2 54.1±1.7 76.1±1.4 **76.5**±0.7 61.8±1.0 47.6±1.4 + joint fastalign 79.8±0.2 46.8±3.0 73.5±2.4 75.6±1.0 61.8±1.5 50.0±1.8 + joint awesome 79.8±0.3 49.0±3.1 75.8±1.7 76.1±1.1 63.0±1.5 **50.8**±0.8
+ joint dico 79.8±0.3 48.4±3.0 74.3±1.6 75.8±0.8 62.7±0.7 50.2±0.1
XLM-R large 83.8±1.0 45.1±1.4 75.6±3.7 80.7±0.8 70.5±3.4 *53.0*±2.1
Table 16: Controlled experiment with realignment for NER with multiUN translation dataset.
en ar es fr ru zh
distilmBERT 76.0±0.7 58.2±1.4 68.5±0.6 **68.7**±0.6 62.3±1.2 *63.4*±0.9
+ before dico **76.2**±0.6 58.4±1.2 69.2±0.8 68.0±0.8 62.6±1.1 63.1±0.9 + joint dico 76.2±0.7 59.8±1.1 **69.2**±1.0 68.6±1.1 63.0±1.0 **64.4**±1.2
mBERT **80.2**±0.7 65.2±1.2 73.8±0.8 **72.9**±0.7 68.3±1.2 *68.5*±1.3
+ before dico 79.0±0.6 63.2±0.9 72.9±0.8 71.7±0.5 66.9±0.9 68.7±0.7
+ joint dico 79.9±0.9 65.6±0.9 **73.8**±1.3 72.5±1.2 68.8±1.3 **69.0**±0.8
XLM-R base 82.8±1.6 70.1±1.4 77.4±1.6 76.5±1.3 74.2±1.3 *71.7*±1.3
+ before dico 81.2±2.4 68.4±2.9 76.0±2.2 74.9±1.9 72.8±2.4 71.4±2.5 + joint dico 83.7±0.7 70.8±1.4 78.0±0.9 76.7±1.0 74.6±1.3 **72.7**±1.6 XLM-R large 87.9±0.7 77.5±1.3 83.2±1.3 81.9±1.2 79.1±1.1 *78.2*±1.3
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 8
✗ A2. Did you discuss any potential risks of your work?
Our work does not introduce new methods. It is anlysing existing ones and trying to provide a better understanding of their inner working. Hence, we do not believe that our paper present a direct risk.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Introduction and abstract.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Appendix A.1
✓ B1. Did you cite the creators of artifacts you used?
Appendix A.1
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
All artifacts used are under permissive licences or terms of use.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Provided it was specified, our use of existing artifact was consistent with intended use.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. We did not collect data, and the datasets we used are already publicly available.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 8. Limitations
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix A.
## C ✓ **Did You Run Computational Experiments?** Section 4 And 6.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix A.6 and Table "Number of parameters" in Appendix A
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix A.4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Overlap with standard deviation is indicated on all tables, error bars are provided on all graphs where it is relevant. Appendices C, D, E, F provide detailed results with confidence intervals and standard deviations.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix A
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left Blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
fan-etal-2023-aerial | Aerial Vision-and-Dialog Navigation | https://aclanthology.org/2023.findings-acl.190 | The ability to converse with humans and follow natural language commands is crucial for intelligent unmanned aerial vehicles (a.k.a. drones). It can relieve people{'}s burden of holding a controller all the time, allow multitasking, and make drone control more accessible for people with disabilities or with their hands occupied. To this end, we introduce Aerial Vision-and-Dialog Navigation (AVDN), to navigate a drone via natural language conversation. We build a drone simulator with a continuous photorealistic environment and collect a new AVDN dataset of over 3k recorded navigation trajectories with asynchronous human-human dialogs between commanders and followers. The commander provides initial navigation instruction and further guidance by request, while the follower navigates the drone in the simulator and asks questions when needed. During data collection, followers{'} attention on the drone{'}s visual observation is also recorded. Based on the AVDN dataset, we study the tasks of aerial navigation from (full) dialog history and propose an effective Human Attention Aided Transformer model (HAA-Transformer), which learns to predict both navigation waypoints and human attention. | # Aerial Vision-And-Dialog Navigation
Yue Fan, Winson Chen, Tongzhou Jiang, Chun Zhou, Yi Zhang, Xin Eric Wang University of California, Santa Cruz
{yfan71, wchen157, tojiang, czhou43, yiz, xwang366}@ucsc.edu
## Abstract
The ability to converse with humans and follow natural language commands is crucial for intelligent unmanned aerial vehicles (a.k.a. drones).
It can relieve people's burden of holding a controller all the time, allow multitasking, and make drone control more accessible for people with disabilities or with their hands occupied.
To this end, we introduce Aerial Vision-andDialog Navigation (AVDN), to navigate a drone via natural language conversation. We build a drone simulator with a continuous photorealistic environment and collect a new AVDN
dataset of over 3k recorded navigation trajectories with asynchronous human-human dialogs between commanders and followers. The commander provides initial navigation instruction and further guidance by request, while the follower navigates the drone in the simulator and asks questions when needed. During data collection, followers' attention on the drone's visual observation is also recorded. Based on the AVDN dataset, we study the tasks of aerial navigation from (full) dialog history and propose an effective Human Attention Aided Transformer model (HAA-Transformer), which learns to predict both navigation waypoints and human attention. Dataset and code are released: https://sites.google.com/view/
aerial-vision-and-dialog/home.
## 1 Introduction
Drones have been widely adopted for many applications in our daily life, from personal entertainment to professional use. It has the advantage of mobility and observing large areas over ground robots. However, compared with ground robots, the control of the aerial robot is more complex because an extra degree of freedom, altitude, is involved. To control a drone, people often need to hold a controller all the time, so it is essential to create a hands-free control experience for drone users and develop an intelligent drone that can complete tasks simply by talking to humans. It can lower the barrier of
![0_image_0.png](0_image_0.png)
drone control for users with some disabilities and who have their hands occupied by activities such as taking photos, writing, etc.
Therefore, this work introduces Aerial Visionand-Dialog Navigation (AVDN), aiming to develop an intelligent drone that can converse with its user to fly to the expected destination. As shown in Figure 1, the user (commander) provides instructions, and the aerial agent (follower) follows the instruction and asks questions when needed. The past visual trajectories are also provided along with the question, which frees the commander from monitoring the drone all the time and minimizes the burden of drone control. In this free-form dialog, potential ambiguities in the instruction can be gradually resolved through the further instructions provided by the commanders upon request.
To implement and evaluate the AVDN task, we 3043 build a photorealistic simulator with continuous state space to simulate a drone flying with its onboard camera pointing straight downward. Then we collect an AVDN dataset of 3,064 aerial navigation trajectories with human-human dialogs, where crowd-sourcing workers play the commander role and drone experts play the follower role, as illustrated in Figure 1. Moreover, we also collect the attention of human followers over the aerial scenes for a better understanding of where humans ground navigation instructions.
Based on our AVDN dataset, we introduce two challenging navigation tasks, Aerial Navigation from Dialog History (ANDH) and Aerial Navigation from Full Dialog History (ANDH-Full). Both tasks focus on predicting navigation actions that can lead the agent to the destination area, whereas the difference is that ANDH-Full presents the agent with full dialog and requires it to reach the final destination (Kim et al., 2021), while ANDH evaluates the agent's completion of the sub-trajectory within a dialog round given the previous dialog information (Thomason et al., 2020).
The proposed tasks open new challenges of sequential action prediction in a large continuous space and natural language grounding on photorealistic aerial scenes. We propose a sequenceto-sequence Human Attention Aided Transformer model (HAA-Transformer) for both tasks. The HAA-Transformer model predicts waypoints to reduce the complexity of the search space and learns to stop at the desired location. More importantly, it is jointly trained to predict human attention from the input dialog and visual observations and learns where to look during inference. Experiments on our AVDN dataset show that multitask learning is beneficial and human attention prediction improves navigation performance. The main contributions are concluded as follows:
- We create a new dataset and simulator for aerial vision-and-dialog navigation. The dataset includes over 3K aerial navigation trajectories with human-human dialogs.
- We introduce ANDH and ANDH-Full tasks to evaluate the agent's ability to understand natural language dialog, reason about aerial scenes, and navigate to the target location in a continuous photorealistic aerial environment.
- We propose an HAA-Transformer model as the baseline for ANDH and ANDH-Full. Be-
sides predicting the waypoint navigation actions, HAA-Transformer also learns to predict the attention of the human follower along the navigation trajectory. Experiments on our AVDN dataset validate the effectiveness of the HAA-Transformer model.
## 2 Related Work
Vision-and-Language Navigation Vision-andLanguage Navigation (VLN) is an emerging multimodal task that studies the problem of using both language instructions and visual observation to predict navigation actions. We compare some of the works with our AVDN dataset in Table 1. Early VLN datasets such as Anderson et al. (2018); Ku et al. (2020) start with the indoor house environments in the Matterport3D simulator (Chang et al.,
2017), where the visual scenes are connected on a navigation graph. To simulate continuous state change as in the real world, Krantz et al. (2020)
built a 3D continuous environment by reconstructing the scene based on topological connections where the agent uses continuous actions during the navigation. Some other VLN studies focus on language instructions. Nguyen et al. (2019); Nguyen and Daumé III (2019); Thomason et al. (2020) created datasets where the agent can interact with the user by sending fixed signals or having dialogs.
There are also works on synthetic indoor environments, such as Shridhar et al. (2020b); Padmakumar et al. (2021) that use an interactive simulation environment with synthetic views named ALFRED,
where the agent needs to follow language instructions or dialogs to finish household tasks. Besides the indoor environment, some VLN datasets work on the more complex outdoor environment, such as the Touchdown dataset (Chen et al., 2019) and the modified LANI dataset (Misra et al., 2018).
Blukis et al. (2019) is similar to ours for both using drones. However, the synthetic environment used has a gap from the realistic scene, and they ignored the control of the drone's altitude, where such navigation is oversimplified and has a large gap towards navigation in the real world in terms of language and vision aspects. Our work absorbs the advantage from previous works where we have continuous environments and dialog instructions to better approximate the real-world scenario.
Aerial Navigation Using both vision and language for aerial navigation is a less studied topic, whereas vision-only aerial navigation for drones is
Dataset Env Photorealistic **Continuous**
![2_image_1.png](2_image_1.png)
![2_image_4.png](2_image_4.png)
Space Dialog **Free**
Form
already an active topic in the field. Some inspiring works (Loquercio et al., 2018; Giusti et al., 2015; Smolyanskiy et al., 2017; Fan et al., 2020; Bozcan and Kayacan, 2020; Majdik et al., 2017; Kang et al., 2019) worked on using pre-collected realworld drone data to tackle aerial vision navigation problems. Due to the hardness of collecting data and the risk of crashes, some other works applied simulation for aerial navigation (Chen et al., 2018; Shah et al., 2017; Chen et al., 2020), where rich ground truths are provided without the need for annotation. However, the modality of language is missing in these prior works and as a result, the navigation tasks only contain simple goals. As for the aerial vision-and-language navigation task in this work, the navigation is guided by natural dialog.
As a result, it allows more diverse and complex navigation and also resolves ambiguities during complicated navigation.
## 3 Dataset
The AVDN dataset includes dialogs, navigation trajectories, and the drone's visual observation with human attention, where an example is shown in Figure 2. With the help of a newly proposed simulator, we record the AVDN trajectories created by two groups of humans interacting with each other, playing either the commander role or the follower role. Our AVDN dataset is the first aerial navigation dataset based on dialogs to the best of our knowledge.
![2_image_0.png](2_image_0.png)
![2_image_2.png](2_image_2.png)
![2_image_3.png](2_image_3.png)
## 3.1 Simulator
We build a simulator to simulate the drone with a top-down view area. Our simulation environment is a continuous space so that the simulated drone can move continuously to any point within the environment. The drone's visual observations are square images generated corresponding to the drone's view area by cropping from high-resolution satellite images in the xView dataset (Lam et al.,
2018), an open-source large-scale satellite image object detection dataset. In this way, our simulator is capable of providing continuous frames with rich visual features. We also design an interface for our simulator, where the simulated drone can be controlled with a keyboard and the drone's visual observation will be displayed in real-time with a digital compass. During the control, users can also provide their attention over the displayed images on the interface by clicking the region they attend to. Last but not least, our simulator is capable of generating trajectory overviews, i.e. commander's view, showing the starting positions, destination areas, current view area and past trajectory (if exists)
as in Figure 2.
## 3.2 Dataset Structure
In our AVDN dataset, each navigation trajectory includes time steps T = 0, 1*, . . . , M*, where M ≥ 1.
At T = 0, an initial instruction is provided by the commander. Between adjacent time steps, there is a corresponding navigation sub-trajectory. At every time step of 0 *< T < M*, there are questions from the follower and the corresponding answers from the commander. At T = M, the navigation trajectory ends because the destination area Des is reached and claimed by the follower. For details about when a trajectory ends, please refer to Section 3.3 Success Condition.
There are M follower's view area sequences
< uT
0
, uT
1
, . . . , uTNT>, NT is the length of T-th sequence, where the view area's center coordinate c T i always falls on the trajectory. Therefore, based on each view area, we could retrieve not only the simulated drone's location ci, but also direction di and altitude hi. Last but not least, for each view area u, there is a corresponding binary human attention mask with the same size. The area in u that corresponds to the white area on the mask is where the follower attended.
## 3.3 Dataset Collection
We collect our dataset with the help of Amazon Mechanical Turk (AMT) workers and drone experts, where AMT workers play the commander role to provide instructions and drone experts play the follower role to control a simulated drone and carry out the instruction. We pay the workers with wages no less than $15/h, and the data collection lasts for 90 days. We adopt an asynchronous data collection method, where the followers and commanders work in turns rather than simultaneously. This not only lowers the cost of data collection but also simulates how aerial vision-and-dialog navigation would work in practice, where the commanders will not monitor the follower's actions all the time.
Pipeline Before the start of data collection, we first sample objects in the xView dataset (Lam et al., 2018) as the destination areas and pair them with randomly selected initial follower's view areas within 1.5km distance. Then, using our simulator, we generate the trajectory overview at time step T = 0, as shown in Figure 2, which becomes the initial commander's view.
During data collection, the initial commander's view is presented to AMT workers for creating the initial instructions. We instruct the AMT workers to write instructions as if they are talking to a drone pilot based on the marked satellite images.
Next, we let human drone experts play the follower role, i.e. controlling the simulated drone through our simulator interface, following the instructions and asking questions if they cannot find the destination area. When the experts stop the current navigation, they can either enter questions into a chatbox, claim the destination with a template sentence or reject the instruction for bad quality. If the destination is falsely claimed, the simulator will generate an auto-hint to let the follower ask some questions. For questions asked, AMT workers will provide further instructions accordingly based on given navigation information and dialog history.
Then, the same drone experts will continue playing the follower role and again asking questions when necessary. We iterate the process until the destination is successfully reached and claimed by the follower.
Success Condition The navigation trajectory is successful only when the destination is reached at the time the follower claims it. We determine that the destination is reached in view area uj by checking the center cj and computing the Intersection over Union (IoU) between uj and Des. If cj is inside Des and the IoU of uj and Des is larger than 0.4, the destination is regarded in uj .
## 3.4 Data Analysis
Our AVDN dataset includes 3,064 aerial navigation trajectories, each with multi-round natural language dialog. There are two rounds of dialog on average per trajectory, where the number of dialog rounds in a trajectory equals to the maximum time step M. The most frequent words are shown in Figure 3a. The recorded AVDN trajectory path length has an average of 287m, and its distribution is shown in Figure 3b. The trajectories and dialogs can be further separated into 6,269 sub-trajectories corresponding to the dialog rounds.
We split our dataset into training, seenvalidation, *unseen-validation*, and *unseen-testing*
![4_image_1.png](4_image_1.png)
sets, where *seen* and *unseen* sets are pre-separated by making sure the area locations of the visual scenes are over 100km apart from each other. We show some statistical analysis across the dataset splits in Table 2. The visual scenes in our dataset come from the xView dataset (Lam et al., 2018),
which covers both urban and rural scenes. The average covered area of the satellite images is 1.2km2.
Rather than providing a target hint in the beginning as in Thomason et al. (2020), the destination must be inferred from the human instructions given by the commander. For example, the commander may give a detailed description of the destination initially or write a rough instruction first and then describe the destination later in the dialog. We also find that there are two ways of describing the directions for navigation: egocentric direction description, such as "turn right", and allocentric direction description, such as "turn south". By filtering and categorizing words related to directions, we find that 82% of the dialog rounds use egocentric direction description and 30% of the dialog rounds include allocentric direction description. There are 17% dialog rounds that have mixed direction deceptions, making the instruction complex. This opens a new challenge for developing a language understanding module that can ground both the egocentric and allocentric descriptions to navigation actions.
## 4 Task
Following indoor dialog navigation (Thomason et al., 2020; Kim et al., 2021), we introduce an Aerial Navigation from Dialog History (ANDH)
task and an Aerial Navigation from Full Dialog History (ANDH-Full) task based on our AVDN
dataset and simulator.
![4_image_0.png](4_image_0.png)
## 4.1 Aerial Navigation From Dialog History
The goal of the task is to let the agent predict aerial navigation actions that lead to goal areas G, following the instructions in the dialog history.
Specifically, to predict one action aˆj of an action sequence between navigation time step Ti and Ti+1, the inputs are dialogs from navigation time step 0 to Ti and images from a sequence of view areas
< uˆ0, uˆ1*, . . . ,* uˆj−1 >. A new view area uˆj will be generated after aˆj takes place.1 The goal area G
depends on the current navigation time step,
$$G=\begin{cases}u_{0}^{T_{i+1}},&\text{if}T_{i+1}\neq M\\ Des,&\text{otherwise}\end{cases},\tag{1}$$
The predicted view area sequence will be recorded for evaluation with regard to the ground truth view area sequence < uTi 0
, . . . , u Ti NTi
>.
## 4.2 Aerial Navigation From Full Dialog History
Compared with the ANDH task, the major difference of the ANDH-Full task is that it adopts the complete dialog history from navigation time step T = 0, 1*, . . . , M* as input. With the full dialog and visual observation, the agent needs to predict the full navigation trajectory from the starting view area u 00 to the destination area Des. ANDH-Full provides complete supervision for agents on a navigation trajectory with a more precise destination description and includes longer utterances and more complex vision grounding challenges.
## 4.3 Evaluation
Since the agent in both tasks, ANDH and ANDHFull, needs to generate predicted view area sequences, the evaluation metrics for both tasks are the same. In the evaluation, the center points of every view area are connected to form the navigation trajectory, and the last view area is used to determine whether the predicted navigation successfully 1uˆ0 is known as it is the initial view area at time step Ti.
leads to the destination area. The predicted navigation is successful if the IoU between the predicted final view area and the destination area is greater than 0.4. We apply several metrics for evaluation.
Success Rate (SR): the number of the predicted trajectory being regarded as successful, i.e., the final view area of the predicted trajectory satisfies the IoU requirement, over the number of total trajectories predicted.
Success weighted by inverse Path Length (SPL)
(Anderson et al., 2018): measuring the Success Rate weighted by the total length of the navigation trajectory.
Goal Progress (GP) (Thomason et al., 2020): evaluating the distance of the progress made towards the destination area. It is computed as the Euclidean distance of the trajectory, deducted by the remaining distance from the center of the predicted final view area cˆN to the center of goal area G.
## 5 Model
We proposed a Human Attention Aided (HAATransformer) model for the ANDH and ANDHFull tasks as shown in Figure 4, where it takes as input multimodal information and generates multimodal predictions, including human attention prediction and navigation prediction.
Multimodal Encoding The input has three modalities, the drone's direction, images from the drone's visual observation, and history dialogs. At the start of a prediction series, our model uses a BERT encoder (Devlin et al., 2018) to get the language embeddings of the input dialog history, h l 1:L
, where special language tokens such as [INS] and [QUE]
are added in front of each instruction and question in the dialog. Then, at every time step, all previous drone directions and images from the drone's visual observation are input to the model. A fully connected direction encoder is used to generate direction embeddings h x 1:t and an xView-pretrained Darknet-532(Redmon and Farhadi, 2018) with an attention module is used to extract and flatten the visual features to get visual embeddings h v1:t
. Finally, similar to the Episodic Transformer (Pashevich et al., 2021), all embeddings from the languages, images and directions, are concatenated and input into a multimodal transformer (FMT ) to produce output multimodal embeddings {z l 1:L
, zv 1:t
, zx 1:t} as in Equation 2.
{z l 1:L, zv 1:t, zx 1:t} = FMT ({h l 1:L, hv1:t, hx 1:t}) (2)
2https://github.com/ultralytics/xview-yolov3 Navigation Prediction and Waypoint Control The navigation outputs from our model come from a fully connected navigation decoder (FND) taking as input the transformer's output embeddings
{z l 1:L
, zv 1:t
, zx 1:t} and generating predicted waypoint actions wˆ and predicted navigation progress gˆ as in Equation 3.
( ˆw, gˆ) = FND({z l 1:L, zv 1:t, zx 1:t}) (3)
The predicted waypoint action wˆ is a 3-D coordinate (ˆx, y, ˆ hˆ), where (ˆx, yˆ) corresponds to an position in the current view area u and hˆ corresponds to an altitude. The predicted waypoint also controls the drone's direction, where the direction is kept towards the direction of movement. Therefore, wˆ controls the drone's movement, and as a result, the center, width and rotation of the next view area center are determined by wˆ. As for the navigation progress prediction gˆ, it is to generate a one-dimension navigation progress indicator for deciding when to stop (Xiang et al., 2019). If the predicted navigation progress is larger than a threshold, the drone navigation will be ended without executing the predicted waypoint action.
Human Attention Prediction A human attention decoder is proposed to predict the human attention mask using the output embeddings, z v 1:t
, from the multi-layer transformer that corresponds to the visual inputs. We build the decoder based on He et al. (2019), where the input to the decoder will be decoded to an 8 ∗ 8 representation through a fully connected layer and then linearly interpolated to a mask with the same shape as the input image. The greater the values in the mask means more likely the human follower attends the corresponding pixels.
Training We first train our HAA-Transformer model on the ANDH task and then fine-tuned it on the ANDH-Full task because the ANDH task is relatively easier with a shorter path length. For each task, we conduct the training alternately in teacher-forcing (Williams and Zipser, 1989) and student-forcing modes, where the main difference is whether the model interacts with the simulator using ground truth actions or the predicted actions.
Our model is trained with a sum of losses from both navigation prediction and human attention prediction. First, the predicted waypoint action wˆ and predicted navigation progress gˆ are trained with Mean Square Error (MSE) loss, supervised by the ground truth w and g computed based on the recorded trajectory in our dataset. The naviga-
![6_image_0.png](6_image_0.png)
tion prediction loss (Lnav) is shown in Equation 4, where Rot(.) is computing the rotation change as a result of the waypoint action.
$$\begin{array}{c}{{L_{n a v}=M S E(R o t(\hat{w}),R o t(w))}}\\ {{+M S E(\hat{w},w)+M S E(\hat{g},g)}}\end{array}\quad\mathrm{(4)}$$
Second, for human attention prediction training, we apply the modified Normalized Scanpath Score loss (NSS) (He et al., 2019). Given a predicted human attention mask P and a ground-truth human attention mask Q,
$$\begin{array}{l}{{\mathrm{relation~mask}Q,}}\\ {{N S S\left(P,Q\right)=\frac{1}{N}\sum_{i}\overline{{{P}_{i}}}\times Q_{i},}}\\ {{\mathrm{~where~}N=\sum_{i}Q_{i}\mathrm{~and~}\bar{P}=\frac{P-\mu(P)}{\sigma(P)}}}\\ {{\mathrm{~}}}\end{array}\tag{5}$$
Since human attention may not exist in certain view areas, the human attention loss is only computed for view areas with recorded human attention.
## 6 Results
We conduct experiments to study our AVDN
dataset and our HAA-Transformer model on the ANDH and ANDH-Full tasks.
Results on the ANDH task and ANDH-Full task As shown in Table 3, we evaluate our HAATransformer model along with multiple baseline models on both ANDH and ANDH-Full tasks.
We first create a multimodal Episodic Transformer (E.T.) model (Pashevich et al., 2021) by removing the human attention decoder from our HAA-Transformer, and then build vision-only and language-only uni-modal models by ablating on the multimodal E.T. model. For uni-modal models, direction inputs are maintained while either vision input or language input is discarded. A multimodal LSTM-based model is also included as a sequenceto-sequence baseline model, which has the same input and output as the multimodal E.T.model. All models, including our HAA-Transformer model are trained with random initialization. The batch size is 4 for the ANDH task, while for the ANDH-Full task. Based on the result, our HAA-Transformer model outperforms the baseline models in both tasks by a large margin. Also, compared with unimodal baseline models and a random model outputting random waypoint actions, the multimodal E.T. model achieves overall higher performance, which indicates the importance of learning multimodal information in order to succeed in the ANDH task. Last but not least, we find that the language-only uni-modal model achieves much better performance than the vision-only uni-modal model showing that the language instructions play a more important role in guiding the navigation in our AVDN dataset.
## Impact Of Human Attention Prediction Training
We then evaluate the impact of human attention prediction training for multimodal learning by ablation not only on our HAA-Transformer model but also on a Human Attention Aided Multimodal LSTMbased (HAA-LSTM) model developed by adding the human attention decoder module to the multimodal LSTM-based model (detailed in Appendix B). We apply the same human attention prediction training process and training loss as in our HAATransformer model. As the result shown in Table. 3, we find that the human attention prediction training significantly boosts both transformer-based and LSTM-based models across all evaluated metrics.
We further evaluate the benefit of human attention prediction training on different trajectory lengths. The sub-trajectories in the validation set
![7_image_0.png](7_image_0.png)
Table 3: Main results on both ANDH and ANDH-Full tasks including ablation results on human attention prediction training. Both the Human Attention Aided Multi-modal LSTM (HAA-LSTM) model and our HAA-Transformer model are benefited from the human attention prediction training based on the performance comparison.
for ANDH task are split into four subsets based on the ground truth length. In Figure 5, we compare the number of successful sub-trajectory in different subsets among models with and without human attention prediction training. As a result, both our HAA-Transformer model and the HAA-LSTM
model achieves significant performance improvements for subsets of longer trajectory. It leads to the conclusion that human attention prediction training benefits navigation prediction, especially for long trajectories for both two models that are based on LSTM and Transformer.
Besides improving task performance, human attention prediction also benefits the interpretability of the model by generating visualizable attention predictions paired with navigation predictions. We evaluate the human attention prediction result using the Normalized Scanpath Saliency (NSS) score, which measures the normalized saliency prediction at the ground truth human attention. Our HAATransformer model receives NSS scores of 0.84, 0.62 and 0.68, respectively, in seen validation, unseen validation, and test set, indicating the human attention prediction is effective.
## Comparison For Different Input Dialog Length
Comparing with the ANDH task, the ANDH-Full task requires the model to predict actions that correspond to longer dialogs with more dialog rounds.
As a result, more challenges are involved and longer training time is needed compared with the results in the ANDH task. During training, we add a prompt of the drone's direction that corresponds to the dialog, e.g., "when facing east" to clarify the instructions in dialogs that happened in different time steps, especially when egocentric direction descriptions exist. In Table 4, we show our HAATransformer model's performance on trajectories with different dialog lengths, i.e. different numbers of dialog rounds, and we find the model's SR and SPL are diminished for trajectories with the num-
![7_image_1.png](7_image_1.png)
![7_image_2.png](7_image_2.png)
Table 4: Result of our HAA-Transformer on ANDHFull task regarding different dialog lengths. The more rounds in dialog, the longer the trajectory is and the more challenging the task is.
ber of dialog rounds less or greater than average, where they either containing too less or too much information. It shows a big room for improvement in understanding dialog with various lengths.
## 7 Conclusion
In this work, we introduce a dataset and a simulator for Aerial Vision-and-Language Navigation
(AVDN). Challenging tasks are proposed based on our dataset focusing on navigation. A Human Attention Aided Multimodal Transformer (HAATransformer) model is designed for both tasks. Our work provides the possibilities for further studies to develop stronger models on AVDN that not only focus on navigation prediction but also on question generation. Furthermore, based on our results, future works may investigate using human attention prediction training to help solve VLN problems.
## Limitation
This work proposed a dataset, a simulator, tasks, and models for Aerial Vision-and-Language Navigation. Since satellite images are needed to simulate the drone's observation, risks of privacy leaking may exist. By using the open-source satellite dataset xView (Lam et al., 2018), we mitigate the risks while also being able to develop a simulator for training our model. Additionally, using satellite images for simulating top-down visual observation of the drone introduces the shortcoming of having only 2D static scenes while adopting the strength of the satellite images where rich labels and visual features are included.
## Broader Impact
We recognize the potential ethical problems during the dataset collection, where human annotators are involved. The data collection of this project is classified as exempt by Human Subject Committee vis IRB protocols. As a result, we utilized the Amazon Mechanical Turk (AMT) website to find workers willing to participate in the project. With AMT, our data collection is constrained by legal terms, and the data collection protocol is under AMT's approval. The agreement signed by both requesters and workers on AMT also ensures a transparent and fair data annotation process and that privacy is well protected.
## References
Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, and Anton Van Den Hengel. 2018. Visionand-language navigation: Interpreting visuallygrounded navigation instructions in real environments. In *Proceedings of the IEEE conference on* computer vision and pattern recognition, pages 3674–
3683.
Valts Blukis, Yannick Terme, Eyvind Niklasson, Ross A
Knepper, and Yoav Artzi. 2019. Learning to map natural language instructions to physical quadcopter control using simulated flight. arXiv preprint arXiv:1910.09664.
Ilker Bozcan and Erdal Kayacan. 2020. AU-AIR: A
multi-modal unmanned aerial vehicle dataset for low altitude traffic surveillance. arXiv preprint arXiv:2001.11737.
Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva, Shuran
Song, Andy Zeng, and Yinda Zhang. 2017. Matterport3d: Learning from rgb-d data in indoor environments. *arXiv preprint arXiv:1709.06158*.
Howard Chen, Alane Suhr, Dipendra Misra, Noah Snavely, and Yoav Artzi. 2019. Touchdown: Natural language navigation and spatial reasoning in visual street environments. In *Proceedings of the* IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12538–12547.
Lyujie Chen, Feng Liu, Yan Zhao, Wufan Wang, Xiaming Yuan, and Jihong Zhu. 2020. Valid: A comprehensive virtual aerial image dataset. In *2020 IEEE*
International Conference on Robotics and Automation (ICRA), pages 2009–2016. IEEE.
Lyujie Chen, Wufan Wang, and Jihong Zhu. 2018.
Learning transferable uav for forest visual perception. *arXiv preprint arXiv:1806.03626*.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.
Yue Fan, Shilei Chu, Wei Zhang, Ran Song, and Yibin Li. 2020. Learn by observation: Imitation learning for drone patrolling from videos of a human navigator. In *IEEE/RSJ International Conference on Intelligent* Robots and Systems (IROS), pages 5209–5216.
Xiaofeng Gao, Qiaozi Gao, Ran Gong, Kaixiang Lin, Govind Thattai, and Gaurav S Sukhatme. 2022. Dialfred: Dialogue-enabled agents for embodied instruction following. *arXiv preprint arXiv:2202.13330*.
Alessandro Giusti, Jérôme Guzzi, Dan C Cire¸san, FangLin He, Juan P Rodríguez, Flavio Fontana, Matthias Faessler, Christian Forster, Jürgen Schmidhuber, Gianni Di Caro, et al. 2015. A machine learning approach to visual perception of forest trails for mobile robots. *IEEE Robotics and Automation Letters*,
1(2):661–667.
Sen He, Hamed R Tavakoli, Ali Borji, Yang Mi, and Nicolas Pugeault. 2019. Understanding and visualizing deep visual saliency models. In *Proceedings of* the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10206–10215.
Katie Kang, Suneel Belkhale, Gregory Kahn, Pieter Abbeel, and Sergey Levine. 2019. Generalization through simulation: Integrating simulated and real data into deep reinforcement learning for vision-based autonomous flight. *arXiv preprint* arXiv:1902.03701.
Hyounghun Kim, Jialu Li, and Mohit Bansal. 2021.
Ndh-full: Learning and evaluating navigational agents on full-length dialogue. In *Proceedings of* the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6432–6442.
Jacob Krantz, Erik Wijmans, Arjun Majumdar, Dhruv Batra, and Stefan Lee. 2020. Beyond the nav-graph:
Vision-and-language navigation in continuous environments. In European Conference on Computer Vision, pages 104–120. Springer.
Alexander Ku, Peter Anderson, Roma Patel, Eugene Ie, and Jason Baldridge. 2020. Room-across-room: Multilingual vision-and-language navigation with dense spatiotemporal grounding. arXiv preprint arXiv:2010.07954.
Darius Lam, Richard Kuzma, Kevin McGee, Samuel Dooley, Michael Laielli, Matthew Klaric, Yaroslav Bulatov, and Brendan McCord. 2018. xview: Objects in context in overhead imagery. arXiv preprint arXiv:1802.07856.
Antonio Loquercio, Ana I Maqueda, Carlos R DelBlanco, and Davide Scaramuzza. 2018. Dronet:
Learning to fly by driving. *IEEE Robotics and Automation Letters*, 3(2):1088–1095.
András L Majdik, Charles Till, and Davide Scaramuzza. 2017. The zurich urban micro aerial vehicle dataset. The International Journal of Robotics Research, 36(3):269–273.
Dipendra Misra, Andrew Bennett, Valts Blukis, Eyvind Niklasson, Max Shatkhin, and Yoav Artzi. 2018.
Mapping instructions to actions in 3d environments with visual goal prediction. *arXiv preprint* arXiv:1809.00786.
Khanh Nguyen and Hal Daumé III. 2019. Help, anna! visual navigation with natural multimodal assistance via retrospective curiosity-encouraging imitation learning. *arXiv preprint arXiv:1909.01871*.
Khanh Nguyen, Debadeepta Dey, Chris Brockett, and Bill Dolan. 2019. Vision-based navigation with language-based assistance via imitation learning with indirect intervention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12527–12537.
Aishwarya Padmakumar, Jesse Thomason, Ayush Shrivastava, Patrick Lange, Anjali Narayan-Chen, Spandana Gella, Robinson Piramuthu, Gokhan Tur, and Dilek Hakkani-Tur. 2021. Teach: Taskdriven embodied agents that chat. arXiv preprint arXiv:2110.00534.
Alexander Pashevich, Cordelia Schmid, and Chen Sun.
2021. Episodic transformer for vision-and-language navigation. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pages 15942–
15952.
Joseph Redmon and Ali Farhadi. 2018. Yolov3:
An incremental improvement. *arXiv preprint* arXiv:1804.02767.
Shital Shah, Debadeepta Dey, Chris Lovett, and Ashish Kapoor. 2017. Airsim: High-fidelity visual and physical simulation for autonomous vehicles. arXiv preprint arXiv:1705.05065.
Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, and Dieter Fox. 2020a. ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks. In *The IEEE Conference on* Computer Vision and Pattern Recognition (CVPR).
Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Côté, Yonatan Bisk, Adam Trischler, and Matthew Hausknecht. 2020b. Alfworld: Aligning text and embodied environments for interactive learning. arXiv preprint arXiv:2010.03768.
Nikolai Smolyanskiy, Alexey Kamenev, Jeffrey Smith, and Stan Birchfield. 2017. Toward low-flying autonomous mav trail navigation using deep neural networks for environmental awareness. In IEEE/RSJ
International Conference on Intelligent Robots and Systems (IROS), pages 4241–4247.
Jesse Thomason, Michael Murray, Maya Cakmak, and Luke Zettlemoyer. 2020. Vision-and-dialog navigation. In *Conference on Robot Learning*, pages 394–406. PMLR.
Ronald J Williams and David Zipser. 1989. A learning algorithm for continually running fully recurrent neural networks. *Neural computation*, 1(2):270–280.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Jiannan Xiang, Xin Wang, and William Yang Wang.
2019. Not all actions are equal: Learning to stop in language-grounded urban navigation. In *ViGIL@*
NeurIPS.
## A Haa-Transformer Model Details
There are around 120m parameters in our HAATransformer model. Our model uses a BERTBASE
encoder (Devlin et al., 2018) with pretrained weights that open-sourced on Hugging Face (Wolf et al., 2020) to extract language feature of the input dialog history. For ANDH task, We extract two sets of language embeddings in ANDH task, where the input is either all the previous and current dialog rounds, or only the current dialog round for the target sub-trajectory. The language embeddings that include all previous dialog are used to attend to the image feature extracted by DarkNet-53 and flatten the feature to only 768 long per frame. The other with only current dialog is passed to the multimodel encoder. Whereas in ANDH-Full task, since the agent starts at an initial position with no previous dialog, only one set of language embeddings is extracted and used.
The attention modules that are used in our HAATransformer model and the HAA-LSTM model have the same structure. They generate soft attention based on dot-product attention mechanism.
The inputs are context features and attention features. There is a fully connected layer before the output of the attention module. The context features attended by the attention features are concatenated with the attention features to become the input of the fully connected layer, and the output will be the attention module's output which has the same shape as the attention features.
## A.1 Navigation Progress Prediction
As for the navigation progress prediction, we adopt the idea of L2Stop (Xiang et al., 2019) and create a navigation progress predictor to help decide when to stop, which overcomes the problem that the model would fail to stop at the desired position.
The navigation progress is trained with the supervision of IoU score of the current view area uˆ*i,j,k* and the destination area. When the IoU is larger than 0, it indicates the designation area is seen in uˆ*i,j,k* and the larger the IoU the closer the uˆ*i,j,k* to the desi,j
. During the inference time, the predicted navigation stops when the generated navigation progress indicator is less than 0.5.
## B Haa-Lstm Model
We also design a Human Attention Aided Multimodal LSTM model for experiments in Section 6 as shown in Figure 6, where it takes the same input
![10_image_0.png](10_image_0.png)
![10_image_1.png](10_image_1.png)
and output as our HAA-Transformer model. We also add the same human attention decoder as in our HAA-Transformer model for human attention prediction training. The language embeddings, visual observation and direction embeddings are also extracted in the same way.
## C Training Details
We train all models on one Nvidia RTX A6000 graphic card. We train all baseline models as well as both HAA-Transformer model and HAA-LSTM model for approximately 150k iterations on ANDH task with batch size being 4 and learning rate being 1e-5. For the ANDH-Full task, since it uses full dialog history as input, where more GPU RAM is needed, we use a batch size of 2 and learning rate of 5e-6 and train the model for 200k interactions which take about 48 hours.
## D Simulator Details
We design a simulator to simulate a drone flying with its onboard camera facing straight downward, as in Figure 7a. The simulator uses satellite images from xView dataset (Lam et al., 2018) for the drone's visual observation, where the observation is square image patches cropped from the satellite image based on the drone's view area, as in Figure 7b. We argue that by using satellite images, our simulator is capable of providing equally rich visual features as in the real world and some examples are shown in Figure 7c. Additionally, since satellite images have boundaries that are not adjacent with each other, we prevent the drone's view area from moving out of boundary by automatically invalidate the drone's action that will lead to out-of-boundary view areas. Further more, for simplicity, we assume perfect control of the drone's movement, and therefore, the drone's current view area is determined by the previous drone's position and navigation action.
During the dataset collection, the follower controls the simulated drone through the simulator interface with keyboards. We defined 8 keys for the control with a total of four degrees of freedoms (DoFs), where there are 2 DoFs for horizontal movement, 1 DoF for altitude control, and 1 DoF for rotation control. Despite that our simulator environment is continuous, the control through the interface is discrete for an easier control experience. Every time a key is pressed, the simulated drone will move along the DoF for a fixed distance and the higher the simulated drone flies, the faster it moves with one press of the keyboard. Before the follower presses ESC key to stop the control, he/she can also generate the human attention data by using the mouse to left-click on the attended image region shown on the interface. After every left-click, a circle with a radius being 1/10 of the current view area width will become the attended region and be displayed on the interface. Also, a right-click on the circle will remove this region from the attention record.
## E Dataset Details And Examples
We provide some details about our dataset with related examples. Each example includes a dialog, sample drone's visual observation with human attention and navigation overviews.
![11_image_0.png](11_image_0.png)
![11_image_1.png](11_image_1.png)
## E.1 Human Attention
We record the attention from the follower through our simulator interface when the follower is controlling the simulated drone. In each navigation trajectory collected, the attention are stored in a list where the order of the list is ignored, meaning that the attended areas either recorded earlier or later during the navigation will be retrieved together when using the human attention data. In this way, the human attention data becomes more accurate since the area that followers missed to attend in the current view area is likely to be included in the future time steps. Also, because the previously attended area is kept in later view areas, less effort is needed to annotate the attended areas. We find that 1/7 of the area on average is attended to in the recorded view areas ui,j .
## E.2 Dialog Structure
The dialogs contained in our AVDN datset have a various number of rounds. Since the dialog rounds are split based on the data collection rounds, each dialog round contains only one instruction written by the commander. Figure 8 shows an example of a simple dialog with only one dialog round.
However, when the follower can not follow the initial instruction to find the destination area, questions will be brought up, and therefore more dialog
![12_image_0.png](12_image_0.png)
rounds will be introduced. Every dialog rounds start with the instruction from human commanders and could include one or more utterance from the follower, depending on if auto-instructions exist.
We provide details about auto-instructions in the next sub-section. Also, when followers are writing the questions, we enable them to define some shortcut keys for frequently used general questions such as "could you further explain it?", "where should I
go?", etc. To avoid templated dialogs, followers are forbidden to only use the shortcut for the question but need to incorporate their own language.
## E.3 Auto-Instructions
When the follower claims that the destination is reached, our simulator will check the navigation result automatically using the success condition described in Section 3.3. Then, auto-instructions will be generated based on whether the destination area is reached successfully. Specifically, when the success condition is met, an auto-instruction of "Yes, you have found it!!!" will be added to the dialog as the end; if the destination is in the center of the view area, but the view area is either too large or too small, failing the success condition, the simulator will also provide auto-instructions asking the follower to adjust the drone's altitude and verify again if the success conditions are met or not, as shown is Figure 9.
## E.4 Dialog Quality
To ensure the dialogs in our dataset have good quality, we make efforts during the data collection process and conduct extra examination for the dialog data after the data collection.
![12_image_1.png](12_image_1.png)
![12_image_2.png](12_image_2.png)
During the data collection, online workers from Amazon Mechanical Turk (AMT) are playing as commanders and provide instructions in the dialog, who, compared with the follower that we hired to work on-site and supervised by us in-person, have a higher chance of generating low quality and incorrect language instructions. We develop some strategies to deal with these undesired instructions.
First, if the follower, guided by the instruction, lets the drone navigate to a direction that is more than 90 degrees different from the ground truth direction of the destination area, our simulator will automatically label the instruction as incorrect. Those labeled instructions will be discarded and collected again. Then, since the follower needs to read and understand the instructions, they have the chance to report the instructions as being low-quality or incomprehensible and skip them. Finally, in the remaining instructions that are not spotted as lowquality or incorrect, it is still possible that instructions are not accurate or incorrect due to human mistakes from the AMT workers, such as in Figure 10. By manually checking the dialogs and navigation trajectories in randomly selected subsets of our AVDN dataset, we spot only 5 instructions with potential mistakes in 50 dialogs. In those cases, because the follower successfully followed the instruction, we keep those instructions unchanged even if they didn't help guide the follower to find the destination area. In the real world, the user in AVDN could also make mistakes, so this mistake tolerance strategy makes our dataset even closer to real scenarios.
We further examine the dialog quality after the data collection by analyzing the dialogs. The average utterance (human-written instructions and questions) in a dialog is 3.1, with a minimum and maximum being 1 and 7 because each dialog includes at least one instruction written by a human.
The average number of words written by commander and follower are 45 and 19, and there are about 15 words from auto-instructions. Also, in Figure 11, we show the distribution of the top 30 most frequent words in the commander's and follower's utterances. The results show a smooth variance across nouns, verbs, adjectives, and prepositions, indicating that our dataset's utterances have rich contents and good variety. Last but not least, we manually checked the dialogs in all validation and test sets by visualizing the corresponding navigation trajectory and the dialog, and we observed no
## F Interface For Workers In Dataset Collection
We use help from Amazon Mechanical Turk (AMT)
workers and human drone experts during the collection of our Aerial Vision-and-Language Navigation
(AVDN) dataset, where the AMT workers play the commander role providing instructions the drone experts play the follower role asking questions and controlling the drone. In this section, we demonstrate the interface for both groups of workers with all the information they receive in the data collection procedure.
## F.1 Interfaces For Commanders
There are two interfaces for commanders (AMT
workers) depending on which data collection round it is. The interface includes one trajectory each time and contains all the information needed for the commander to create the instruction. Detailed and step-by-step instructions for what needs to be done as a commander are introduced at the beginning of the interface. The AMT workers need to write sentences in the *Answer* according to the provided information.
In the first round of data collection, the commander needs to write the initial instruction based on an overview of the AVDN trajectory. As shown in Fig. 12 the satellite image shows the trajectory overview marked with a predefined staring position
(the red point with an arrow showing the drone's direction at the starting position) and a destination area (purple bounding box).
In the data collection round after the first round, the commander is required to give follow-up instructions, i.e., answers, to the questions from the follower. The user interface for the second and following rounds is shown in Fig. 13. Besides all the information shown to the commander in the first round, the follower is also provided with previous dialog, past trajectories (broken purple line), and the view area corresponding to the most recent time step (named current view area marked with white bounding box).
## F.2 Interface For Followers
The follower uses an interface to interact with our simulator. In our simulator, they receive instructions from the commander and control the simulated drone. The keyboard is used to simulate
![14_image_0.png](14_image_0.png)
![14_image_1.png](14_image_1.png)
the drone controller with eight keys representing four channels in the controller, where key w and s represent the channel controlling forward and backward movement, key a and d represent the channel controlling left and right movement, key q and e represent the channel controlling rotating clockwise and anti-clockwise movement and key 1 and 2 represent the channel controlling altitude change.
After the experts finish the control, the commander can either claim the destination is reached or ask questions for more instruction. As in Fig. 14, the interface is an image window showing the simulated drone's visual observation and a text window for displaying the previous dialogs and inputting questions from the follower. There is a compass on the top left of the image window, showing the orientation of the simulated drone. The red cross in the image window shows the center of the view, helping the follower control the drone to right above the destination area, and the red corners in the window show the area of 0.4 IoU with the view area. The follower is instructed to make the destination area larger than the area indicated by the red corners in order to finish successful navigation.
Please read and follow the following requirements carefully, if your answer will be rejected if it is wrong or has poor quality (cannot be
followed).
Background:
"Decribe the drow is a my showing the flying translation (surple line) of a cone, The chore wors pass.
The conversion is monks of conversion consistent part of Steps:
Step 1: In the Image below, find:
one corrent done lection (provi). The red arrow is pointing in the forward drection of the drone.
Step 2: figure out the greenation. The forward direction of the helicopter is where the red arrow is pointing.
- You may use the compuss to tell the corentation. Up is north Step I : Read the previous conversations and continue the conversation by answering the following questions.
![15_image_0.png](15_image_0.png)
![15_image_1.png](15_image_1.png)
![16_image_0.png](16_image_0.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section of Limitation.
✓ A2. Did you discuss any potential risks of your work?
Section of Broader Impact.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3, Section 5 And Appendix A.1
✓ B1. Did you cite the creators of artifacts you used?
Section 3, Section 5 and Appendix A.1
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section 3, Section 5 and Appendix A.1
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 3, Section 5 and Appendix A.1
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Appendix E.4
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 3.2.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 3.3
## C ✓ **Did You Run Computational Experiments?** Section 6
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 5 and Appendix C
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 6
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 6
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4.3 and Appendix A
D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Appendix 3.3
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Section F
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Section 3.3
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Section 3.3 and Appendix F
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. The data collection part of this project is classified as exempt by Human Subject Committee vis IRB protocols.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Amazon Mechanical Turk website for collecting commander data, we have no access to the workers' information. |
zhang-etal-2023-improved | Improved Logical Reasoning of Language Models via Differentiable Symbolic Programming | https://aclanthology.org/2023.findings-acl.191 | Pre-trained large language models (LMs) struggle to perform logical reasoning reliably despite advances in scale and compositionality. In this work, we tackle this challenge through the lens of symbolic programming. We propose DSR-LM, a Differentiable Symbolic Reasoning framework where pre-trained LMs govern the perception of factual knowledge, and a symbolic module performs deductive reasoning. In contrast to works that rely on hand-crafted logic rules, our differentiable symbolic reasoning framework efficiently learns weighted rules and applies semantic loss to further improve LMs. DSR-LM is scalable, interpretable, and allows easy integration of prior knowledge, thereby supporting extensive symbolic programming to robustly derive a logical conclusion. The results of our experiments suggest that DSR-LM improves the logical reasoning abilities of pre-trained language models, resulting in a significant increase in accuracy of over 20{\%} on deductive reasoning benchmarks. Furthermore, DSR-LM outperforms a variety of competitive baselines when faced with systematic changes in sequence length. | # Improved Logical Reasoning Of Language Models Via Differentiable Symbolic Programming
Hanlin Zhang1,∗ Jiani Huang2,∗ Ziyang Li2 Mayur Naik2 **Eric Xing**1,3,4 1Carnegie Mellon University, 2University of Pennsylvania, 3Mohamed Bin Zayed University of Artificial Intelligence, 4Petuum Inc.
## Abstract
Pre-trained large language models (LMs) struggle to perform logical reasoning reliably despite advances in scale and compositionality. In this work, we tackle this challenge through the lens of symbolic programming. We propose DSR-LM, a Differentiable Symbolic Reasoning framework where pre-trained LMs govern the perception of factual knowledge, and a symbolic module performs deductive reasoning. In contrast to works that rely on hand-crafted logic rules, our differentiable symbolic reasoning framework efficiently learns weighted rules and applies semantic loss to further improve LMs. DSR-LM is scalable, interpretable, and allows easy integration of prior knowledge, thereby supporting extensive symbolic programming to robustly derive a logical conclusion. The results of our experiments suggest that DSR-LM improves the logical reasoning abilities of pre-trained language models, resulting in a significant increase in accuracy of over 20% on deductive reasoning benchmarks.
Furthermore, DSR-LM outperforms a variety of competitive baselines when faced with systematic changes in sequence length.1
## 1 Introduction
Complex applications in natural language processing involve dealing with two separate challenges.
On one hand, there is the richness, nuances, and extensive vocabulary of natural language. On the other hand, one needs logical connectives, long reasoning chains, and domain-specific knowledge to draw logical conclusions. The systems handling these two challenges are complementary to each other and are likened to psychologist Daniel Kahneman's human "system 1" and "system 2" (Kahneman, 2011): while the former makes fast and intuitive decisions, akin to neural networks, the latter
| Language Model | Symbolic Reasoner |
|------------------|---------------------|
| - Rapid reasoning - Sub-symbolic knowledge - Handling noise, ambiguities, and naturalness - Process open domain text - Can learn in-context | - Multi-hop reasoning - Compositionality - Interpretability - Data efficiency - Can incorporate domainspecific knowledge |
![0_image_0.png](0_image_0.png)
Table 1: Respective advantages of **language models**
and **symbolic reasoners**.
thinks more rigorously and methodically. Considering LMs as "system 1" and symbolic reasoners as "system 2", we summarize their respective advantages in Table 1.
Although pre-trained LMs have demonstrated remarkable predictive performance, making them an effective "system 1", they fall short when asked to perform consistent logical reasoning (Kassner et al., 2020; Helwe et al., 2021; Creswell et al.,
2022), which usually requires "system 2". In part, this is because LMs largely lack capabilities of systematic generalization (Elazar et al., 2021; Hase et al., 2021; Valmeekam et al., 2022).
In this work, we seek to incorporate deductive logical reasoning with LMs. Our approach has the same key objectives as neuro-symbolic programming (Chaudhuri et al., 2021): compositionality, consistency, interpretability, and easy integration of prior knowledge. We present DSR-LM, which tightly integrates a differentiable symbolic reasoning module with pre-trained LMs in an end-to-end fashion. With DSR-LM, the underlying LMs govern the perception of natural language and are finetuned to extract relational triplets with only weak supervision. To overcome a common limitation of symbolic reasoning systems, the reliance on human-crafted logic rules (Huang et al., 2021; Nye et al., 2021), we adapt DSR-LM to induce and finetune rules automatically. Further, DSR-LM allows incorporation of semantic loss obtained by logical integrity constraints given as prior knowledge, which substantially helps the robustness.
We conduct extensive experiments showing that DSR-LM can consistently improve the logical reasoning capability upon pre-trained LMs. Even if DSR-LM uses a RoBERTa backbone with much less parameters and does not explicitly take triplets as supervision, it can still outperform various baselines by large margins. Moreover, we show that DSR-LM can induce logic rules that are amenable to human understanding to explain decisions given only higher-order predicates. As generalization over long-range dependencies is a significant weakness of transformer-based language models (Lake and Baroni, 2018; Tay et al., 2020), we highlight that in systematic, long-context scenarios, where most pre-trained or neural approaches fail to generalize compositionally, DSR-LM can still achieve considerable performance gains.
## 2 Related Work
Logical reasoning with LMs. Pre-trained LMs have been shown to struggle with logical reasoning over factual knowledge (Kassner et al., 2020; Helwe et al., 2021; Talmor et al., 2020a). There is encouraging recent progress in using transformers for reasoning tasks (Zhou et al., 2020; Clark et al.,
2021; Wei et al., 2022; Chowdhery et al., 2022; Zelikman et al., 2022) but these approaches usually require a significant amount of computation for re-training or human annotations on reasoning provenance (Camburu et al., 2018; Zhou et al.,
2020; Nye et al., 2021; Wei et al., 2022). Moreover, their entangled nature with natural language makes it fundamentally hard to achieve robust inference over factual knowledge (Greff et al., 2020; Saparov and He, 2022; Zhang et al., 2022).
There are other obvious remedies for LMs' poor reasoning capability. Ensuring that the training corpus contains a sufficient amount of exemplary episodes of sound reasoning reduces the dependency on normative biases and annotation artifacts (Talmor et al., 2020b; Betz et al., 2020; Hase et al., 2021). Heuristics like data augmentation are also shown to be effective (Talmor et al., 2020b).
But the above works require significant efforts for crowdsourcing and auditing training data. Our method handily encodes a few prototypes/templates of logic rules and is thus more efficient in terms of human effort. Moreover, our goal is fundamentally different from theirs in investigating the tight integration of neural and symbolic models in an end-to-end manner.
Neuro-symbolic reasoning. Neuro-symbolic approaches are proposed to integrate the perception of deep neural components and the reasoning of symbolic components. Representative works can be briefly categorized into regularization (Xu et al.,
2018), program synthesis (Mao et al., 2018), and proof-guided probabilistic programming (Evans and Grefenstette, 2018; Rocktäschel and Riedel, 2017; Manhaeve et al., 2018; Zhang et al., 2019; Huang et al., 2021). To improve compositionality of LMs, previous works propose to parameterize grammatical rules (Kim, 2021; Shaw et al.,
2021) but show that those hybrid models are inefficient and usually underperform neural counterparts.
In contrast to the above works, DSR-LM focuses on improving LMs' reasoning over logical propositions with tight integration of their pre-trained knowledge in a scalable and automated way.
## 3 Methodology 3.1 Problem Formulation
Each question answering (QA) example in the dataset is a triplet containing input text x, query q, and the answer y. Figure 1 shows an instance that we will use as our running example. The input text x is a natural language passage within which there will be a set of entities, possibly referenced by 3rd person pronouns. The sentences hint at the relationships between entities. For example,
"Dorothy went to her brother Rich's birthday party" implies that Rich is Dorothy's brother and Dorothy is Rich's sister. The query q is a tuple of two entities, representing the people with whom we want to infer the relation. The expected relation is stored in the answer y, which will be one of a confined set of possible relations R, allowing us to treat the whole problem as an ∣R∣-way classification problem. We focus only on the problems where the desired relation is not explicitly stated in the context but need to be deduced through a sequence of reasoning.
## 3.2 Methodology Overview
The design of DSR-LM concerns tightly integrating a perceptive model for relation extraction with a symbolic engine for logical reasoning. While we apply LMs for low-level perception and relation extraction, we employ a symbolic reasoning module to consistently and logically reason about the extracted relations. With a recent surge in neurosymbolic methods, reasoning engines are made differentiable, allowing us to differentiate through
![2_image_0.png](2_image_0.png)
the logical reasoning process. In particular, we employ Scallop (Huang et al., 2021) as our reasoning engine. We propose two add-ons to the existing neuro-symbolic methodology. First, some rules used for logical deduction are initialized using language models and further tuned by our end-to-end pipeline, alleviating human efforts. Secondly, we employ integrity constraints on the extracted relation graphs and the logical rules, to improve the logical consistency of LMs and the learned rules.
Based on this design, we formalize our method as follows. We adopt pretrained LMs to build relation extractors, denoted Mθ, which take in the natural language input x and return a set of probabilistic relational symbols r. Next, we employ a differentiable deductive reasoning program, Pϕ, where ϕ represents the weights of the learned logic rules. It takes as input the probabilistic relational symbols and the query q and returns a distribution over R as the output yˆ. Overall, the deductive model is written as yˆ = Pϕ(Mθ(x), q). (1)
Additionally, we have the semantic loss (sl) derived by another symbolic program Psl computing the probability of violating the integrity constraints:
lsl = Psl(Mθ(x), ϕ) (2)
Combined, we aim to minimize the objective J
over training set D with loss function L:
$$J(\theta,\phi)=\frac{1}{|\mathcal{D}|}\sum_{(x,q,y)\in\mathcal{D}}w_{1}\mathcal{L}(\mathcal{P}_{\phi}(\mathcal{M}_{\theta}(x),q),y)$$ $$+w_{2}\mathcal{P}_{\mathbb{S}1}(\mathcal{M}_{\theta}(x),\phi),\tag{3}$$
$$\hat{y}={\mathcal{P}}_{\phi}({\mathcal{M}}_{\theta}(x),q).$$
$$l_{\mathrm{{\tiny{S}}}\perp}={\mathcal{P}}_{\mathrm{{\tiny{S}}}\perp}({\mathcal{M}}_{\theta}(x),\phi)$$
where w1 and w2 are tunable hyper-parameters to balance the deduction loss and semantic loss.
## 3.3 Relation Extraction
Since pre-trained LMs have strong pattern recognition capabilities for tasks like Named-EntityRecognition (NER) and Relation Extraction (RE)
(Tenney et al., 2019; Soares et al., 2019), we adopt them as our neural components in DSR-LM. To ensure that LMs take in strings of similar length, we divide the whole context into multiple windows.
The goal is to extract the relations between every pair of entities in each windowed context. Concretely, our relation extractor Mθ comprises three components: 1) a Named-Entity Recognizer (NER)
to obtain the entities in the input text, 2) a pretrained language model, to be fine-tuned, that converts windowed text into embeddings, and 3) a classifier that takes in the embedding of entities and predicts the relationship between them. The set of parameters θ contains the parameters of both the LM and the classifier.
We assume the relations to be classified come from a finite set of relations R. For example in CLUTRR (Sinha et al., 2019), we have 20 kinship relations including mother, son, uncle, fatherin-law, etc. In practice, we perform (∣R∣ + 1)-
way classification over each pair of entities, where the extra class stands for "n/a". The windowed contexts are split based on simple heuristics of
"contiguous one to three sentences that contain at least two entities", to account for coreference resolution. The windowed contexts can be overlapping and we allow the reasoning module to deal with noisy and redundant data. Overall, assuming that there are m windows in the context x, we extract mn(n − 1)(∣R∣ + 1) probabilistic relational symbols. Each symbol is denoted as an atom of the form p(*s, o*), where p ∈ R ∪ {n/a}
is the relational predicate, and s, o are the two entities connected by the predicate. We denote
$$({\mathfrak{I}})$$
the probability of such symbol extracted by the LM and relational classifier as Pr(p(*s, o*) ∣ θ). All these probabilities combined form the output vector r = Mθ(x) ∈ R
mn(n−1)(∣R∣+1).
## 3.4 Differentiable Symbolic Inference
The symbolic inference modules Pϕ and Psl are responsible for processing the extracted relations to deduce 1) an expected output relation in R, and 2) a semantic loss encoding the probability of constraint violation. There are two main objectives for these modules. First, they need to logically reason about the output relation and the semantic loss based on the extracted relational symbols r, the query q, and the rule weights ϕ. Second, they need to compute the gradients of yˆ and lsl with respect to θ and ϕ, namely ∂yˆ
∂θ ,
∂yˆ
∂ϕ,
∂lsl
∂ϕ , and ∂lsl
∂θ , in order for the fine-tuning and rule learning to happen.
Logical deduction. Logic rules can be applied to known facts to deduce new ones. For example, below is a horn clause, which reads "if b is a's brother and c is b's daughter, then c is a's niece":
niece(*a, c*) ← brother(*a, b*) ∧ daughter(*b, c*).
Note that the structure of the above rule can be captured by a higher-order logical predicate called "composite" (abbreviated as comp ). This allows us to express many other similarly structured rules with ease. For instance, we can have comp(brother, daughter, niece) and comp(father, mother, grandmother) . With this set of rules, we may derive more facts based on known kinship relations. In fact, composition is the only kind of rule we need for kinship reasoning. In general, there are many other useful higher-order predicates to reason over knowledge bases, which we list out in Table 2.
| Predicate | Example |
|-------------|-------------------------|
| transitive | transitive(relative) |
| symmetric | symmetric(spouse) |
| inverse | inverse(husband, wife) |
| implies | implies(mother, parent) |
Table 2: Higher-order predicate examples.
Probability propagation. We seek to have the deduced facts to also be associated with probabilities computed using probabilities predicted by the underlying relation extractor Mθ. This is achieved by allowing the propagation of probabilities. For example, we have the proof tree with probabilities:
## 0.9 $\cdot$ brother($D$,$R$) 0.8 $\cdot$ day 0.72 $\cdot$ piece($D$,$K$)
$\frac{1}{2}$ .
0.72 ∶∶ niece(*D, K*)
In practice, there could be multiple steps in the proof tree (multi-hop) and one fact can be derived by multiple proof trees. We employ the inference algorithms based on approximated weighted model counting (WMC) presented in (Manhaeve et al.,
2018) to account for probabilistic inference under complex scenarios. Since the WMC procedure is augmented for differentiation, we can obtain the gradient ∂yˆ
∂r
. From here, we can obtain ∂yˆ
∂θ =
∂yˆ
∂r
∂r
∂θ ,
where the second part can be automatically derived from differentiating Mθ.
Rule learning. Hand-crafted rules could be expensive or even impossible to obtain. To alleviate this issue, DSR-LM applies LMs to help automatically extract rules, and further utilizes the differentiable pipeline to fine-tune the rules. Each rule such as comp(brother, daughter, niece)
is attached a weight, initialized by prompting an underlying LM. For example, the prompt we use for extracting comp(r,p,q) is "one's r's p is their
<q:mask>". Given that the relations *r, p, q* ∈ R,
DSR-LM automatically enumerates r and p from R while querying for LM to unmask the value of q.
LM then returns a distribution of words, which we take an intersection with R. The probabilities combined form the initial rule weights ϕ. This type of rule extraction strategy is different from existing approaches in inductive logic programming since we are exploiting LMs for existing knowledge about relationships.
Note that LMs often make simple mistakes answering such prompt. In fact, with the above prompt, even GPT-3 can only produce 62% of composition rules correctly. While we can edit prompt to include few-shot examples, in this work we consider fine-tuning such rule weights ϕ within our differentiable reasoning pipeline. The gradient with respect to ϕ is also derived with the WMC
procedure, giving us ∂yˆ
∂ϕ. In practice, we use two optimizers with different hyper-parameters to update the rule weights ϕ and the underlying model parameter θ, in order to account for optimizing different types of weights.
Semantic loss and integrity constraints. In general, learning with weak supervision label is hard, not to mention that the deductive rules are learnt as well. We thereby introduce an additional semantic loss during training. Here, semantic loss is derived by a set of integrity constraints used to regularize the predicted entity-relation graph as well as the learnt logic rules. In particular, we consider rules that detect *violations* of integrity constraints.
For example, "if A is B's father, then B should be A's son or daughter" is an integrity constraint for relation extractor—if the model predicts a father relationship between A and B, then it should also predict a son or daughter relationship between B
and A. Encoded in first order logic, it is
∀*a, b,* father(*a, b*) ⇒ (son(*b, a*) ∨ daughter(*b, a*)).
Through differentiable reasoning, we evaluate the probability of such constraint being violated, yielding our expected *semantic loss*. In practice, arbitrary number of constraints can be included, though too many interleaving ones could hinder learning.
## 4 Experiments
We evaluate DSR-LM on both CLUTRR and DBpedia-INF. We show that DSR-LM has accurate and generalizable long-range reasoning capability.
## 4.1 Datasets
CLUTRR (Sinha et al., 2019) consists of kinship reasoning questions. Given a context that describes a family's routine activity, the goal is to deduce the relationship between two family members that is not explicitly mentioned in the story. Although the dataset is synthetic, the sentences are crowdsourced and hence there is a considerable amount of naturalness inside the dataset. The family kinship graph is synthetic and the names of the family members are randomized. For ablation study, we manually crafted 92 kinship composition rules as an external symbolic knowledge base. This yields the following symbolic information for each datapoint: 1) the full kinship graph corresponding to the story, 2) the symbolic knowledge base (KB),
and 3) a query representing the question. The CLUTRR dataset is divided into different difficulties measured by k, the number of facts used in the reasoning chain. For training, we only have 10K data points with 5K k = 2 and another 5K
k = 3, meaning that we can only receive supervision on data with short reasoning chains. The test set, on the other hand, contains 1.1K examples with k ∈ {2*, . . . ,* 10}.
DBpedia-INF is a curated subset of the evaluation dataset used in RuleBert (Saeed et al., 2021).
Similar to CLUTRR, it is generated synthetically to test the reasoning capability of LMs. Given a synthetic passage describing the relation between entities, and soft deductive logic rules, we aim to deduce the relationship between any two entities.
The symbolic program of DBpedia-INF consists of 26 predicates, 161 soft rules mined from DBpedia, and 16 rules defining the negation and symmetricity between the predicates. The difficulty of the questions is represented in terms of reasoning length from k ∈ {0*, . . . ,* 5}.
2 Larger k implies harder question. Compared to the exact dataset used in Rulebert, we clean it in order to ensure the question-answer pairs are logically consistent and probabilistically correct.
## 4.2 Experimental Setup
Implementation. We employ Scallop (Huang et al., 2021) as the differentiable symbolic inference module. We show the program used for CLUTRR reasoning task in Figure 2. It comprises relation type declarations, deductive rules for kinship reasoning, and integrity constraints for computing semantic loss (attached in the Appendix).
The program used for DBpedia-INF is written in a similar manner with additional high-order predicates listed in Table 2.
Pre-trained LMs for fine-tuning. We used the HuggingFace (Wolf et al., 2019) pre-trained *w2vgoogle-news-300*, RoBERTa-base, and DeBERTabase as the pretrained language models. We finetune RoBERTa-base and DeBERTa-base during training with binary cross entropy loss. Our relation extraction module is implemented by adding an MLP classifier after the LM, accepting a concatenation of the embedding of the two entities and the embedding of the whole windowed context.
Our model. Our main model, DSR-LM, uses RoBERTa as the underlying LM. The relation classifier is a 2-layer fully connected MLP. For training, we initialize ϕ by prompting the LM. To accelerate the learning process, we use multinomial sampling to retrieve 150 rules for symbolic reasoning. During testing, we will instead pick the top 150 rules.
We use two Adam optimizer to update θ and ϕ, with learning rate 10−5and 10−2respectively.
For ablation studies, we present a few other models. First, we ablate on back-bone LMs. Specifically, we have DSR-LM-DeBERTa which uses De2A length of 0 means that the hypothesis can be verified using the facts alone without using any rules.
![5_image_0.png](5_image_0.png)
BERTa as the back-bone LM. DSR-w2v-BiLSTM, on the other hand, uses as back-bone the word2vec
(Mikolov et al., 2013) model for word embedding and BiLSTM (Huang et al., 2015) for sequential encoding. For DSR-LM-with-Manual-Rule we treat the logic rules as given, meaning that we provide 92 composition rules for CLUTRR and around 180 rules for DBpedia-INF. In this case, we set ground truth rules to have 1.0 weight and therefore ϕ is not learnt. Then, we have DSR-LM-without-IC
which does not have integrity constraints and semantic loss. Lastly, we have DSR-without-LM that takes ground truth structured entity relation graph as input. This way, we do not need the underlying relation extractor and only ϕ needs to be learned.
Baselines. We compare DSR-LM with a spectrum of baselines from purely neural to logically structured. The baselines include pretrained large language models (BERT (Kenton and Toutanova, 2019) and RoBERTa (Liu et al., 2019)), non-LM
counterparts (BiLSTM (Hochreiter and Schmidhuber, 1997; Cho et al., 2014) and BERT-LSTM),
structured models (GAT (Velickovi ˇ c et al. ´ , 2018),
RN (Santoro et al., 2017), and MAC (Hudson and Manning, 2018)), and other neuro-symbolic models (CTP (Minervini et al., 2020), RuleBert (Saeed et al., 2021)). The structured models include those models with relational inductive biases, while the neuro-symbolic model uses logic constraints.
Baseline setup. We highlight a few baselines we include for completeness but are treated as unfair comparison to us: GAT, CTP, and GPT-3 variants.
All baselines other than GAT and CTP take as input natural language stories and the question to produce the corresponding answer. GAT and CTP, on the contrary, takes entity relation graph rather than natural language during training and testing.
The model sizes are different across baselines as well. Model size generally depends on two parts, the backbone pre-trained LM, and the classification network built upon the LM. GPT-3 contains 175B parameters, and RoBERTa uses 123M parameters. The classification model of our method has 2.97M parameters (assuming using embeddings from RoBERTa). With extra 10K parameters for rule weights, our DSR-LM framework has around 127M parameters.
For GPT-3 variants, we conduct experiments on CLUTRR with GPT-3 under the Zero-Shot (GPT-3 ZS), GPT-3 Fine-Tuned (GPT-3 FT), and Few(5)- Shot (GPT-3 5S) (Brown et al., 2020), as well as Zero-Shot-CoT (GPT-3 ZS-CoT) (Kojima et al.,
2022a) settings. For fair comparison, we also include the ground truth kinship composition knowledge in GPT-3 zero shot (GPT-3 ZS w/ Rule), and 5 shot (GPT-3 5S w/ Rule). We include the prompts we used and additional details in Appendix A.
![5_image_1.png](5_image_1.png)
## 4.3 Experimental Results
DSR-LM systematically outperforms a wide range of baselines by a large margin. We evaluate DSR-LM and baselines on both CLUTRR and DBpedia-INF, as reported in Figure 3 and Table 3.
In the CLUTRR experiment, DSR-LM achieves the best performance among all the models (Figure 3). Next, we examine how models trained on stories generated from clauses of length k ≤ 3 and 3067
![6_image_0.png](6_image_0.png)
![6_image_1.png](6_image_1.png)
![6_image_2.png](6_image_2.png)
evaluated on stories generated from larger clauses of length k ≥ 4. A fine-grained generalizability study reveals that although all models' performances decline as the reasoning length of the test sequence increases, pure neural-based models decrease the fastest (Figure 4a and 4b). It manifests the systematic issue that language models alone are still not robust for length generalization (Lake and Baroni, 2018). On the other hand, the performance of DSR-LM decreases much slower as test reasoning length increases and outperforms all the baselines when k ≥ 4.
In the DBpedia-INF experiment, DSR-LM outperforms RuleBert by 37% in terms of overall performance (Table 3), showing that DSR-LM has much more robust generalization. Recall that RuleBert aims to improve the logical reasoning of LMs by straightforward fine-tuning with soft rules and facts. Our results show that augmenting data alone for fine-tuning do not effectively improve systematicity. Meanwhile, DSR-LM imbues reasoning inductive biases throughout training and learns useful rules to generalize to longer reasoning lengths.
Learning interpretable logic rules. DSR-LM is capable of producing explicit logic rules as part of the learning process. For presentation, we show the top-10 rules learnt from DSR-LM model in Table 4.
We compare the top-92 most likely prompted and fine-tuned rules against the 92 hand-crafted rules, and 70 of them match. Additionally, we find that our rule weight fine-tuning helps correct 11 of the incorrect rules produced by LM. Through this qualitative analysis, it is clear that DSR-LM provides an interface to probe and interpret the intermediate steps, enhancing the interpretability.
GPT-3 variants are inferior in long-range reasoning. Interestingly, ZS scores 28.6% accuracy on CLUTRR while ZS-CoT scores 25.6%, suggesting that the chain-of-thought prompting might not work in long-range reasoning (Figure 3). In fact, there are many cases where GPT-3 favors complication over simplicity: GPT-3 frequently answers "stepdaughter", "stepmother", and "adopted son", while the real answers are simply "daughter", "mother", and "son". Additionally, GPT-3 could derive the correct result for the wrong reason, e.g. "Jeffrey is Gabrielle's son, which would make William her grandson, and Jeffrey's brother."
While we count the final answer to be correct
(William is Jeffrey's brother), there is a clear inconsistency in the reasoning chain: William cannot be Gabrielle's grandson and Jeffrey's brother simultaneously, given that Jeffrey is Gabrielle's son.
Lastly, we observe that, both GPT-3 FT and many other methods have an accuracy drop as k becomes larger (Figure 4b), ZS and ZS-CoT stay relatively consistent, suggesting that the size of context and the reasoning chain may have a low impact on GPT3's performance.
## 4.4 Analyses And Ablation Studies Symbolic Reasoner Consistently Improves Lms
and word embeddings. Since DSR-LM has a model agnostic architecture, we study how the choice of different LMs impacts the reasoning performance. As shown in Table 5, the two transformer-based models have on-par performance and outperform the word2vec one. However, note that the word2vec-based model still has better performance than all other baselines. Besides higher final accuracy, the pre-trained transformerbased language model also accelerates the training process. Both DSR-LM-RoBERTa and DSRLM-DeBERTa reach their best performance within 20 epochs, while it takes DSR-w2v-BiLSTM 40 epochs to peak.
![7_image_3.png](7_image_3.png)
Table 5: Ablation study about **neural backbones** of DSR-LM. We compare the CLUTRR performance of DSR-LM using different LMs.
Incorporate domain knowledge. DSR-LM allows injecting domain specific knowledge. In DSRLM-with-Rule, we manually crafted 92 rules for kinship reasoning to replace the learnt rules. As shown in Table 6, it obtained a 0.36% performance gain over DSR-LM. The fact that the improvement is marginal implies our method extracts useful rules to obtain on-par performance with manually crafted ones. DSR-LM-without-IC, our model without integrity constraints specified on predicted relations and rules, performs worse than DSR-LM, suggesting that logical integrity constraints are essential component for improving the model robustness.
The impact of the relation extractor. To understand what causes the failure case of DSR-LM, we study the performance of our relation classification model separately. We isolate the trained relation
![7_image_0.png](7_image_0.png)
Table 6: Ablation study. We compare our model's performance on CLUTRR with different setups.
extractor and found that it reaches 84.69% accuracy on the single relation classification task. For comparison, we train a relation extractor using all the intermediate labels in the training dataset, and it reaches 85.32% accuracy. It shows that even using only weak supervision (i.e., the final answers to multi-hop questions), our approach can reach onpar performance as supervised relation extraction.
Reasoning over structured KBs. To understand the rule learning capability of our approach, we design our ablation model DSR-without-LM to take as input ground-truth KBs instead of natural language. In this case, rule weights are not initialized by LM but randomized. As shown in Table 7, our model outperforms GAT and CTP which also operates on structured KBs. It demonstrates that our differentiable rule learning paradigm learns rules to reason about KBs consistently.
Model Accuracy (%)
![7_image_1.png](7_image_1.png)
![7_image_2.png](7_image_2.png)
Table 7: DSR-without-LM compared against GAT and CTP on reasoning with ground truth KBs. For this comparison we train on k ∈ [2, 3] and test on k ∈ [4, 10].
Failure cases of DSR-LM. We showcase in Appendix Table 8 that even state-of-the-art large LMs are prone to logical fallacies. On the other hand, the failure case of our method usually occurs in the stage of relation extraction. For example, for the following sentence "Christopher and Guillermina are having a father-daughter dance", our RoBERTa based relation extractor fails to recognize the fatherdaughter relationship but rather thinks C and G
have a husband-wife relationship. We require most of the relation extraction to be correct in order to avoid cascading error. As the error rate on individual relation extraction accumulates, it leads to the observed drop in accuracy as k becomes larger.
5 Concluding Remarks We investigate how to improve LMs' logical reasoning capability using differentiable symbolic reasoning. Through extensive experiments, we demonstrate the effectiveness of DSR-LM over challenging scenarios where widely deployed large LMs fail to reason reliably. We hope our work can lay the groundwork for exploring neuro-symbolic programming techniques to improve the robustness of LMs on reasoning problems.
## Limitations
The primary limitation of DSR-LM is the need for a confined problem space. It requires a well-defined relational schema to perform logical reasoning, and thus will not be suited for an open-ended problem setup. Nevertheless, DSR-LM is suitable for many domain specific problems within Natural Language Understanding and Reasoning, allowing domain experts to freely inject domain-specific knowledge in a structured and logical manner.
## References
Gregor Betz, Christian Voigt, and Kyle Richardson.
2020. Critical thinking for language models. *arXiv* preprint arXiv:2009.07185.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *NeurIPS*.
Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-snli: Natural language inference with natural language explanations. Advances in Neural Information Processing Systems, 31.
Swarat Chaudhuri, Kevin Ellis, Oleksandr Polozov, Rishabh Singh, Armando Solar-Lezama, Yisong Yue, et al. 2021. *Neurosymbolic Programming*.
Kyunghyun Cho, Bart van Merrienboer, Çaglar Gülçehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In *EMNLP*.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. *arXiv preprint* arXiv:2204.02311.
Peter Clark, Oyvind Tafjord, and Kyle Richardson. 2021.
Transformers as soft reasoners over language. In IJCAI.
Antonia Creswell, Murray Shanahan, and Irina Higgins.
2022. Selection-inference: Exploiting large language
models for interpretable logical reasoning. arXiv preprint arXiv:2205.09712.
Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. 2021. Measuring and improving consistency in pretrained language models. *TACL*.
Richard Evans and Edward Grefenstette. 2018. Learning explanatory rules from noisy data. *Journal of* Artificial Intelligence Research, 61:1–64.
Klaus Greff, Sjoerd Van Steenkiste, and Jürgen Schmidhuber. 2020. On the binding problem in artificial neural networks. *arXiv preprint arXiv:2012.05208*.
Peter Hase, Mona Diab, Asli Celikyilmaz, Xian Li, Zornitsa Kozareva, Veselin Stoyanov, Mohit Bansal, and Srinivasan Iyer. 2021. Do language models have beliefs? methods for detecting, updating, and visualizing model beliefs. *arXiv preprint arXiv:2111.13654*.
Chadi Helwe, Chloé Clavel, and Fabian M. Suchanek.
2021. Reasoning with transformer-based models:
Deep learning, but shallow reasoning. In *AKBC*.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural computation*.
Jiani Huang, Ziyang Li, Binghong Chen, Karan Samel, Mayur Naik, Le Song, and Xujie Si. 2021. Scallop:
From probabilistic deductive databases to scalable differentiable reasoning. *NeurIPS*.
Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. *arXiv* preprint arXiv:1508.01991.
Drew A Hudson and Christopher D Manning. 2018.
Compositional attention networks for machine reasoning. In *ICLR*.
Daniel Kahneman. 2011. *Thinking, fast and slow*.
Macmillan.
Nora Kassner, Benno Krojer, and Hinrich Schütze. 2020.
Are pretrained language models symbolic reasoners over knowledge? *CoNLL*.
Jacob Devlin Ming-Wei Chang Kenton and Lee Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL.
Yoon Kim. 2021. Sequence-to-sequence learning with latent neural grammars. *NeurIPS*.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022a. Large language models are zero-shot reasoners. *NeurIPS*.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022b. Large language models are zero-shot reasoners. *NeurIPS*.
Brenden Lake and Marco Baroni. 2018. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In *ICML*.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Robin Manhaeve, Sebastijan Dumancic, Angelika Kimmig, Thomas Demeester, and Luc De Raedt. 2018.
Deepproblog: Neural probabilistic logic programming. *NeurIPS*.
Jiayuan Mao, Chuang Gan, Pushmeet Kohli, Joshua B
Tenenbaum, and Jiajun Wu. 2018. The neurosymbolic concept learner: Interpreting scenes, words, and sentences from natural supervision. In *ICLR*.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality.
NeurIPS.
Pasquale Minervini, Sebastian Riedel, Pontus Stenetorp, Edward Grefenstette, and Tim Rocktäschel. 2020.
Learning reasoning strategies in end-to-end differentiable proving. In *ICML*.
Maxwell Nye, Michael Tessler, Josh Tenenbaum, and Brenden M Lake. 2021. Improving coherence and consistency in neural sequence models with dualsystem, neuro-symbolic reasoning. *NeurIPS*.
Tim Rocktäschel and Sebastian Riedel. 2017. End-toend differentiable proving. *NeurIPS*.
Mohammed Saeed, Naser Ahmadi, Preslav Nakov, and Paolo Papotti. 2021. Rulebert: Teaching soft rules to pre-trained language models. In *EMNLP*.
Adam Santoro, David Raposo, David G Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. 2017. A simple neural network module for relational reasoning. *NeurIPS*.
Abulhair Saparov and He He. 2022. Language models are greedy reasoners: A systematic formal analysis of chain-of-thought. *arXiv preprint arXiv:2210.01240*.
Peter Shaw, Ming-Wei Chang, Panupong Pasupat, and Kristina Toutanova. 2021. Compositional generalization and natural language variation: Can a semantic parsing approach handle both? In ACL.
Koustuv Sinha, Shagun Sodhani, Jin Dong, Joelle Pineau, and William L. Hamilton. 2019. Clutrr: A diagnostic benchmark for inductive reasoning from text. *EMNLP*.
Livio Baldini Soares, Nicholas Fitzgerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks:
Distributional similarity for relation learning. In ACL.
Alon Talmor, Yanai Elazar, Yoav Goldberg, and Jonathan Berant. 2020a. olmpics-on what language model pre-training captures. *TACL*.
Alon Talmor, Oyvind Tafjord, Peter Clark, Yoav Goldberg, and Jonathan Berant. 2020b. Leap-of-thought:
Teaching pre-trained models to systematically reason over implicit knowledge. *NeurIPS*.
Yi Tay, Mostafa Dehghani, Samira Abnar, Yikang Shen, Dara Bahri, Philip Pham, Jinfeng Rao, Liu Yang, Sebastian Ruder, and Donald Metzler. 2020. Long range arena: A benchmark for efficient transformers.
In *ICLR*.
Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. Bert rediscovers the classical nlp pipeline. In ACL.
Karthik Valmeekam, Alberto Olmo, Sarath Sreedharan, and Subbarao Kambhampati. 2022. Large language models still can't plan (a benchmark for llms on planning and reasoning about change). *arXiv preprint* arXiv:2206.10498.
Petar Velickovi ˇ c, Guillem Cucurull, Arantxa Casanova, ´
Adriana Romero, Pietro Liò, and Yoshua Bengio.
2018. Graph attention networks. In *ICLR*.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022.
Chain of thought prompting elicits reasoning in large language models. *NeurIPS*.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-ofthe-art natural language processing. *EMNLP*.
Jingyi Xu, Zilu Zhang, Tal Friedman, Yitao Liang, and Guy Broeck. 2018. A semantic loss function for deep learning with symbolic knowledge. In *ICML*.
Eric Zelikman, Yuhuai Wu, and Noah D Goodman.
2022. Star: Bootstrapping reasoning with reasoning. *NeurIPS*.
Hanlin Zhang, Yi-Fan Zhang, Li Erran Li, and Eric Xing. 2022. The impact of symbolic representations on in-context learning for few-shot reasoning. *arXiv* preprint.
Yuyu Zhang, Xinshi Chen, Yuan Yang, Arun Ramamurthy, Bo Li, Yuan Qi, and Le Song. 2019. Efficient probabilistic logic reasoning with graph neural networks. In *ICLR*.
Wangchunshu Zhou, Jinyi Hu, Hanlin Zhang, Xiaodan Liang, Maosong Sun, Chenyan Xiong, and Jian Tang. 2020. Towards interpretable natural language understanding with explanations as latent variables.
NeurIPS.
## A Implementation Details
Reasoner details. The learning of rules and the fine-tuning of the underlying LM should happen separately with different learning rates - finetuning LM is an intricate process that requires a very small learning rate, while rules should be learned with larger learning rates since gradients are directly back-propagated onto the weights. This can be realized by employing two separate optimizers, one for fine-tuning and the other for rule learning. During training time, we rotate training the two parts by toggling one and the other optimizer for every 10 batches of data points.
Rule learning training setup. For rule learning, we can initialize the transitivity tensor using the language model provided composite rules. Since the CLUTRR dataset consists of 20 different relations and a transitivity relationship is defined over 3 relations, there are 8K possible transitivity facts over these relations. Specifically, we give every predicted composite rule by the GPT with a 0.5 weight, while initializing the other rules with a range such as [0, 0.1], since otherwise, an insensible transitive fact may be getting a random high weight while it effectively does nothing for reasoning. The learning process encourages the rules that yield the correct query result and suppresses the rules that lead to wrong answers. To avoid the exponential blow-up caused by injecting all the 8K
rules in the reasoning engine, we sample 200 rules according to their weights during the training time and deterministically use the top 200 learned rules during the test time. For the *QA-No-Rule* setup, the confidence score of rules, the MLP classifier for relation extraction, and the underlying LM are learned and updated simultaneously during training. To account for their difference, we employ two Adam optimizers ARL and ARE. ARE is used for optimizing models for relation extraction, and thus will take as parameters the MLP classifier and the underlying LM. It has a low learning rate 0.00001 since it needs to fine-tune LMs. ARL, on the other hand, will take as a parameter the confidence score tensor for the transitive rules, and is set to have a higher learning rate of 0.001. For the integrity constraints, we set the result integrity violation loss with the weight 0.1, and set the rule integrity constraint violation loss with the weight 0.01. We set the batch size to 16 and train for 20 epochs.
To obtain the initial rule weights for the composition rule in our CLUTRR experiment, the prompt we use is "Mary's P's Q is her <mask>." where P
and Q are enumerations of all possible relationships, and the unmasked value is treated as the answer R, producing composite(P, Q, R). For the other rule templates we used, the prompts are 1. transitive: "is R's R one's R? <mask>";
the probability of the unmasked word being "yes" is treated the rule weight for transitive(R).
2. symmetric: "does A is R of B means B is R
of A? <mask>"; the probability of the unmasked word being "yes" is treated the rule weight for symmetric(R).
3. inverse: "A is R of B means B is <mask>
of A"; the unmasked value is treated as the answer P, producing inverse(R, P).
4. implies: "does R imply P? <mask>"; the probability of unmasked value being "yes" is treated as the rule weight for implies(R,
P).
GPT-3 Prompt Setups. For Zero-Shot, we use the prompt "So B is A's:" for the query pair
(*A, B*) to ask GPT-3 to complete the relationship between A and B. We pick the phrase in the first line or before the first period from the completed text and compare it directly with the ground truth relation. For the Few(5)-Shot setting, we randomly select 5 examples from the training dataset used for other models (k ∈ [2, 3]) to serve as examples. We use the same prompt for Few-Shot and Fine-Tuned as the Zero-Shot and the automated GPT-3 finetuning setup for our training dataset, trained for 4 epochs. To add in the transitive KB, we simply include 92 hand-crafted rules in natural language as a part of the prompt, and we performed Zero-shot with KB, and Few(5)-shot with KB experiments.
For the Zero-Shot-CoT setting, we use the prompt
"Who is B to A? Let's think step by step" to suggest GPT-3 to auto-complete while working out a reasoning chain. Under this setup, it is impossible to compare the answer to the ground truth automatically. Therefore, we manually check through the whole test dataset of CLUTRR.
## B Additional Experimental Results
In Table 8, we showcase the failure cases of large LMs for logical inference, where Zero-shot-CoT
denotes zero-shot chain-of-thoughts (Kojima et al., 2022b).
| k | Prompt | GPT-3 CoT Output | Ground Truth | | |
|--------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------|----------------|----|----------|
| 2 | Jeffrey asked his brother William if he would come help him fix his car next weekend. Gabrielle was delighted to see her grandson, William, at her birthday party. Who is Jeffrey to Gabrielle? Let's do it step by step: | Jeffrey is Gabrielle's grandson. | grandson | | |
| William is Jeffrey's brother. | | | | | |
| 3 | Elliott felt lonely when his wife Marian was gone to see her mother Molly. Elliott and his daughter Karen took a day off school to go to the zoo. Who is Karen to Molly? Let's do it step by step: | Karen is Elliott's daughter. Molly is Elliott's wife's mother. Therefore, Karen is Molly's granddaughter. | granddaughter | | |
| 4 | Frank took his daughter Julia to the park. Beverly, Emilie´s sister went with them. Kimberley was excited because today she was going to the zoo with her uncle Frank. Jenny spent a great day shopping with her daughter, Kimberley. Who is Beverly to Jenny? Let's do it step by step: | Beverly is Jenny's sister-in-law. | niece | | |
| 5 | Melissa and her father, Stephen, went to the marina. Stephen's daughter, Margaret, had purchased a boat, and they were eager to see it. Annie and her aunt, Evelyn, went to the deli. They got a half a pound of corned beef, and two pounds of salami. Melissa and her sister Annie were arguing over a toy. While they were fighting, Melissa's sister Margaret grabbed the toy, and ran off to play with it by herself. Who is Evelyn to Melissa? Let's do it step by step: | Melissa's | father | is | Stephen. |
| Stephen has a daughter, Margaret. Margaret is Annie's aunt. Therefore, Evelyn is Annie's aunt, and Melissa's great-aunt. | aunt | | | | |
| 6 | James took his grandson Davis to the baseball game. Sharon and her son Brian went out to lunch together yesterday. Brian and his brother Oliver went to the park to play basketball. Davis played scrabble with his mother Sharon and his brother Oliver. Davis asked his brother Oliver if he would come help him fix his car next weekend. Who is James to Oliver? Let's do it step by step: | James is Davis's grandfather, so | grandfather | | |
| he is Oliver's great-grandfather. | | | | | |
| 7 | George rushed to the hospital to find out that his wife and already given birth to a boy and had named him David. Larry was playing checkers with his brother Craig. Larry's sister Jean was waiting to play. Judy made mixed drinks for her son Craig. Judy's daughter Jean was jealous because she was n't offered one. Jean planned a trip to the zoo for her brother, Craig. They had a great time. George was having trouble with his divorce because he only got to see his daughter Jean a few times a week. Who is David to Craig? Let's do it step by step: | George's wife gave birth to a boy and named him David. David is Craig's nephew. | brother | | |
| 8 | Anthony went to the bakery with his uncle Roger to pick up some bread for lunch. Anthony loved going to the store with his mom Mary. She always bought him snacks Charles would n't let his son Floyd go to the park by himself. Floyd's brother Anthony offered to go with him. Faye was hoping to travel out of state to visit her son Cecil. Ellie's brother, Charles, was doing his best to find the best flight. Mary and her son Anthony went out to lunch together yesterday. Cecil took her sister, Christine, out to dinner for her birthday. Who is Roger to Christine? Let's do it step by step: | Cecil is Faye's son, so Faye is Christine's mother-in-law. Christine is married to Faye's son, so Christine is Faye's daughter-inlaw. Charles is Ellie's brother, so Charles is Christine's brotherin-law. Anthony is Charles's son, so Anthony is Christine's nephew. Roger is Anthony's uncle, so Roger is Christine's brother-inlaw. | uncle | | |
| 9 | Gino bought gifts for his lovely sister, Jami. Michele and William left that evening to go bowling. Ethel and her son, Gordon, went to the park, and had a wonderful time. Manuel's father, Kenneth, thought his son was a monster. Meanwhile Michele thought her son Manuel was a little angel. Gino, Kenneth's other son, agreed with his dad. William asked his brother Larry if he would come help him fix his car next weekend. Joseph was so happy for his brother, Larry, getting a promotion. Gordon and his uncle Joseph went to the movies Sunday after church and got popcorn and candy while they were there. Who is Jami to Ethel? Let's do it step by step: | 1. Jami is Gino's sister. 2. Ethel is Gino's mother. 3. Therefore, Jami is Ethel's daughter-in-law. | niece | | |
| Table 8: Qualitative analysis of GPT-3 Zero-Shot-CoT on the CLUTRR dataset. The novelty comes from the | | | | | |
Table 8: Qualitative analysis of GPT-3 Zero-Shot-CoT on the CLUTRR dataset. The novelty comes from the sentence marked in orange. Queries that are of interest are marked in blue. Correct answer in the output is marked green and incorrect ones are marked red.
| Confidence | Rule |
|---------------------------------------------------------------------------------------------|-------------------------------------------------------|
| 1.154 | mother(A,B) ← sister(A,C) ∧ mother(C,B) |
| 1.152 | daughter(A,B) ← daughter(A,C) ∧ sister(C,B) |
| 1.125 | sister(A,B) ← daughter(A,C) ∧ aunt(C,B) |
| 1.125 | father(A,B) ← brother(A,C) ∧ father(C,B) |
| 1.123 | granddaughter(A,B) ← grandson(A,C) ∧ sister(C,B) |
| 1.120 | brother(A,B) ← sister(A,C) ∧ brother(C,B) |
| 1.117 | brother(A,B) ← son(A,C) ∧ uncle(C,B) |
| 1.105 | brother(A,B) ← daughter(A,C) ∧ uncle(C,B) |
| 1.104 | daughter(A,B) ← wife(A,C) ∧ daughter(C,B) |
| 1.102 | mother(A,B) ← brother(A,C) ∧ mother(C,B) |
| 1.102 | brother(A,B) ← father(A,C) ∧ son(C,B) |
| 1.096 | sister(A,B) ← mother(A,C) ∧ daughter(C,B) |
| 1.071 | sister(A,B) ← father(A,C) ∧ daughter(C,B) |
| 1.071 | son(A,B) ← son(A,C) ∧ brother(C,B) |
| 1.070 | uncle(A,B) ← father(A,C) ∧ brother(C,B) |
| 1.066 | daughter(A,B) ← son(A,C) ∧ sister(C,B) |
| 1.061 | brother(A,B) ← brother(A,C) ∧ brother(C,B) |
| 1.056 | grandson(A,B) ← husband(A,C) ∧ grandson(C,B) |
| 1.055 | sister(A,B) ← son(A,C) ∧ aunt(C,B) |
| 1.053 | grandmother(A,B) ← sister(A,C) ∧ grandmother(C,B) |
| 1.050 | granddaughter(A,B) ← granddaughter(A,C) ∧ sister(C,B) |
| 1.050 | grandmother(A,B) ← brother(A,C) ∧ grandmother(C,B) |
| 1.047 | grandson(A,B) ← granddaughter(A,C) ∧ brother(C,B) |
| 1.046 | grandfather(A,B) ← mother(A,C) ∧ father(C,B) |
| 1.036 | son(A,B) ← daughter(A,C) ∧ brother(C,B) |
| 1.035 | sister(A,B) ← brother(A,C) ∧ sister(C,B) |
| 1.029 | grandmother(A,B) ← mother(A,C) ∧ mother(C,B) |
| 1.027 | grandfather(A,B) ← sister(A,C) ∧ grandfather(C,B) |
| 1.019 | brother(A,B) ← mother(A,C) ∧ son(C,B) |
| 1.017 | granddaughter(A,B) ← wife(A,C) ∧ granddaughter(C,B) |
| Table 9: Showcase of the learnt logic rules with top@30 confidence of DSR-LM rule learning. | |
// question :: (sub, obj) represents a question asking about relation
// between `sub` and `obj`
type question(sub: String, obj: String)
// context :: (rela, sub, obj) represents there is a `rela`
// between `sub` and `obj`
type kinship(rela: usize, sub: String, obj: String)
// Composition rule :: (r1, r2, r3) represents compositing r1 and r2 yields r3 type composite(r1: usize, r2: usize, r3: usize)
// Constants used for defining relation properties const DAUGHTER = 0, SISTER = 1, ..., MOTHER_IN_LAW = 19 const MALE = 0, FEMALE = 1 type gender(r: usize, gender_id: i32) rel gender = {(DAUGHTER, FEMALE), (SISTER, FEMALE), ..., (MOTHER_IN_LAW, FEMALE)} type gen(r: usize, gen_id: i32) rel gen = {(DAUGHTER, -1), (SISTER, 0), ..., (MOTHER_IN_LAW, 1)} // Composition rel kinship(r3, x, z) = composite(r1, r2, r3),
kinship(r1, x, y), kinship(r2, y, z), x != z
// Answer rel answer(r) = question(s, o), kinship(r, s, o)
// Integrity constraints on results rel violation(!r) = r := forall(a, b: kinship(GRANDFATHER, a, b) =>
(kinship(GRANDSON, b, a) or kinship(GRANDDAUGHTER, b, a)))
rel violation(!r) = r := forall(a, b: kinship(GRANDMOTHER, a, b) =>
(kinship(GRANDSON, b, a) or kinship(GRANDDAUGHTER, b, a)))
rel violation(!r) = r := forall(a, b: kinship(FATHER, a, b) =>
(kinship(SON, b, a) or kinship(DAUGHTER, b, a)))
rel violation(!r) = r := forall(a, b: kinship(MOTHER, a, b) =>
(kinship(SON, b, a) or kinship(DAUGHTER, b, a)))
rel violation(!r) = r := forall(a, b: kinship(HUSBAND, a, b) => kinship(WIFE, b, a)) rel violation(!r) = r := forall(a, b: kinship(BROTHER, a, b) =>
(kinship(SISTER, b, a) or kinship(BROTHER, b, a)))
// Integrity constraints on rules rel violation(!r) = r := forall(r1, r2, r3:
composite(r1, r2, r3) and gender(r2, g) => gender(r3, g))
rel violation(!r) = r := forall(r1, r2, r3:
composite(r1, r2, r3) and gen(r1, g1) and gen(r2, g2) => gen(r3, g1 + g2))
Figure 5: Full Scallop program including deductive rules and integrity constraints
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Left blank.
A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Left blank. |
takase-etal-2023-b2t | {B}2{T} Connection: Serving Stability and Performance in Deep Transformers | https://aclanthology.org/2023.findings-acl.192 | In the perspective of a layer normalization (LN) position, the architecture of Transformers can be categorized into two types: Post-LN and Pre-LN.Recent Transformers prefer to select Pre-LN because the training in Post-LN with deep Transformers, e.g., ten or more layers, often becomes unstable, resulting in useless models. However, in contrast, Post-LN has also consistently achieved better performance than Pre-LN in relatively shallow Transformers, e.g., six or fewer layers. This study first investigates the reason for these discrepant observations empirically and theoretically and discovers 1, the LN in Post-LN is the source of the vanishing gradient problem that mainly leads the unstable training whereas Pre-LN prevents it, and 2, Post-LN tends to preserve larger gradient norms in higher layers during the back-propagation that may lead an effective training. Exploiting the new findings, we propose a method that can equip both higher stability and effective training by a simple modification from Post-LN.We conduct experiments on a wide range of text generation tasks and demonstrate that our method outperforms Pre-LN, and stable training regardless of the shallow or deep layer settings. | # B2T Connection: Serving Stability And Performance In Deep Transformers
Sho Takase†∗ Shun Kiyono† Sosuke Kobayashi‡ **Jun Suzuki**‡
†LINE Corporation ‡Tohoku University
{sho.takase, shun.kiyono}@linecorp.com [email protected] [email protected]
## Abstract
From the perspective of the layer normalization
(LN) positions, the architectures of Transformers can be categorized into two types: Post-LN
and Pre-LN. Recent Transformers tend to be Pre-LN because, in Post-LN with deep Transformers (e.g., those with ten or more layers),
the training is often unstable, resulting in useless models. However, Post-LN has consistently achieved better performance than Pre-LN
in relatively shallow Transformers (e.g., those with six or fewer layers). This study first investigates the reason for these discrepant observations empirically and theoretically and made the following discoveries: 1, the LN in PostLN is the main source of the vanishing gradient problem that leads to unstable training, whereas Pre-LN prevents it, and 2, Post-LN tends to preserve larger gradient norms in higher layers during the back-propagation, which may lead to effective training. Exploiting the new findings, we propose a method that can provide both high stability and effective training by a simple modification of Post-LN. We conduct experiments on a wide range of text generation tasks.
The experimental results demonstrate that our method outperforms Pre-LN, and enables stable training regardless of the shallow or deep layer settings. Our code is publicly available at https://github.com/takase/b2t_connection.
## 1 Introduction
To prevent the vanishing (or exploding) gradient problem in the training of a deep neural network
(DNN), various techniques, such as batch normalization (Ioffe and Szegedy, 2015) and residual connection (Srivastava et al., 2015; He et al.,
2016a), have been proposed and widely used in almost all recent DNNs. Transformer (Vaswani et al., 2017) employs the layer normalization (Ba et al., 2016) for this purpose. Transformer is currently the most successful model architecture
∗ A part of this work was done when the author was at Tokyo Institute of Technology.
15
![0_image_0.png](0_image_0.png)
in DNNs. It was firstly developed for applying sequence-to-sequence tasks, such as machine translation (Vaswani et al., 2017), summarization (Takase and Okazaki, 2019), and automatic speech recognition (ASR) (Wang et al., 2020), and is currently used in speech, vision, and many other information processing research fields.
As reported in the batch normalization literature (He et al., 2016b), the position of the normalization layers primarily affects both the stability and resultant performance of a trained model.
In Transformers, some previous studies have investigated the impact of the layer normalization positions (Wang et al., 2019; Xiong et al., 2020).
There are currently two major layer normalization positions in Transformers: Pre-Layer Normalization (Pre-LN) and Post-Layer Normalization (PostLN). Pre-LN applies the layer normalization to an input for each sub-layer, and Post-LN places the layer normalization after each residual connection.
The original Transformer (Vaswani et al., 2017)
employs Post-LN. However, many recent studies have suggested using Pre-LN (Wang et al., 2019; Baevski and Auli, 2019; Brown et al., 2020) because the training of deep Transformers (e.g., those with ten or more layers) using Post-LN is often unstable, resulting in useless models. Figure 1 shows loss curves for an actual example; the training of 18 layered Transformer encoder-decoders
(18L-18L) on a widely used WMT English-toGerman machine translation dataset. These figures clearly show that the Post-LN Transformer encoder-decoders fail to train the model. However, in contrast, Liu et al. (2020) reported that Post-LN
consistently achieved better performance than PreLN on a machine translation task when they used 6 layered (relatively shallow, 6L-6L) Transformers.
This paper focuses specifically on such discrepancies between Pre-LN and Post-LN in configurations with various number of layers. We investigate the sources of the instability of training in deep configurations and the superior performance in shallow configurations for Post-LN, compared with that for Pre-LN, to understand the essentials of the differences between Pre-LN and Post-LN. We discover that the layer normalization in Post-LN is the main source of the vanishing gradient problem that leads to unstable training, whereas Pre-LN prevents it, as shown in Figure 1. In particular, we clarify that the layer normalization is a significant factor of the vanishing gradient problem by comparing the input/output vector norms of gradient flows for each layer normalization during back-propagation.
These analyses bring us a novel idea that can satisfy higher stability by skipping over layer normalizations and provide better performance than Pre-LN
regardless of their layer sizes. Consequently, we propose a method that is based on Post-LN Transformers but has additional residual connections to enable stable training.
We conduct experiments on a wide range of text generation tasks, namely machine translation, summarization, language modeling, and ASR. The experimental results lead to the following three new major findings:
1. Post-LN Transformers achieve better performance than Pre-LN Transformers on text generation tasks (not only machine translation (Liu et al., 2020) but also other tasks).
Thus, Post-LN is superior to Pre-LN if the problem of its unstable training can be solved.
2. Our modification enables Post-LN Transformers to stack many layers. 3. Our method can maintain the performance advantage of Post-LN and mitigate its unstable training property, thus providing better performance than Pre-LN.
## 2 Post-Ln And Pre-Ln Transformers
We briefly describe Post-LN and Pre-LN Transformers in this section. The original Transformer (Vaswani et al., 2017) uses Post-LN, in which layer normalizations are located after each residual connection. Let x be an input of a sublayer, and F(·) be a sub-layer of a Transformer, such as a feed-forward network or multi-head attention. Post-LN is defined as follows:
$$\mathrm{PostLN}(x)=\mathrm{LN}(x+{\mathcal{F}}(x)),\qquad\quad(1)$$
where LN(·) is the layer normalization function.
In contrast, Pre-LN places the layer normalization before an input of each sub-layer:
$$\mathrm{PreLN}(x)=x+{\mathcal{F}}(\mathrm{LN}(x)).$$
$\eqref{eq:walpha}$.
Figure 2 (a) and (b) illustrate Post-LN and Pre-LN
Transformer architectures, respectively.
## 3 Gradients Of Transformer Layers
As described in Liu et al. (2020), the vanishing gradient problem often occurs in Post-LN Transformers. Figure 3 shows the gradient norms of each layer for the (a) encoder-side and (b) decoderside at the beginning of training, when 18L-18L
Transformer encoder-decoders are trained on a widely used machine translation dataset (the WMT
English-to-German dataset). Focus on the decoderside of Post-LN as illustrated in Figure 3 (b). This figure shows that shallower layers have smaller gradient norms. In other words, the vanishing gradient occurs in the decoder-side of Post-LN because its gradient norms exponentially decay as they are back-propagated to shallower layers. This result is consistent with the previous study (Liu et al., 2020).
We consider that this vanishing gradient causes the difficulty of stacking many layers with the Post-LN setting, as shown in Figure 1.
To investigate the vanishing gradient empirically in more detail, we measure the gradient norms of parts (1) - (5) of Figure 2 (a). Figure 4 shows the gradient norms of each part in the 18th layer1.
This figure shows that the gradient norms decrease drastically from (4) to (3) and (2) to (1). These parts correspond to layer normalizations, as shown in Figure 4. This suggests that layer normalizations in Post-LN Transformers are probably the cause of the vanishing gradient problem.
1Appendix B shows the gradient norms of each part in the 1st and 9th decoders as additional examples.
![2_image_0.png](2_image_0.png)
![2_image_1.png](2_image_1.png)
and (2), as follows:
$$\partial{\rm PostLN}(x)=\frac{\partial{\rm LN}(x+{\cal F}(x))}{\partial(x+{\cal F}(x))}\left(I+\frac{\partial{\cal F}(x)}{\partial x}\right),\tag{3}$$ $$\partial{\rm PreLN}(x)=I+\frac{\partial{\cal F}({\rm LN}(x))}{\partial{\rm LN}(x)}\frac{\partial{\rm LN}(x)}{\partial x},\tag{4}$$
where I is the identity matrix. As Equation (3),
the derivative of Post-LN is equal to the product of two derivatives: one is the layer normalization, and the other consists of the residual connection and sub-layer F. In contrast, in Pre-LN, the derivative of the residual connection is isolated from the term related to the derivative of the layer normalization.
The difference between these equations implies that the residual connection in Pre-LN prevents the vanishing gradient because it retains the gradients of upper layers even if the derivative of the layer normalization decreases gradients drastically.
## 4 Transformations By Each Layer
As described, it is difficult to stack many layers in Post-LN Transformers because the vanishing gra-
![3_image_0.png](3_image_0.png)
10
dient problem occurs. Although Pre-LN is more stable in training, Post-LN can achieve better performance if training succeeds (see Section 6). In this section, we explore the reason for this difference in performance.
Focus Pre-LN in Figure 3. In contrast to PostLN, in Pre-LN, a deeper (higher) layer has a smaller gradient norm. Thus, the parameters of higher layers are not required to change dramatically from their initial values. This implies that higher layers in Pre-LN are not sufficiently effective.
To investigate the effectiveness of higher layers, we focus on the transformations by each layer.
Figure 5 shows the average cosine similarities between the outputs of each pair of layers for 6L6L Transformer encoder-decoders trained on the WMT dataset when several sequences are input.
This figure indicates that the lower-left similarities of Pre-LN are higher than those of Post-LN. This result means that the outputs of shallow layers are similar to the output of the final layer in Pre-LN,
but not in Post-LN. Consequently, higher layers in Pre-LN are less effective than those in Post-LN if training succeeds.
We consider that the residual connection in PreLN causes this phenomenon. As Equation (2)
shows, in Pre-LN, an input x skips over the sublayer F(·) by the residual connection. Thus, the input x is directly connected to the final layer output. This property makes the training stable, as described in Section 3, but causes high similarities between the outputs of the various layers. Therefore, we consider that Pre-LN underperforms Post-LN
because the residual connection in Pre-LN reduces the effectiveness of its higher layers. In contrast, in Post-LN, larger gradient norms in higher layers
(as shown in Figure 3) make higher layers more effective (as shown in Figure 5) but it is necessary to prevent the vanishing gradient problem in shallow layers when we stack many layers.
## 5 Modification For Stable Training In Post-Ln: Bottom-To-Top Connection
This section introduces a modification that makes the training of Post-LN more stable while preserving its high performance. This modification comprises an additional residual connection to mitigate the vanishing gradient in Post-LN by enabling many layers to be stacked.
As discussed in the previous sections, we need a term that retains gradients in the derivatives, as in Equation (4), to prevent the vanishing gradient.
To satisfy this requirement, we propose a residual connection that skips over all layer normalizations except the final one in each layer. Our introduced connection ties an input of a layer to the result of the feed-forward network (FFN), as illustrated by the red arrows in Figure 2 (c). We call this connection **Bottom-to-Top (B2T)** connection, which is formalized in the following equation:
$$x_{i n p}+x_{f f n}+\mathrm{FFN}(x_{f f n}),$$
where xinp is an input of a layer, FFN(·) is an FFN, and x*f fn* is an input of the FFN. In short, xinp skips the layer normalizations after the selfattention and encoder-decoder cross-attention. Because the derivative of xinp is isolated from the terms related to the derivatives of the layer normalizations just behind the attention sub-layers, it retains gradients, as in Pre-LN. For example, in an encoder-side, x*f fn* is as follows:
$$x_{f f n}=\mathrm{LN}(\mathrm{SelfAttn}(x_{i n p})+x_{i n p}),$$
$$(6)$$
where SelfAttn(·) is a self-attention network.
Thus, Equation (5) can be written as follows:
$$\begin{array}{c}{{x_{i n p}+\mathrm{LN}(\mathrm{SelfAttn}(x_{i n p})+x_{i n p})}}\\ {{\qquad+\mathrm{FFN}(\mathrm{LN}(\mathrm{SelfAttn}(x_{i n p})+x_{i n p})),}}\end{array}\tag{7}$$
The derivative of this equation is the following equation:
$$\begin{array}{r}{I+{\frac{\partial(\mathrm{LN}(\mathrm{SelfAttn}(x_{i n p})+x_{i n p}))}{\partial x_{i n p}}}}\\ {+{\frac{\partial(\mathrm{FFN}(\mathrm{LN}(\mathrm{SelfAttn}(x_{i n p})+x_{i n p})))}{\partial x_{i n p}}},}\end{array}\tag{8}$$
Because this derivative contains I, which is unrelated to the derivatives of internal layer normalizations, our B2T connection (i.e., xinp) helps to propagate gradients. For a decoder-side, we can prove this property in the same manner.
Figure 3 (b) indicates that B2T connection mitigates the vanishing gradient of 18L-18L encoderdecoders. Moreover, we locate B2T connection before the final layer normalization in each layer to avoid a direct connection to the final layer output based on the discussion in Section 4. Thus, B2T
connection preserves the property of Post-LN with respect to the transformations performed by each layer, as illustrated in Figure 5 (c)2.
## 6 Experiments
Through experiments, we indicate following three findings.
- Post-LN Transformers achieve better performance than Pre-LN Transformers if their training succeeds.
- B2T connection enables the training of deep Transformers with the Post-LN configuration.
- Our modification preserves the performance advantage of Post-LN Transformers, which therefore outperform Pre-LN Transformers.
We describe the essential experimental configurations in this section. Appendix A presents more details, such as the hyper-parameters and computational budgets.
## 6.1 Machine Translation
6.1.1 Dataset The machine translation task has been widely used to investigate the performance of Transformer-based methods since the original Transformer (Vaswani et al., 2017; Ott et al.,
2018; Wang et al., 2019; Xiong et al., 2020; Liu et al., 2020). We adopted the widely used WMT
English-to-German training dataset (Vaswani et al.,
2017; Ott et al., 2018), which contains 4.5M
2We also tried a connection that skips over all layer normalizations including the final one in each layer but it significantly impaired the performance. When we prepare such a connection, the connection ties an input to the output directly.
Because this connection inhibits transformations performed by each layer as described in Section 4, it is reasonable that the performance is impaired. Therefore, we avoid skipping the final layer normalization in each layer to take the advantage of Post-LN.
sentence pairs. We applied the byte-pair-encoding
(BPE) algorithm (Sennrich et al., 2016) to construct a vocabulary set in the same manner as previous studies. We set the number of BPE
merge operations to 32K and shared the vocabulary between the source and target languages. We used newstest2010-2016 to investigate the performance, following Takase and Kiyono (2021).
We compare Post-LN, **Pre-LN**, and Post-LN with our B2T connection (**B2T connection**) Transformers. We used fairseq3(Ott et al., 2019) as an implementation of Transformers. We stacked 6 and 18 layers for the encoders and decoders (6L6L and 18L-18L) as the widely used configuration and deep configuration, respectively. We used the Transformer (base) setting for dimension sizes of internal layers. In addition to the above methods, we evaluate the following five methods, which are recent approaches that enable the training of deep Transformers. We used the same hyper-parameters for all methods except T-Fixup. For T-Fixup, we used the hyper-parameters reported in Huang et al.
(2020) to prevent divergence.
DLCL To make Transformers deep, Wang et al.
(2019) proposed dynamic linear combination of layers (DLCL), which uses the weighted sum of the lower layers as an input of a layer. In contrast to our B2T connection, which is an additional connection within each layer, DLCL uses a connection among layers. We apply DLCL to Post-LN Transformers.
We used the official implementation4.
Admin Liu et al. (2020) proposed adaptive model initialization (Admin), which uses additional parameters to stabilize the training of Post-LN Transformers. This method requires the variances of internal layers to initialize the additional parameters.
Thus, this method first processes several forward steps for the initialization, and then conducts the actual training. In a nutshell, this method incurs additional computational costs. We used the official implementation5.
T-Fixup Huang et al. (2020) proposed an initialization scheme for Transformers, T-Fixup, to perform stable training without the learning rate warm-up and layer normalizations. Because this method can remove the cause of the vanishing gradient, we can
| Method | 2010 | 2011 | 2012 | 2013 | 2014 | 2015 | 2016 | Average |
|------------------------------|--------------------------------|--------|--------|--------|--------|--------|--------|-----------|
| Enc-Dec: 6L-6L | | | | | | | | |
| Post-LN | 24.27 | 22.06 | 22.43 | 26.11 | 27.13 | 29.70 | 34.40 | 26.59 |
| Pre-LN | 24.03 | 21.77 | 22.08 | 25.63 | 26.27 | 29.07 | 33.84 | 26.10 |
| DLCL (Wang et al., 2019) | 23.94 | 22.00 | 22.24 | 26.11 | 27.37 | 29.71 | 34.26 | 26.52 |
| Admin (Liu et al., 2020) | 24.32 | 21.79 | 22.17 | 26.26 | 27.14 | 29.61 | 34.12 | 26.49 |
| T-Fixup (Huang et al., 2020) | 24.09 | 21.98 | 22.04 | 25.96 | 26.92 | 29.45 | 34.56 | 26.43 |
| RealFormer (He et al., 2021) | 24.18 | 22.02 | 22.17 | 26.02 | 26.98 | 29.36 | 34.15 | 26.41 |
| DeepNet (Wang et al., 2022) | 24.08 | 21.76 | 22.09 | 25.90 | 26.85 | 29.62 | 34.39 | 26.38 |
| B2T connection | 24.12 | 21.93 | 22.29 | 26.31 | 26.84 | 29.48 | 34.73 | 26.53 |
| Enc-Dec: 18L-18L | | | | | | | | |
| Post-LN | Training failed (See Figure 1) | N/A | | | | | | |
| Pre-LN | 24.07 | 21.98 | 22.40 | 26.28 | 27.36 | 29.74 | 34.16 | 26.57 |
| DLCL (Wang et al., 2019) | 24.20 | 22.51 | 22.83 | 26.59 | 27.97 | 30.24 | 33.98 | 26.90 |
| Admin (Liu et al., 2020) | 24.56 | 22.17 | 22.62 | 26.48 | 27.99 | 30.35 | 33.88 | 26.86 |
| T-Fixup (Huang et al., 2020) | 24.45 | 22.29 | 22.76 | 26.57 | 27.71 | 30.13 | 34.69 | 26.94 |
| RealFormer (He et al., 2021) | 24.32 | 22.42 | 22.68 | 26.59 | 28.58 | 30.36 | 33.71 | 26.95 |
| DeepNet (Wang et al., 2022) | 24.70 | 22.40 | 22.92 | 26.85 | 28.21 | 30.60 | 34.25 | 27.13 |
| B2T connection | 24.62 | 22.51 | 22.86 | 26.74 | 28.48 | 30.99 | 34.93 | 27.30 |
stack many layers. We used the official implementation6.
RealFormer To improve the performance of Transformers, He et al. (2021) proposed RealFormer, which introduces additional connections into attention sub-layers. Although their motivation is not addressing the vanishing gradient problem, their method is similar to ours with respect to the use of additional connections.
DeepNet Wang et al. (2022) proposed DeepNorm, which uses a weight that corresponds to the number of layers in a residual connection before layer normalizations to stabilize Post-LN based Transformers. They also provided the combination of the initialization scheme and DeepNorm as DeepNet.
![5_image_0.png](5_image_0.png)
The upper part of Table 1 shows results in the 6L-6L configuration. This part indicates that PostLN achieved better scores than Pre-LN on all test sets. In addition, B2T connection outperformed Pre-LN on all test sets. Thus, these methods are superior to Pre-LN when the total number of layers is small.
The lower part of Table 1 shows results in the 18L-18L configuration. This part shows that the training of Post-LN failed, and thus we cannot successfully stack 18L-18L in the vanilla Post-LN.
With the B2T connection, its training succeeded and it outperformed Pre-LN in the 18L-18L configuration. Figure 6 shows the negative log-likelihood
(NLL) values of all methods when we regard newstest2013 as validation data. This figure indicates that the NLLs of Pre-LN are worse than those of the other methods. These results demonstrate that our modification enabled the stacking of many layers
| Method | 2010 | 2011 | 2012 | 2013 | 2014 | 2015 | 2016 | Average |
|--------------------|-----------------|--------|--------|--------|--------|--------|--------|-----------|
| Enc-Dec: 100L-100L | | | | | | | | |
| Post-LN | Training failed | N/A | | | | | | |
| Pre-LN | 24.81 | 22.67 | 23.15 | 26.98 | 28.42 | 30.50 | 34.53 | 27.29 |
| B2T connection | 25.26 | 23.27 | 23.72 | 27.50 | 29.33 | 31.57 | 35.37 | 28.00 |
![6_image_1.png](6_image_1.png)
Table 2: BLEU scores on WMT newstest2010-2016 and their averages in the 100L-100L configuration.
without harm to its performance such as Pre-LN.
In the comparison with the recent methods, B2T
connection outperformed them with respect to the averaged BLEU score. This result implies that our modification is superior to the recent methods. To make our findings more reliable, we also conduct a comparison with the recent methods on the summarization task.
Table 2 shows results in a much deeper configuration: 100L-100L. This table also indicates that B2T connection stabilized the training and outperformed Pre-LN. Appendix C describes the details of this 100L-100L configuration and shows a comparison with the latest method, DeepNet (Wang et al., 2022).
## 6.2 Abstractive Summarization 6.2.1 Dataset
The abstractive summarization task is one of the most famous sequence-to-sequence problems in NLP. In this study, we conduct the experiment on the headline generation task, which is the task of generating a headline from a given sentence (Rush et al., 2015). We used headlinesentence pairs extracted from Annotated English Gigaword (Napoles et al., 2012) by Rush et al.
(2015). This dataset contains 3.8M headlinesentence pairs as the training set and 1951 pairs as the test set. In addition, we used 13M additional headline-sentence pairs extracted from REALNEWS (Zellers et al., 2019) and NewsCrawl (Barrault et al., 2019) for training deep Transformers, following Takase and Kiyono (2021). We applied BPE (Sennrich et al., 2016) to construct a vocabulary set. As in the machine translation experiments, we set the number of BPE merge operations to 32K
and shared the vocabulary between the encoder and decoder sides.
We compare Post-LN, **Pre-LN**, and **B2T connection** Transformers in the same manner as in Section 6.1. In addition, we compare DLCL, **Admin**,
T-Fixup, **RealFormer**, and **DeepNet** because it
| Method | R-1 | R-2 | R-L |
|------------------------------|-----------------|-------|-------|
| Enc-Dec: 6L-6L | | | |
| Post-LN | 38.57 | 19.37 | 35.79 |
| Pre-LN | 38.27 | 19.29 | 35.39 |
| DLCL (Wang et al., 2019) | 38.13 | 18.49 | 35.00 |
| Admin (Liu et al., 2020) | 37.96 | 18.93 | 35.05 |
| T-Fixup (Huang et al., 2020) | 38.11 | 19.13 | 35.32 |
| RealFormer (He et al., 2021) | 38.30 | 19.32 | 35.46 |
| DeepNet (Wang et al., 2022) | 38.27 | 18.89 | 35.34 |
| B2T connection | 38.43 | 19.37 | 35.72 |
| Enc-Dec: 18L-18L | | | |
| Post-LN | Training failed | | |
| Pre-LN | 38.97 | 19.94 | 35.99 |
| DLCL (Wang et al., 2019) | 38.25 | 19.44 | 35.57 |
| Admin (Liu et al., 2020) | 39.10 | 20.08 | 36.30 |
| T-Fixup (Huang et al., 2020) | 39.15 | 19.97 | 36.34 |
| RealFormer (He et al., 2021) | 39.22 | 20.12 | 36.49 |
| DeepNet (Wang et al., 2022) | 39.27 | 19.97 | 36.41 |
| B2T connection | 39.61 | 20.28 | 36.66 |
![6_image_0.png](6_image_0.png)
Table 3: F1 based ROUGE-1, 2, and L scores (columns headed R-1, R-2, and R-L, respectively) on headline generation (Rush et al., 2015).
would be premature to conclude that our modification is more effective than those methods from the results of experiments on the machine translation task alone. We set the numbers of layers of encoders and decoders to 6L-6L and 18L-18L as the base and deep configurations, respectively.
Table 3 shows the ROUGE-1, 2, and L scores achieved by each method on the test set. Since these scores are computed by n-gram overlapping between the generated and correct headlines, a higher score represents better performance.
In the 6L-6L configuration, Post-LN achieved better performance than Pre-LN. Thus, Post-LN
outperformed Pre-LN on the headline generation task if training succeeded. Moreover, B2T connection achieved scores comparable to those of Post-LN.
In the 18L-18L configuration, the training of Post-LN failed. In contrast, the training of B2T connection succeeded, and this method outperformed Pre-LN. Thus, our modification is more suitable than Pre-LN for training deep Transformers to perform the headline generation task.
B2T connection outperformed the recent methods in the 6L-6L configuration and achieved the best ROUGE scores in the 18L-18L configuration.
According to the results on both the machine translation and headline generation tasks, B2T connection achieved performance that was better than, or comparable to, that of previous methods. It is worth emphasizing that, in addition to the performance, our modification does not incur additional computational costs, such as those incurred by DLCL and Admin.
## 6.3 Language Model
In addition to encoder-decoders, we investigate the effect of our B2T connection when used in the decoder side only, i.e., a neural language model.
Because recent pre-trained models, such as the GPT series, are language models trained on a large amount of training data, experimental results in this section give an insight for pre-trained models.
## 6.3.1 Dataset
We used WikiText-103 (Merity et al., 2017), which consists of a large number of tokens. The training, validation, and test sets contain 103M, 0.2M,
and 0.2M tokens, respectively. The vocabulary set contains 0.3M words.
We used a Transformer with adaptive input representations (Baevski and Auli, 2019), which is implemented in fairseq, as the base architecture in this experiment. For the base configuration, we stacked 6 layers, in the same manner as in the machine translation and summarization experiments.
For the deep configuration, we used 16 layers, following Baevski and Auli (2019). For the dimensions of internal layers, we used the same values as those used by Baevski and Auli (2019). We compare Post-LN, Pre-LN, and B2T connection.
Table 4 shows perplexities of each method on the validation and test sets of WikiText-103. Since the perplexity is computed based on the negative loglikelihood, a smaller value corresponds to better performance. The upper part of this table indicates that, with 6 layers, Post-LN and our B2T connection outperformed Pre-LN. When we stacked 16 layers, the training of Post-LN failed, but B2T
connection achieved better performance than PreLN. These results are consistent with results on
![7_image_0.png](7_image_0.png)
Dev Test
Method Clean Other Clean Other
Enc-Dec: 6L-6L
Post-LN 3.78 **8.76** 4.19 **8.74**
Pre-LN 3.89 9.69 4.22 9.65
B2T connection **3.69** 8.97 **3.86** 8.94
Enc-Dec: 12L-6L
Post-LN Training failed
Pre-LN **3.21** 7.91 3.49 8.22 B2T connection 3.26 **7.74 3.48 7.68**
the machine translation and summarization tasks.
Thus, our modification enables the training of deep Transformers for language modeling, and it is more effective than Transformers with Pre-LN.
## 6.4 Automatic Speech Recognition
In addition to experiments on natural language processing tasks, we conduct an experiment on another modality, ASR.
## 6.4.1 Dataset
We used LibriSpeech (Panayotov et al., 2015),
which is the standard English ASR benchmark dataset. The dataset contains 1,000 hours of English speech extracted from audiobooks. We used the standard splits of LibriSpeech: we used all available training data for training and two configurations ('clean' and 'other') of development sets and test sets for evaluation. We applied the same pre-processing as that used by Wang et al. (2020).
We constructed a vocabulary set for the decoderside with SentencePiece (Kudo and Richardson, 2018) by setting the vocabulary size to 10,000. To obtain speech features, we used torchaudio9.
We used the Transformer-based speech-to-text model described in Wang et al. (2020) as the base 9https://github.com/pytorch/audio architecture in this experiment. This model contains a convolutional layer to construct an embedding for the encoder-side but the other parts are identical to the Transformers used on the machine translation and summarization tasks. We used the same dimensions as those of T-Md, described in Wang et al. (2020). We set the numbers of layers to 6L-6L and 12L-6L as the base and deep configurations, respectively, because Wang et al. (2020)
stacked many layers on the encoder-side only. We compare Post-LN, Pre-LN, and B2T connection.
Table 5 shows the word error rates (WERs) of each method on each set. A smaller value of WER corresponds to better performance. The upper part of this table indicates that Post-LN and B2T connection outperformed Pre-LN on all sets in the 6L-6L configuration. The lower part of the table shows that B2T connection succeeded in training and achieved performance that was better than (or comparable to) that of Pre-LN in the 12L-6L configuration10. These results are consistent with those of the other experiments in this study.
## 7 Related Work
Layer normalization (Ba et al., 2016) is a useful technique for training neural networks but its mechanism has been unclear (Xu et al., 2019). The Transformer, which is the standard architecture for various tasks, also contains layer normalizations.
The original Transformer architecture adopted the Post-LN configuration (Vaswani et al., 2017). However, recent Transformer implementations have adopted Pre-LN configurations (Klein et al., 2017; Vaswani et al., 2018; Ott et al., 2019; Baevski and Auli, 2019).
To construct deep Transformers that achieve better performance, recent studies have focused on the behavior of layer normalizations. Wang et al.
(2019) indicated the difficulty of training deep Transformers with Post-LN due to the vanishing gradient problem, and demonstrated that Pre-LN
enables the stacking of many layers through machine translation experiments. In addition, they proposed a method to connect all layers to increase the effectiveness of deep Transformers. Bapna 10Wang et al. (2020) reported that the improvement was small even if they increased the number of parameters. Thus, we emphasize that B2T connection achieved better WERs on dev-other and test-other even though the number of parameters of B2T connection is (almost) equal to that of Pre-LN.
et al. (2018) and Dou et al. (2018) also proposed such connection methods to stack many layers. He et al. (2021) introduced additional connections into attention sub-layers to improve the performance.
Xiong et al. (2020) explored the relation between the warm-up strategy and layer normalizations in Transformers. Through theoretical and empirical analyses, they indicated that Post-LN requires the warm-up strategy to stabilize the training.
Liu et al. (2020) analyzed the training dynamics of Post-LN and Pre-LN Transformers. They then proposed Admin, which consists of additional weight parameters to control the variances of outputs from each sub-layer. In contrast, we indicated that we can stabilize the training of Post-LN Transformers by adding only a residual connection that skips over layer normalizations that cause the vanishing gradient. Some studies have proposed initialization methods to make the training of deep neural networks stable (Zhang et al., 2019a,b; Huang et al., 2020).
Zhang et al. (2019a) proposed the depth-scaled initialization to prevent the vanishing gradient problem in Transformers. Zhang et al. (2019b) proposed the fixed-update initialization to remove normalizations in neural networks. Inspired by these studies, Huang et al. (2020) proposed T-Fixup, which enables both warm-up and layer normalizations to be removed from Transformers. In addition to the initialization scheme, Wang et al. (2022) introduced weights into residual connections before layer normalizations, following Liu et al. (2020).
## 8 Conclusion
In this study, we addressed the stability of training Post-LN Transformers. Through theoretical and empirical analyses, we indicated that layer normalizations cause the unstable training when many layers are stacked. In addition, we investigated the reason for the different performance of Pre-LN and Post-LN by transformations of each layer. We introduced B2T connection to prevent the vanishing gradient while preserving the advantage of PostLN. We conducted experiments on various tasks.
The experimental results led to the following three findings; 1, Post-LN achieved better performance than Pre-LN if its training succeeded. 2, Our modification enabled the training of deep Transformers (e.g., those with ten or more layers). 3, Our modification preserved the benefit of Post-LN, and therefore outperformed Pre-LN.
## Limitations
In this paper, we indicated that the vanishing gradient problem, caused by layer normalizations, makes the training of deep Post-LN Transformers unstable. We proposed the B2T connection to mitigate this vanishing gradient problem. However, the proposed B2T connection does not perfectly prevent the vanishing gradient, as shown in Figure 3. Therefore, the vanishing gradient might harm the training in extremely deep Transformers even if our B2T connection is used.
In addition, this study depends on empirical observations. In particular, we provided little theoretical justification of the reason for Post-LN outperforming Pre-LN when training succeeds. However, as discussed in Appendix C, the method with a theoretical justification often collapses in some situations. Because the behavior of deep Transformers in various situations is not fully understood, we believe that it is important to provide empirical findings for our research field to progress.
Although Appendix C includes a comparison between our B2T connection and the latest method, DeepNet (Wang et al., 2022), we could not investigate the behavior of all methods in the 100L-100L
configuration because of our limited computational budgets. However, we are confident that we conducted sufficient experiments to verify our contributions.
## Ethics Statement
The proposed method helps to construct deep Transformers. As discussed in Strubell et al. (2019) and Schwartz et al. (2019), such deep neural networks consume substantial amounts of energy. In fact, as discussed in Appendix A.2, we spent a large amount of computational resources on our experiments. Therefore, we also need to explore methods of improving energy efficiency while maintaining the good performance achieved by stacking many layers.
With respect to ethical considerations, the datasets used in our experiments are publicly available. LibriSpeech (Panayotov et al., 2015) is derived from audiobooks. The other datasets are mainly constructed from newswire texts and Wikipedia. Thus, in our understanding, our used datasets do not contain any personally identifiable information or offensive contents.
## Acknowledgements
We thank the anonymous reviewers for their useful suggestions. A part of this work was supported by JSPS KAKENHI Grant Number JP21K17800 and JST ACT-X Grant Number JPMJAX200I. The work of Jun Suzuki was partly supported by JST
Moonshot R&D Grant Number JPMJMS2011 (fundamental research). We thank Edanz for editing a draft of this manuscript.
## References
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization.
Alexei Baevski and Michael Auli. 2019. Adaptive input representations for neural language modeling. In Proceedings of the 7th International Conference on Learning Representations (ICLR).
Ankur Bapna, Mia Chen, Orhan Firat, Yuan Cao, and Yonghui Wu. 2018. Training deeper neural machine translation models with transparent attention. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 3028–3033.
Loïc Barrault, Ondˇrej Bojar, Marta R. Costa-jussà, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias Müller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019.
Findings of the 2019 conference on machine translation (WMT19). In *Proceedings of the Fourth Conference on Machine Translation (WMT)*, pages 1–61.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. In Advances in Neural Information Processing Systems 33 (NeurIPS), pages 1877–1901.
Zi-Yi Dou, Zhaopeng Tu, Xing Wang, Shuming Shi, and Tong Zhang. 2018. Exploiting deep representations for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4253–4262.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016a. Deep residual learning for image recognition. In *2016 IEEE Conference on Computer Vision* and Pattern Recognition (CVPR), pages 770–778.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016b. Identity mappings in deep residual networks. In *14th European Conference on Computer* Vision, pages 630–645.
Ruining He, Anirudh Ravula, Bhargav Kanagal, and Joshua Ainslie. 2021. RealFormer: Transformer likes residual attention. In *Findings of the Association* for Computational Linguistics: ACL-IJCNLP 2021, pages 929–943.
Xiao Shi Huang, Felipe Perez, Jimmy Ba, and Maksims Volkovs. 2020. Improving transformer optimization through better initialization. In Proceedings of the 37th International Conference on Machine Learning, volume 119, pages 4475–4483.
Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning
(ICML), volume 37, pages 448–456.
Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. OpenNMT: Opensource toolkit for neural machine translation. In *Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL)*, pages 67–72.
Taku Kudo and John Richardson. 2018. SentencePiece:
A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 66–71.
Liyuan Liu, Xiaodong Liu, Jianfeng Gao, Weizhu Chen, and Jiawei Han. 2020. Understanding the difficulty of training transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5747–5763.
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer Sentinel Mixture Models. In *Proceedings of the 5th International Conference on Learning Representations (ICLR)*.
Courtney Napoles, Matthew Gormley, and Benjamin Van Durme. 2012. Annotated Gigaword. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction (AKBC-WEKEX), pages 95–100.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)*, pages 48–53.
Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation (WMT), pages 1–9.
Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: An asr corpus based on public domain audio books. In 2015 IEEE
International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5206–5210.
Matt Post. 2018. A call for clarity in reporting BLEU
scores. In Proceedings of the Third Conference on Machine Translation (WMT), pages 186–191.
Alexander M. Rush, Sumit Chopra, and Jason Weston.
2015. A Neural Attention Model for Abstractive Sentence Summarization. In *Proceedings of the 2015* Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 379–389.
Roy Schwartz, Jesse Dodge, Noah A. Smith, and Oren Etzioni. 2019. Green AI. *CoRR*, abs/1907.10597.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016. Neural machine translation of rare words with subword units. In *Proceedings of the 54th Annual* Meeting of the Association for Computational Linguistics (ACL), pages 1715–1725.
Rupesh K Srivastava, Klaus Greff, and Jürgen Schmidhuber. 2015. Training very deep networks. In Advances in Neural Information Processing Systems 28
(NIPS), pages 2377—-2385.
Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. In *Proceedings of the 57th* Annual Meeting of the Association for Computational Linguistics (ACL), pages 3645–3650.
Sho Takase and Shun Kiyono. 2021. Rethinking perturbations in encoder-decoders for fast training. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
(NAACL-HLT), pages 5767–5780.
Sho Takase and Naoaki Okazaki. 2019. Positional encoding to control output sequence length. In *Proceedings of the 2019 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL),
pages 3999–4004.
Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan Gomez, Stephan Gouws, Llion Jones, Łukasz Kaiser, Nal Kalchbrenner, Niki Parmar, Ryan Sepassi, Noam Shazeer, and Jakob Uszkoreit.
2018. Tensor2Tensor for neural machine translation.
In *Proceedings of the 13th Conference of the Association for Machine Translation in the Americas*, pages 193–199.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems 30 (NIPS)*, pages 5998–6008.
Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, and Juan Pino. 2020. Fairseq S2T: Fast speech-to-text modeling with fairseq. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing (AACL-IJCNLP),
pages 33–39.
Hongyu Wang, Shuming Ma, Li Dong, Shaohan Huang, Dongdong Zhang, and Furu Wei. 2022. Deepnet:
Scaling transformers to 1,000 layers.
Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F. Wong, and Lidia S. Chao.
2019. Learning deep transformer models for machine translation. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*
(ACL), pages 1810–1822.
Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, and Tie-Yan Liu. 2020. On layer normalization in the transformer architecture. In *Proceedings of the 37th International Conference on* Machine Learning (ICML), pages 10524–10533.
Jingjing Xu, Xu Sun, Zhiyuan Zhang, Guangxiang Zhao, and Junyang Lin. 2019. Understanding and improving layer normalization. In *Advances in Neural Information Processing Systems (NeurIPS)*, volume 32.
Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. In Advances in Neural Information Processing Systems 32 (NeurIPS), pages 9054–9065.
Biao Zhang, Ivan Titov, and Rico Sennrich. 2019a. Improving deep transformer with depth-scaled initialization and merged attention. In *Proceedings of* the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 898–909.
Hongyi Zhang, Yann N. Dauphin, and Tengyu Ma.
2019b. Fixup initialization: Residual learning without normalization. In *Proceedings of the 7th International Conference on Learning Representations*
(ICLR).
## A Details Of Experimental Settings A.1 Hyper-Parameters
As described in Section 6, our hyper-parameters follow those used in previous studies. Table 6 shows hyper-parameters used for each experiment.
For fair comparisons, we used the same hyperparameters for all methods except T-Fixup. For T-Fixup, we used hyper-parameters reported in Huang et al. (2020) to prevent divergence.
## A.2 Computational Resources
We mainly used NVIDIA Tesla P100 GPUs for most of our experiments. Table 7 shows the number of GPUs and the computational time used to construct one model in our experiments. For the 100L-100L configuration, described in Section 6.1, we used 24 Tesla V100 GPUs and spent approximately 120 hours to train one model.
## B Supplementary Of Gradient Norms Of Each Location
For gradient norms of each part in a layer, we check 1st and 9th decoders in addition to the 18th decoder for the 18L-18L Post-LN Transformer encoderdecoder as shown in Figure 4. Figure 7 shows the gradient norms of each part. This figure shows that the gradient norms decrease drastically through layer normalizations in the same manner as they do in the 18th decoder (Figure 4). Therefore, the vanishing gradient problem in Post-LN Transformers is probably caused by layer normalizations.
## C **Details Of The 100L-100L Configuration** C.1 Regularizations During The Training
As reported in Section 1, we constructed 100L100L Transformers with widely-used WMT
English-to-German dataset. In the preliminary experiments, we found that regularization is the key to preventing overfitting and achieving high performance in this situation. Figure 8 shows the NLL
values of Pre-LN and B2T connection on validation data in the 36L-36L configuration when we used the same hyper-parameters as those used in 6L-6L and 18L-18L configurations. As this figure shows, the NLL values began to increase from the middle of training, and thus the overfitting occurred. In addition, the use of the same hyper-parameters as 6L-6L and 18L-18L makes it difficult to improve the performance of deeper configurations. Figure 9 shows the best NLL values on validation data when we varied the number of layers: 6L-6L, 12L-12L,
18L-18L, 36L-36L, and 50L-50L11. This figure indicates that adding more layers to the 18L-18L
configuration did not improve the performance.
To prevent overfitting during the training of 100L-100L Transformers, we increased the dropout rate from 0.3 to 0.5. In addition, we used word dropout, as described in Takase and Kiyono (2021).
We set the word dropout rate to 0.1 for the encoder and decoder. We multiplied the initial parameter values, except those for embeddings, by 0.1. We set the gradient clipping to 0.1. Finally, we decreased the number of updates from 50K to 25K. These regularization techniques prevented overfitting and achieved better performance than 18L-18L, as described in Section 6.1.
## C.2 Comparison With Deepnet
As described in Section 7, various studies have attempted to stabilize the training of deep Transformers. Each study indicated the effectiveness of their proposed method empirically, and some have provided theoretical justifications. However, Wang et al. (2022) demonstrated that the training of previous methods except DeepNet failed in a much deeper configuration than normally used, i.e.,
100L-100L. Then, can we conclude that DeepNet is a silver bullet for deep Transformers? It is difficult to reach this conclusion because the training of DeepNet also fails in some configurations. For example, when we train deep Transformers, we might decrease the batch size because the trainable parameters occupy most of the GPU memories. When we tried this, the NLL value of DeepNet on validation data diverged, as shown in Figure 10. In other words, the training of DeepNet failed. In contrast, the training of our B2T connection succeeded in this situation. This result implies that there are problems in the training of deep Transformers that have not been solved in previous studies. Therefore, we believe that we should continue to add the empirical findings about new techniques, including B2T connection, to those of previous studies.
## D B2T Connection Without Layer Normalization
In addition to B2T connection, we also consider a further modification to prevent the vanishing gra-11The horizontal axis of Figure 9 represents the total number of layers, which are divided equally between the encoder and decoder. For example, 100 on the horizontal axis represents 50L-50L Transformers.
| Params | Machine Translation | Abstractive Summarization | Language Model | ASR |
|------------------|-----------------------|-----------------------------|------------------|--------------|
| Hidden dim size | 512 | 512 | 1024 | 512 |
| FFN dim size | 2048 | 2048 | 4096 | 2048 |
| Attention heads | 8 | 8 | 8 | 8 |
| Learning rate | 0.001 | 0.001 | 0.001 | 0.001 |
| Scheduler | inverse sqrt | inverse sqrt | inverse sqrt | inverse sqrt |
| Adam β | (0.9, 0.98) | (0.9, 0.98) | (0.9, 0.98) | (0.9, 0.98) |
| Warmup updates | 4K | 4K | 2K | 4K |
| Max updates | 50K | 50K | 50K | 150K |
| Max tokens / GPU | 3584 | 3584 | 1024 | 40K |
| Machine Translation | Abstractive Summarization | Language Model | ASR | | | | | |
|-----------------------|-----------------------------|------------------|---------|-----|-----|-------|--------|----|
| 6L-6L | 18L-18L | 6L-6L | 18L-18L | 6L | 16L | 6L-6L | 12L-6L | |
| #GPU | 128 | 128 | 64 | 144 | 128 | 192 | 32 | 32 |
| Time (hour) | 5 | 13 | 4 | 17 | 4 | 7 | 22 | 34 |
Table 7: The number of GPUs and computational time used to construct one model in our experiments.
![13_image_0.png](13_image_0.png)
![13_image_2.png](13_image_2.png)
![13_image_1.png](13_image_1.png)
dient problem. Because layer normalizations decrease gradients drastically, as described in Section 3, removing layer normalizations may provide stable gradients during back-propagation. However,
![14_image_0.png](14_image_0.png)
the values in the forward pass increase exponentially if layer normalizations are removed. Therefore, we introduce weights that prevent the explosive increase in the forward pass while mitigating the decreasing gradients in back-propagation, as an alternative to the layer normalization. To use this alternative, we replace Equation (5) with the following equation:
$$\alpha x_{i n p}+\beta\left(x_{f f n}+\mathrm{FFN}(x_{f f n})\right).\qquad(9)$$
Through several experiments12, we found that the following values of α and β are suitable:
$$\alpha=\operatorname*{min}\left({\frac{N}{12}},N^{-0.15}\right),\eqno(10)$$ $$\beta=d^{-0.2},\eqno(11)$$
where N is the number of layers and d is the dimension of the input vectors xinp. For example, N
is set to 12 and 6 in the encoder and decoder, respectively, in the 12L-6L configuration. Therefore, as the number of layers increases, the value of α increases while N remains small (until N = 9), and then α starts to decrease. In short, α prevents an explosive increase in the forward pass when we stack many layers. β decreases as the dimension d increases, and thus it prevents an explosive increase when a large dimension is used. By using Equation (9), we can remove all layer normalizations in internal layers. This solves the vanishing gradient problem caused by layer normalizations.
Tables 8 and 9 shows the results of B2T connection without layer normalizations ("w/o LN") on the machine translation and summarization tasks.
These results indicate that B2T connection without layer normalizations achieved scores comparable to those of B2T connection with layer normalizations. However, because the results of B2T connection without layer normalizations are slightly worse than those with layer normalizations, we recommend the use of B2T connection with layer normalizations.
| Method | 2010 | 2011 | 2012 | 2013 | 2014 | 2015 | 2016 | Average |
|------------------|--------|--------|--------|--------|--------|--------|--------|-----------|
| Enc-Dec: 6L-6L | | | | | | | | |
| B2T connection | 24.12 | 21.93 | 22.29 | 26.31 | 26.84 | 29.48 | 34.73 | 26.53 |
| + w/o LN | 24.17 | 22.07 | 22.24 | 25.83 | 26.96 | 29.70 | 34.42 | 26.48 |
| Enc-Dec: 18L-18L | | | | | | | | |
| B2T connection | 24.75 | 22.88 | 23.09 | 27.12 | 28.82 | 30.99 | 33.64 | 27.33 |
| + w/o LN | 24.47 | 22.37 | 22.58 | 27.04 | 28.34 | 30.49 | 34.38 | 27.10 |
Table 8: BLEU scores of our modifications on WMT newstest2010-2016 and their averages.
| Method | R-1 | R-2 | R-L |
|------------------|-------|-------|-------|
| Enc-Dec: 6L-6L | | | |
| B2T connection | 38.43 | 19.37 | 35.72 |
| + w/o LN | 38.63 | 19.75 | 35.77 |
| Enc-Dec: 18L-18L | | | |
| B2T connection | 39.61 | 20.28 | 36.66 |
| + w/o LN | 39.29 | 20.01 | 36.48 |
Table 9: F1 based ROUGE scores of our modifications on headline generation.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations section, after Conclusion section
✓ A2. Did you discuss any potential risks of your work?
Ethics Statement section, after Conclusion section
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Sections 1 and 8
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 5
✓ B1. Did you cite the creators of artifacts you used?
Sections 6 and 7
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Ethics Statement section B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Ethics Statement section
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 6
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 6
## C ✓ **Did You Run Computational Experiments?** Section 6 And Appendices A, B, C, And D
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendices A and C
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix A
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Not applicable. Left blank.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 6 and Appendices A
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
litschko-etal-2023-boosting | Boosting Zero-shot Cross-lingual Retrieval by Training on Artificially Code-Switched Data | https://aclanthology.org/2023.findings-acl.193 | Transferring information retrieval (IR) models from a high-resource language (typically English) to other languages in a zero-shot fashion has become a widely adopted approach. In this work, we show that the effectiveness of zero-shot rankers diminishes when queries and documents are present in different languages. Motivated by this, we propose to train ranking models on artificially code-switched data instead, which we generate by utilizing bilingual lexicons. To this end, we experiment with lexicons induced from (1) cross-lingual word embeddings and (2) parallel Wikipedia page titles. We use the mMARCO dataset to extensively evaluate reranking models on 36 language pairs spanning Monolingual IR (MoIR), Cross-lingual IR (CLIR), and Multilingual IR (MLIR). Our results show that code-switching can yield consistent and substantial gains of 5.1 MRR@10 in CLIR and 3.9 MRR@10 in MLIR, while maintaining stable performance in MoIR. Encouragingly, the gains are especially pronounced for distant languages (up to 2x absolute gain). We further show that our approach is robust towards the ratio of code-switched tokens and also extends to unseen languages. Our results demonstrate that training on code-switched data is a cheap and effective way of generalizing zero-shot rankers for cross-lingual and multilingual retrieval. | # Boosting Zero-Shot Cross-Lingual Retrieval By Training On Artificially Code-Switched Data
Robert Litschko Ekaterina Artemova Barbara Plank MaiNLP, Center for Information and Language Processing (CIS), LMU Munich, Germany
{robert.litschko, ekaterina.artemova, b.plank}@lmu.de
## Abstract
Transferring information retrieval (IR) models from a high-resource language (typically English) to other languages in a zero-shot fashion has become a widely adopted approach. In this work, we show that the effectiveness of zero-shot rankers diminishes when queries and documents are present in different languages.
Motivated by this, we propose to train ranking models on artificially code-switched data instead, which we generate by utilizing bilingual lexicons. To this end, we experiment with lexicons induced from (1) cross-lingual word embeddings and (2) parallel Wikipedia page titles. We use the mMARCO dataset to extensively evaluate reranking models on 36 language pairs spanning Monolingual IR (MoIR),
Cross-lingual IR (CLIR), and Multilingual IR
(MLIR). Our results show that code-switching can yield consistent and substantial gains of 5.1 MRR@10 in CLIR and 3.9 MRR@10 in MLIR, while maintaining stable performance in MoIR. Encouragingly, the gains are especially pronounced for distant languages (up to 2x absolute gain). We further show that our approach is robust towards the ratio of codeswitched tokens and also extends to unseen languages. Our results demonstrate that training on code-switched data is a cheap and effective way of generalizing zero-shot rankers for crosslingual and multilingual retrieval.
## 1 Introduction
Cross-lingual Information Retrieval (CLIR) is the task of retrieving relevant documents written in a language different from a query language. The large number of languages and limited amounts of training data pose a serious challenge for training ranking models. Previous work address this issue by using machine translation (MT), effectively casting CLIR into a noisy variant of monolingual retrieval (Li and Cheng, 2018; Shi et al., 2020, 2021; Moraes et al., 2021). MT systems are used to either train ranking models on translated training data (*translate train*), or by translating queries into the document language at retrieval time (*translate test*). However, CLIR approaches relying on MT systems are limited by their language coverage.
Because training MT models is bounded by the availability of parallel data, it does not scale well to a large number of languages. Furthermore, using MT for IR has been shown to be prone to propagation of unwanted translation artifacts such as topic shifts, repetition, hallucinations and lexical ambiguity (Artetxe et al., 2020; Litschko et al., 2022a; Li et al., 2022). In this work, we propose a resourcelean MT alternative to bridge the language gap and propose to use *artificially code-switched* data.
We focus on zero-shot cross-encoder (CE) models for reranking (MacAvaney et al., 2020; Jiang et al., 2020). Our study is motivated by the observation that the performance of CEs diminishes when they are transferred into CLIR and MLIR
as opposed to MoIR. We hypothesize that training on queries and documents from the same language leads to *monolingual overfitting* where the ranker learns features, such as exact keyword matches, which are useful in MoIR but do not transfer well to CLIR and MLIR setups due to the lack of lexical overlap (Litschko et al., 2022b). In fact, as shown by Roy et al. (2020) on bi-encoders, representations from zero-shot models are weakly aligned between languages, where models prefer non-relevant documents in the same language over relevant documents in a different language. To address this problem, we propose to use code-switching as an inductive bias to regularize monolingual overfitting in CEs.
Generation of synthetic code-switched data has served as a way to augment data in cross-lingual setups in a number of NLP tasks (Singh et al., 2019; Einolghozati et al., 2021; Tan and Joty, 2021). They utilize substitution techniques ranging from simplistic re-writing in the target script (Gautam et al.,
2021), looking up bilingual lexicons (Tan and Joty, 3096 2021) to MT (Tarunesh et al., 2021). Previous work on improving zero-shot transfer for IR includes weak supervision (Shi et al., 2021), tuning the pivot language (Turc et al., 2021), multilingual query expansion (Blloshmi et al., 2021) and crosslingual pre-training (Yang et al., 2020; Yu et al.,
2021; Yang et al., 2022; Lee et al., 2023). To this end, code-switching is complementary to existing approaches. Our work is most similar to Shi et al.
(2020), who use bilingual lexicons for full termby-term translation to improve MoIR. Concurrent to our work, Huang et al. (2023) show that codeswitching improves the retrieval performance on low-resource languages, however, their focus lies on CLIR with English documents. To the best of our knowledge, we are the first to systematically investigate (1) artificial code-switching to train CEs and (2) the interaction between MoIR, CLIR and MLIR.
Our contributions are as follows: (i) We show that training on artificially code-switched data improves zero-shot cross-lingual and multilingual rankers. (ii) We demonstrate its robustness towards the ratio of code-switched tokens and effectiveness in generalizing to unseen languages. (iii) We release our code and resources.1
## 2 Methodology
Reranking with Cross-Encoders. We follow the standard cross-encoder reranking approach (CE)
proposed by Nogueira and Cho (2019), which formulates relevance prediction as a sequence pair
(query-document pair) classification task. CEs are composed of an encoder model and a relevance prediction model. The encoder is a pre-trained language model (Devlin et al., 2019) that transforms the concatenated input [CLS] Q [SEP] D [SEP]
into a joint query-document feature representation, from which the classification head predicts relevance. Finally, documents are reranked according to their predicted relevance. We argue that fine-tuning CEs on monolingual data biases the encoder towards encoding features that are only useful when the target setup is MoIR. To mitigate this bias, we propose to perturb the training data with code-switching, as described next.
Artificial Code-Switching. While previous work has studied code-switching (CS) as a natural phenomenon where speakers borrow words from other languages (e.g. anglicism) (Ganguly et al., 2016; Wang and Komlodi, 2018), we here refer to codeswitching as a method to *artificially* modify monolingual training data. In the following we assume availability of English (EN–EN) training data. The goal is to improve the zero-shot transfer of ranking models into cross-lingual language pairs X–Y by training on code-switched data ENX–ENY instead, which we obtain by exploiting bilingual lexicons similar to Tan and Joty (2021). We now describe two CS approaches based on lexicons: one derived from word embeddings and one from Wikipedia page titles (cf. Appendix A for examples).
Code-Switching with Word Embeddings. We rely on bilingual dictionaries D induced from crosslingual word embeddings (Mikolov et al., 2013; Heyman et al., 2017) and compute for each EN
term its nearest (cosine) cross-lingual neighbor. In order to generate ENX–ENY we then use DEN✮X
and DEN✮Y to code-switch query and document terms from EN into the languages X and Y, each with probability p. This approach, dubbed Bilingual CS (**BL-CS**), allows a ranker to learn interlingual semantics between EN, X and Y. In our second approach, Multilingual CS (**ML-CS**), we additionally sample for each term a different target language into which it gets translated; we refer to the pool of available languages as seen languages.
Code-Switching with Wikipedia Titles. Our third approach, **Wiki-CS**, follows (Lan et al., 2020; Fetahu et al., 2021) and uses bilingual lexicons derived from parallel Wikipedia page titles obtained from inter-language links. We first extract word n-grams from queries and documents with different sliding window of sizes n P t1, 2, 3u. Longer n-gram are favored over shorter ones in order to account for multi-term expressions, which are commonly observed in named entities. In Wiki CS we create a single multilingual dataset where queries and documents from different training instances are code-switched into different languages.
## 3 Experimental Setup
Models and Dictionaries. We follow Bonifacio et al. (2021) and initialize rankers with the multilingual encoder mMiniLM provided by Reimers and Gurevych (2020). We report hyperparameters in Appendix C. For BL-CS and ML-CS we use multilingual MUSE embeddings2to induce bilingual 1https://github.com/MaiNLP/CodeSwitchCLIR
2https://github.com/facebookresearch/MUSE
| EN–EN | DE–DE | RU–RU | AR–AR | NL–NL | IT–IT | AVG | ∆ZS | |
|-------------------------|---------|---------|---------|---------|---------|-------|-------|------|
| Zero-shot | 35.0 | 25.9 | 23.8 | 23.9 | 27.2 | 26.9 | 25.5 | - |
| Fine-tuning | 35.0 | 30.3* | 28.5* | 27.2* | 30.8* | 30.9* | 29.5 | +4.0 |
| Zero-shotTranslate Test | - | 22.5* | 18.2* | 17.7* | 24.7* | 23.3* | 21.3 | -4.2 |
| ML-CSTranslate Test | - | 22.8* | 18.6* | 17.7* | 24.7* | 24.5* | 21.7 | -3.8 |
| BL-CS | - | 26.0 | 25.5 | 23.0 | 27.5 | 27.2 | 25.8 | +0.3 |
| ML-CS | 34.0 | 25.9 | 24.7 | 21.3 | 27.2 | 26.9 | 25.2 | -0.3 |
| Wiki-CS | 33.8* | 25.6 | 24.1 | 20.5* | 27.0 | 25.5* | 24.5 | -1.0 |
| EN–DE | EN–IT | EN–AR | EN–RU | DE–IT | DE–NL | DE–RU | AR–IT | AR–RU | AVG | ∆ZS | |
|-------------------------|---------|---------|---------|---------|---------|---------|---------|---------|-------|-------|-------|
| Zero-shot | 24.0 | 23.0 | 14.0 | 18.3 | 15.0 | 19.7 | 12.9 | 7.7 | 7.1 | 15.7 | - |
| Fine-tuning | 29.7* | 30.5* | 26.5* | 28.0* | 26.9* | 27.9* | 25.5* | 23.9* | 22.7* | 26.8 | +11.1 |
| Zero-shotTranslate Test | 22.8 | 23.2 | 16.4 | 17.0 | 15.8 | 17.5 | 11.8 | 9.8 | 8.7 | 15.9 | +0.2 |
| ML-CSTranslate Test | 24.9 | 24.6 | 17.9* | 19.5 | 17.6 | 19.3* | 14.3 | 12.2* | 10.6* | 17.9 | +2.2 |
| BL-CS | 26.9* | 27.3* | 19.3* | 22.8* | 20.4* | 22.8* | 17.8* | 15.6* | 14.1* | 20.8 | +5.1 |
| ML-CS | 26.5* | 26.4* | 18.1* | 22.1* | 19.8* | 22.8* | 17.8* | 15.3* | 14.2* | 20.3 | +4.6 |
| Wiki-CS | 26.2* | 26.4* | 19.4* | 22.9* | 19.4* | 22.4* | 18.3* | 14.4* | 14.1* | 20.4 | +4.7 |
Table 2: CLIR: Cross-lingual results on mMARCO in terms of MRR@10.
| Seen Languages | All Languages | | | | | | | | | |
|------------------|-----------------|-------|---------|-------|------|-------|-------|--------|------|------|
| X–EN | EN–X | X–X | AVGseen | ∆seen | X–EN | EN–X | X–X | AVGall | ∆all | |
| Zero-shot | 19.0 | 23.5 | 16.3 | 19.6 | - | 16.5 | 20.8 | 12.9 | 16.6 | - |
| Fine-tuning | 24.8* | 26.4* | 21.1* | 24.1 | +4.5 | 26.5* | 26.5* | 21.9* | 25.0 | +8.3 |
| ML-CS | 24.2* | 25.9* | 21.1* | 23.7 | +4.1 | 21.6* | 23.2* | 17.0* | 20.6 | +3.9 |
| Wiki-CS | 23.6* | 26.0* | 20.6* | 23.4 | +3.8 | 21.3* | 23.8* | 17.1* | 20.7 | +4.0 |
lexicons (Lample et al., 2018), which have been aligned with initial seed dictionaries of 5k word translation pairs. We set the translation probability p " 0.5. For Wiki-CS, we use the lexicons provided by the linguatools project.3 Baselines. To compare whether training on CS'ed data ENX–ENY improves the transfer into CLIR setups, we include the zero-shot ranker trained on EN–EN as our main baseline (henceforth, Zero-shot). Our upper-bound reference, dubbed Fine-tuning, refers to ranking models that are directly trained on the target language pair X–Y, i.e. no zero-shot transfer. Following Roy et al. (2020), we adopt the *Translate Test* baseline and translate any test data into EN using using our bilingual lexicons induced from word embeddings.
On this data we evaluate both the Zero-shot baseline (Zero-shotTranslate Test) and our ML-CS model
(ML-CSTranslate Test).
Datasets and Evaluation. We use use the publicly available multilingual mMARCO data set (Bonifacio et al., 2021), which includes fourteen different languages. We group those into six seen languages (EN, DE, RU, AR, NL, IT) and eight unseen languages (HI, ID, IT, JP, PT, ES, VT,
FR) and construct a total of 36 language pairs.4 Out of those, we construct setups where we have documents in different languages (EN–X), queries in different languages (X–EN), and both in different languages (X–X). Specifically, for each document ID (query ID) we sample the content from one of the available languages. For evaluation, we use the official evaluation metric [email protected] All models re-rank the top 1,000 passages provided for the passage re-ranking task. We report all results as averages over three random seeds.
## 4 Results And Discussion
![3_Image_0.Png](3_Image_0.Png)
We observe that code-switching improves crosslingual and multilingual re-ranking, while not impeding monolingual setups, as shown next.
Transfer into MoIR vs. CLIR. We first quantify the performance drop when transferring models trained on EN–EN to MoIR as opposed to CLIR
and MLIR. Comparing Zero-shot results between different settings we find that the average MoIR
performance of 25.5 MRR@10 (Table 1) is substantially higher than CLIR with 15.7 MRR@10
(Table 2) and MLIR with 16.6 MRR@10 (Table 3).
The transfer performance greatly varies with the language proximity, in CLIR the drop is larger for setups involving typologically distant languages
(AR–IT, AR–RU), to a lesser extent the same observation holds for MoIR (AR–AR, RU–RU). This is consistent with previous findings made in other syntactic and semantic NLP tasks (He et al., 2019; Lauscher et al., 2020). The performance gap to Fine-tuning on translated data is much smaller in MoIR (+4 MRR@10) than in CLIR (+11.1 MRR@10) and MLIR (+8.3 MRR@10). Our aim to is close this gap between zero-shot and full finetuning in a resource-lean way by training on codeswitched queries and documents.
Code-Switching Results. Training on codeswitched data consistently outperforms zero-shot models in CLIR and MLIR (Table 2 and Table 3). In AR–IT and AR–RU we see improvements from 7.7 and 7.1 MRR@10 up to 15.6 and 14.1 MRR@10, rendering our approach particularly effective for distant languages. Encouragingly, Table 1 shows that the differences between both of our CS approaches (BL-CS and ML-CS) versus Zero-shot is not statistically significant, showing that gains can be obtained without impairing MoIR
performance. Table 2 shows that specializing one zero-shot model for multiple CLIR language pairs
(ML-CS, Wiki-CS) performs almost on par with specializing one model for each language pair (BL-CS).
The results of Wiki-CS are slightly worse in MoIR
and on par with ML-CS on MLIR and CLIR.
Translate Test vs. Code-Switch Train. In MoIR (Table 1) both Zero-shotTranslate Test and ML-CSTranslate Test underperform compared to other approaches. This shows that zero-shot rankers work better on clean monolingual data in the target language than noisy monolingual data in English.
In CLIR, where *Translate Test* bridges the language gap between X and Y, we observe slight improvements of +0.2 and +2.2 MRR@10 (Table 2). However, in both MoIR and CLIR *Translate Test* consistently falls behind code-switching at training time.
Multilingual Retrieval and Unseen Languages.
Here we compare how code-switching fares against Zero-shot on languages to which neither model has been exposed to at training time. Table 3 shows the gains remain virtually unchanged when moving from six seen (+4.1 MRR@10 / +3.8 MRR@10)
to fourteen languages including eight unseen languages (+3.9 MRR@10 / +4.0 MRR@10). Results in Appendix B confirm that this holds for unseen languages on the query, document and both sides, suggesting that the best pivot language for zeroshot transfer (Turc et al., 2021) may not be monolingual but a code-switched language. On seen languages ML-CS is close to MT (Fine-tuning).
Ablation: Translation Probability. The translation probability p allows us to control the ratio of code-switched tokens to original tokens, with p " 0.0 we default back to the Zero-shot base3099
| EN–X | X–EN | X–X | |
|------------------------------------------------------------------------|-------------|-------------|-------------|
| No Code Switching (Zero-Shot) No overlap 12.2 | 11.0 | 7.4 | |
| Some overlap | 29.7 | 22.4 | 19.9 |
| Significant overlap | 44.6 | 36.4 | 45.5 |
| All queries | 23.5 | 19.0 | 16.3 |
| Multilingual Code Switching (ML-CS) No overlap 15.5 (+3.3) 17.8 (+6.8) | 13.0 (+5.6) | | |
| Some overlap | 31.7 (+2.0) | 27.2 (+4.8) | 25.3 (+5.4) |
| Significant overlap | 44.7 (+0.2) | 37.8 (+1.4) | 45.1 (-0.5) |
| All queries | 25.9 (+2.4) | 24.2 (+5.3) | 21.1 (+4.8) |
line, with p " 1.0 we attempt to code-switch every token. 6 Figure 1 (top) shows that code-switching a smaller portion of tokens is already beneficial for the zero-shot transfer into CLIR. The gains are robust towards different values for p. The best results are achieved with p " 0.5 and p " 0.75 for BL-CS and ML-CS, respectively. Figure 1 (bottom)
shows that the absolute differences to Zero-shot are much smaller in MoIR.
Monolingual Overfitting. Exact matches between query and document keywords is a strong relevance signal in MoIR, but does not transfer well to CLIR and MLIR due to mismatching vocabularies. Training zero-shot rankers on monolingual data biases rankers towards learning features that cannot be exploited at test time. Code-Switching reduces this bias by replacing exact matches with translation pairs,7steering model training towards learning interlingual semantics instead. To investigate this, we group queries by their average token overlap with their relevant documents and evaluate each group separately on MLIR.8 The results are shown in Table 4. Unsurprisingly, rankers work best when there is significant overlap between query and document tokens. However, the performance gains resulting from training on code-switched data
(ML-CS) are most pronounced for queries with some token overlap (up to +5.4 MRR@10) and no token overlap (up to +6.8 MRR@10). On the other hand, the gains are much lower for queries with more than three overlapping tokens and range from -0.5 to +1.4 MRR@10. This supports our hypothesis that code-switching indeed regularizes monolingual overfitting.
## 5 Conclusion
We propose a simple and effective method to improve zero-shot rankers: training on artificially code-switched data. We empirically test our approach on 36 language pairs, spanning monolingual, cross-lingual, and multilingual setups. Our method outperforms zero-shot models trained only monolingually and provides a resource-lean alternative to MT for CLIR. In MLIR our approach can match MT performance while relying only on bilingual dictionaries. To the best of our knowledge, this work is the first to propose artificial code-switched training data for cross-lingual and multilingual IR.
## Limitations
This paper does not utilize any major linguistic theories of code-switching, such as (Belazi et al., 1994; Myers-Scotton, 1997; Poplack, 2013). Our approach to generating code-switched texts replaces words with their synonyms in target languages, looked up in a bilingual lexicon. Furthermore, we do not make any special efforts to resolve word sense or part-of-speech ambiguity. To this end, the resulting sentences may appear implausible and incoherent.
## Acknowledgements
We thank the members of the MaiNLP research group as well as the anonymous reviewers for their feedback on earlier drafts of this paper. This research is in parts supported by European Research Council (ERC) Consolidator Grant DIALECT 101043235.
## References
Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2020.
Translation artifacts in cross-lingual transfer learning.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing (EMNLP),
pages 7674–7684, Online. Association for Computational Linguistics.
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, et al. 2016. Ms marco: A human generated machine reading comprehension dataset. *arXiv preprint* arXiv:1611.09268.
Hedi M Belazi, Edward J Rubin, and Almeida Jacqueline Toribio. 1994. Code Switching and X-bar Theory: The Functional Head Constraint. Linguistic inquiry, pages 221–237.
Rexhina Blloshmi, Tommaso Pasini, Niccolò Campolungo, Somnath Banerjee, Roberto Navigli, and Gabriella Pasi. 2021. IR like a SIR: Sense-enhanced Information Retrieval for Multiple Languages. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1030–1041, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Luiz Bonifacio, Vitor Jeronymo, Hugo Queiroz Abonizio, Israel Campiotti, Marzieh Fadaee, Roberto Lotufo, and Rodrigo Nogueira. 2021. mmarco: A
multilingual version of the ms marco passage ranking dataset. *arXiv preprint arXiv:2108.13897*.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Arash Einolghozati, Abhinav Arora, Lorena SainzMaza Lecanda, Anuj Kumar, and Sonal Gupta. 2021.
El volumen louder por favor: Code-switching in taskoriented semantic parsing. In *Proceedings of the 16th* Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1009–1021, Online. Association for Computational Linguistics.
Besnik Fetahu, Anjie Fang, Oleg Rokhlenko, and Shervin Malmasi. 2021. Gazetteer enhanced named entity recognition for code-mixed web queries. In Proceedings of the 44th International ACM SIGIR
Conference on Research and Development in Information Retrieval, pages 1677–1681.
Debasis Ganguly, Ayan Bandyopadhyay, Mandar Mitra, and Gareth J. F. Jones. 2016. Retrievability of code mixed microblogs. In *Proceedings of the 39th International ACM SIGIR conference on Research and*
Development in Information Retrieval, SIGIR 2016, Pisa, Italy, July 17-21, 2016, pages 973–976. ACM.
Devansh Gautam, Prashant Kodali, Kshitij Gupta, Anmol Goel, Manish Shrivastava, and Ponnurangam Kumaraguru. 2021. CoMeT: Towards code-mixed translation using parallel monolingual sentences. In Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code-Switching, pages 47–
55, Online. Association for Computational Linguistics.
Junxian He, Zhisong Zhang, Taylor Berg-Kirkpatrick, and Graham Neubig. 2019. Cross-lingual syntactic transfer through unsupervised adaptation of invertible projections. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*,
pages 3211–3223, Florence, Italy. Association for Computational Linguistics.
Geert Heyman, Ivan Vulic, and Marie-Francine Moens. ´
2017. Bilingual lexicon induction by learning to combine word-level and character-level representations. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1085–1095, Valencia, Spain. Association for Computational Linguistics.
Zhiqi Huang, Puxuan Yu, and James Allan. 2023. Improving cross-lingual information retrieval on lowresource languages via optimal transport distillation.
In *Proceedings of the Sixteenth ACM International* Conference on Web Search and Data Mining, WSDM
'23, page 1048–1056, New York, NY, USA. Association for Computing Machinery.
Zhuolin Jiang, Amro El-Jaroudi, William Hartmann, Damianos Karakos, and Lingjun Zhao. 2020. Crosslingual information retrieval with BERT. In Proceedings of the workshop on Cross-Language Search and Summarization of Text and Speech (CLSSTS2020),
pages 26–31, Marseille, France. European Language Resources Association.
Taku Kudo and John Richardson. 2018. SentencePiece:
A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66–71, Brussels, Belgium.
Association for Computational Linguistics.
Guillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018.
Word translation without parallel data. In *International Conference on Learning Representations*.
Wuwei Lan, Yang Chen, Wei Xu, and Alan Ritter. 2020.
An empirical study of pre-trained transformers for Arabic information extraction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4727–4734, Online. Association for Computational Linguistics.
Anne Lauscher, Vinit Ravishankar, Ivan Vulic, and ´
Goran Glavaš. 2020. From zero to hero: On the limitations of zero-shot language transfer with multilingual Transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4483–4499, Online. Association for Computational Linguistics.
Jaeseong Lee, Dohyeon Lee, Jongho Kim, and Seungwon Hwang. 2023. C2lir: Continual cross-lingual transfer for low-resource information retrieval. In Advances in Information Retrieval: 45th European Conference on Information Retrieval, ECIR 2023, Dublin, Ireland, April 2–6, 2023, Proceedings, Part II, pages 466–474. Springer.
Bo Li and Ping Cheng. 2018. Learning neural representation for CLIR with adversarial framework.
In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 1861–1870, Brussels, Belgium. Association for Computational Linguistics.
Wing Yan Li, Julie Weeds, and David Weir. 2022.
MuSeCLIR: A multiple senses and cross-lingual information retrieval dataset. In *Proceedings of the* 29th International Conference on Computational Linguistics, pages 1128–1135, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Robert Litschko, Ivan Vulic, and Goran Glavaš. 2022a. ´
Parameter-efficient neural reranking for cross-lingual and multilingual retrieval. In Proceedings of the 29th International Conference on Computational Linguistics, pages 1071–1082, Gyeongju, Republic of Korea.
International Committee on Computational Linguistics.
Robert Litschko, Ivan Vulic, Simone Paolo Ponzetto, ´
and Goran Glavaš. 2022b. On cross-lingual retrieval with multilingual text encoders. *Information Retrieval Journal*, 25(2):149–183.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *International Conference on Learning Representations*.
Sean MacAvaney, Craig Macdonald, and Iadh Ounis.
2022. Streamlining evaluation with ir-measures.
In *European Conference on Information Retrieval*,
pages 305–310. Springer.
Sean MacAvaney, Luca Soldaini, and Nazli Goharian.
2020. Teaching a new dog old tricks: Resurrecting multilingual retrieval using zero-shot learning. In Advances in Information Retrieval, pages 246–254.
Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013.
Exploiting similarities among languages for machine translation. *arXiv preprint arXiv:1309.4168*.
Guilherme Moraes, Luiz Henrique Bonifácio, Leandro Rodrigues de Souza, Rodrigo Nogueira, and Roberto Lotufo. 2021. A cost-benefit analysis of cross-lingual transfer methods. *arXiv preprint arXiv:2105.06813*.
Carol Myers-Scotton. 1997. *Duelling languages: Grammatical Structure in Code-Switching*. Oxford University Press.
Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage re-ranking with bert. arXiv preprint arXiv:1901.04085.
Shana Poplack. 2013. "sometimes i'll start a sentence in spanish y termino en español": Toward a typology of code-switching. *Linguistics*, 51(s1):11–14.
Nils Reimers and Iryna Gurevych. 2020. Making monolingual sentence embeddings multilingual using knowledge distillation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4512–4525, Online. Association for Computational Linguistics.
Uma Roy, Noah Constant, Rami Al-Rfou, Aditya Barua, Aaron Phillips, and Yinfei Yang. 2020. LAReQA:
Language-agnostic answer retrieval from a multilingual pool. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing
(EMNLP), pages 5919–5930, Online. Association for Computational Linguistics.
Peng Shi, He Bai, and Jimmy Lin. 2020. Cross-lingual training of neural models for document ranking. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2768–2773, Online.
Association for Computational Linguistics.
Peng Shi, Rui Zhang, He Bai, and Jimmy Lin. 2021.
Cross-lingual training of dense retrievers for document retrieval. In *Proceedings of the 1st Workshop* on Multilingual Representation Learning, pages 251– 253, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Jasdeep Singh, Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2019. XLDA:
Cross-lingual Data Augmentation for Natural Language Inference and Question Answering. arXiv preprint arXiv:1905.11471.
Samson Tan and Shafiq Joty. 2021. Code-mixing on sesame street: Dawn of the adversarial polyglots. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3596–3616, Online. Association for Computational Linguistics.
Ishan Tarunesh, Syamantak Kumar, and Preethi Jyothi.
2021. From machine translation to code-switching:
Generating high-quality code-switched text. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th* International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3154–
3169, Online. Association for Computational Linguistics.
Iulia Turc, Kenton Lee, Jacob Eisenstein, Ming-Wei Chang, and Kristina Toutanova. 2021. Revisiting the primacy of english in zero-shot cross-lingual transfer. arXiv preprint arXiv:2106.16171.
Jieyu Wang and Anita Komlodi. 2018. Switching languages in online searching: A qualitative study of web users' code-switching search behaviors. In *Proceedings of the 2018 Conference on Human Information Interaction & Retrieval*, CHIIR '18, page 201–210, New York, NY, USA. Association for Computing Machinery.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Eugene Yang, Suraj Nair, Ramraj Chandradevan, Rebecca Iglesias-Flores, and Douglas W. Oard. 2022.
C3: Continued pretraining with contrastive weak supervision for cross language ad-hoc retrieval. In Proceedings of the 45th International ACM SIGIR
Conference on Research and Development in Information Retrieval, SIGIR '22, page 2507–2512, New York, NY, USA. Association for Computing Machinery.
Jian Yang, Shuming Ma, Dongdong Zhang, ShuangZhi Wu, Zhoujun Li, and Ming Zhou. 2020. Alternating language modeling for cross-lingual pre-training.
Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):9386–9393.
Puxuan Yu, Hongliang Fei, and Ping Li. 2021. Crosslingual language model pretraining for retrieval. In Proceedings of the Web Conference 2021, WWW '21, page 1029–1039, New York, NY, USA. Association for Computing Machinery.
## A Code-Switching Examples
| Approach | Query | Document |
|-------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Zero-Shot | What is an affinity credit card program? | Use your PayPal Plus credit card to deposit funds. If you have a PayPal Plus credit card, you are able to instantly transfer money from it to your account. This is a credit card offered by PayPal for which you must qualify. |
| Fine-tuning | Was ist ein Affinity-Kreditkartenprogramm? | Используйте свою кредитную карту PayPal Plus для внесения средств. Если у вас есть кредитная карта PayPal Plus, вы можете мгновенно переводить деньги с нее на свой счет. Это кредитная карта, предлагаемая PayPal, на которую вы должны претендовать. Использовать your PayPal плюс кредита билет попытаться депозиты funds. если you have a PayPal плюс |
| gramms? | credit билет, скажите are able to instantly переход денег from it попытаться ваши account. This is a credit билет offered by paypal for причём you может qualify. | |
| ML-CS | What is это affinità credit card program? | Use jouw PayPal Plus credit geheugenkaarten to depositi funds. @ X@ you хотя ein àAÒ J KB@ aggiunta credit card, you are попытаться quindi sofort transfer geld from questo úÍ@ deine account. Это является a кредита card offerto by A JÊË paypal voor which you devono Éë |
| Wiki-CS | What is an affinity Kreditkarte program? | Use your PayPal Plus carta di credito to deposit funds. If you have a PayPal Plus carta di credito, you are able to instantly transfer denaro from it to your account. This is a carta di credito offered by PayPal for which you mosto qualify. |
| BL-CS | Denn is einem affinity credit card pro | |
Table 5: Different Code-Switching strategies on a single training instance for the target language pair DE–RU (Query ID: 711253, Document ID: 867890, label: 0). **Zero-shot**: Train a single zero-shot ranker on the original EN–EN
MS MARCO instances (Bajaj et al., 2016). **Fine-tuning**: Fine-tune ranker directly on DE–RU, we use translations
(Google Translate) provided by the mMARCO dataset (Bonifacio et al., 2021). Bilingual Code-Switching (**BL-CS**):
Translate randomly selected EN query tokens into DE and randomly selected EN document tokens into RU, each token is translated with probability p " 0.5; Multilingual Code-Switching (**ML-CS**): Same as BL-CS but additionally sample for each token its target language uniformly at random. **Wiki-CS**: Translate n-grams extracted with a sliding window. Tokens within a single query/document are code-switched with a single language; across training instances languages are randomly mixed. We use the following "seen languages": English, German, Russian, Italian, Dutch, Arabic.
## B Results On Unseen Languages
| Unseen QL | Unseen DL | Unseen Both | | | | | | | | | |
|-----------------|-------------|---------------|-------|-------|-------|-------|-------|-------|-------|------|-------|
| FR–EN | ID–NL | EN–PT | DE–VT | IT–ZH | ES–FR | FR–PT | ID–VT | PT–ZH | AVG | ∆ZS | |
| Zero-shot | 18.3 | 13.7 | 23.2 | 10.9 | 9.4 | 19.0 | 18.7 | 11.8 | 9.6 | 15.0 | - |
| Fine-tuning | 30.0* | 27.2* | 30.8* | 24.8* | 25.0* | 29.0* | 29.0* | 25.8* | 25.4* | 27.4 | +12.2 |
| Multilingual CS | 21.4* | 18.3* | 25.9* | 15.5* | 14.8* | 22.7* | 21.9* | 16.4* | 14.7* | 19.1 | +4.1 |
| Wiki CS | 21.0* | 17.2* | 26.2* | 15.4* | 15.0* | 21.9* | 20.5* | 15.3* | 14.8* | 18.6 | +3.4 |
Table 6: CLIR results on unseen mMARCO languages in terms of MRR@10. **Bold**: Best zero-shot model for each language pair. ∆ZS: Absolute difference to the zero-shot baseline. Results significantly different from the zero-shot baseline are marked with * (paired t-test, Bonferroni correction, p ă 0.05). Results include unseen query languages
(QL), unseen document languages (DL) and unseen languages on both sides.
| FR–FR | ID–ID | ES–ES | PT–PT | ZH–ZH | VT–VT | AVG | ∆ZS | |
|-----------------|---------|---------|---------|---------|---------|-------|-------|------|
| Zero-shot | 27.2 | 26.8 | 28.2 | 27.9 | 24.8 | 22.8 | 26.3 | - |
| Fine-tuning | 30.5* | 30.6* | 31.5* | 31.2* | 29.1* | 28.6* | 30.3 | +4.0 |
| Multilingual CS | 26.4 | 26.7 | 27.6 | 27.3 | 22.3 | 23.1* | 25.6 | -0.7 |
| Wiki CS | 25.8* | 25.5* | 27.1* | 26.5* | 22.2* | 21.8* | 24.8 | -1.8 |
Table 7: MoIR: Monolingual results on unseen mMARCO languages in terms of MRR@10.
## C Hyperparameters, Datasets And Infrastructure
| Hyperparameter | Value |
|----------------------------|------------------------------------------------------|
| Maximum sequence length | 512 |
| Learning rate | 2e-5 |
| Training steps | 200,000 |
| Batch size | 64 |
| Warm-up steps (linear) | 5,000 |
| Positive-to-negative ratio | 1:4 |
| Optimizer | AdamW (Loshchilov and Hutter, 2019) |
| Encoder Model | nreimers/mMiniLMv2-L6-H384-distilled-from-XLMR-Large |
| Encoder Parameters | 106,993,920 |
Table 8: Hyperparameter values for re-ranking models. Following Reimers and Gurevych (2020) we extract negative samples from training triplets provided by MS MARCO (Bajaj et al., 2016). In the passage re-ranking task we re-rank for 6980 queries 1,000 passages respectively (qrels.dev.small). We construct 36 different language pairs from the mMARCO dataset (Bonifacio et al., 2021).
| Setup GPU | NVIDIA A100 (80 GB) |
|------------------------------------|-----------------------|
| Avg. Training Duration (per model) | 13 h |
| Avg. Test (per language pair) | 2 h |
Table 9: Computational environment. We use Huggingface to train our models (Wolf et al., 2020), NLTK for tokenization, ir-measures for evaluating MRR@10 (MacAvaney et al., 2022) and SciPy for significance testing.
## D Bilingual Lexicon Sizes
| Language | MUSE vocabulary | Parallel Wikipedia titles |
|------------|-------------------|-----------------------------|
| Arabic | 132,480 | 432,359 |
| German | 200,000 | 1,113,422 |
| Italian | 200,000 | 999,243 |
| Dutch | 200,000 | 822,563 |
| Russian | 200,000 | 906,750 |
Table 10: Size of bilingual lexicons. Two lexicons are used to substitute the words in English with their respective cross-lingual synonyms: (i) multilingual word embeddings provided by MUSE (Lample et al., 2018), (ii) Wikipedia page titles obtained from inter-language links, provided by linguatools project.9 The Wikipedia-based lexicons are several times larger that the MUSE vocabulary.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations (after Conclusion)
✓ A2. Did you discuss any potential risks of your work?
We train small distilled models (mentioned in Section 3 and Appendix C) to reduce environmental impact.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract, Section 1 Introduction
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 2
✓ B1. Did you cite the creators of artifacts you used?
Section 2, Section 3 and Appendix C.
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
We use Wikipedia as a resource for creating bilingual lexicons (Wiki-CS), which is openly licensed under "Wikipedia: Creative Commons Attribution-ShareAlike license."
MS-MARCO and mMARCO are standard benchmarks that have been released for non-commercial research purposes. (https://microsoft.github.io/msmarco/)
MUSE is distributed under "Attribution-NonCommercial 4.0 International" license. (https://github.com/facebookresearch/MUSE/blob/main/LICENSE)
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 1
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We make use of existing research data and do not release new corpora.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 3, Appendix D
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix C
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
## C ✓ **Did You Run Computational Experiments?** Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix C
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3, Appendix C
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 3, Section 4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix C
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
ma-etal-2023-domain | Domain-specific Attention with Distributional Signatures for Multi-Domain End-to-end Task-Oriented Dialogue | https://aclanthology.org/2023.findings-acl.194 | The end-to-end task-oriented dialogue system has achieved great success in recent years. Most of these dialogue systems need to accommodate multi-domain dialogue in real-world scenarios. However, due to the high cost of dialogue data annotation and the scarcity of labeled dialogue data, existing methods are difficult to extend to new domains. Therefore, it is important to use limited data to construct multi-domain dialogue systems. To solve this problem, we propose a novel domain attention module. It use the distributional signatures to construct a multi-domain dialogue system effectively with limited data, which has strong extensibility. We also define a adjacent n-gram pattern to explore potential patterns for dialogue entities. Experimental results show that our approach outperforms the baseline models on most metrics. In the few-shot scenario, we show our method get a great improvement compared with previous methods while keeping smaller model scale. | # Domain-Specific Attention With Distributional Signatures For Multi-Domain End-To-End Task-Oriented Dialogue
Xing Ma1, Peng Zhang1⇤**, Feifei Zhao**2 1*College of Intelligence and Computing, Tianjin University, Tianjin, China* 2*Beijing Wenge Technology Co.,Ltd, Beijing, China*
{machine981, pzhang}@tju.edu.cn [email protected]
## Abstract
The end-to-end task-oriented dialogue system has achieved great success in recent years.
Most of these dialogue systems need to accommodate multi-domain dialogue in real-world scenarios. However, due to the high cost of dialogue data annotation and the scarcity of labeled dialogue data, existing methods are difficult to extend to new domains. Therefore, it is essential to use limited data to construct multi-domain dialogue systems. To solve this problem, we propose a novel domain attention module. It uses distributional signatures to construct a multi-domain dialogue system effectively with limited data, which has strong extensibility. We also define an adjacent n-gram pattern to explore potential patterns for dialogue entities. Experimental results show that our approach outperforms the baseline models on most metrics. In the few-shot scenario, we show our method gets a great improvement compared with previous methods while keeping a smaller model scale.
## 1 Introduction
Task-oriented dialogue systems (TOD) aim to assist users in achieving specific goals, such as hotel reservations or weather inquiries, through limited dialogue turns. In contrast with chitchat systems, task-oriented dialogues generate responses based on a specific domain knowledge base (KB). Traditional pipeline methods (Young et al., 2013; Mrkšic´
et al., 2017) suffer from error propagation and huge cost for intermediate annotations such as dialogue states and actions. Recently, end-to-end methods
(Madotto et al., 2018; Wu et al., 2019; Qin et al.,
2020; He et al., 2020a; Qin et al., 2021; Ou et al., 2022) have achieved great success by taking the sequence-to-sequence (Seq2Seq) model to generate system responses directly with dialogue history and the specific domain knowledge base. These approaches have the advantages that the dialogue
⇤Corresponding Author.
![0_image_0.png](0_image_0.png)
Figure 1: Example of multi-domain dialogue (including weather, restaurant and schedule). Words with blue underlines are entities. The importance of the same word
"scheduled" in different domains to dialogue semantics is marked with different levels of red.
states and actions are latent, which alleviates the need for intermediate annotations.
However, existing end-to-end models are still trained on a large amount of domain-specific dialogue data and the corresponding knowledge base. In practice, task-oriented dialogue systems are often applied to multiple domains. It is difficult for end-to-end models to perform well for domains with limited dialogue data. Hence, it is important to explore how to use the data in the existing domain effectively and transfer the learned knowledge to the new domain with limited data.
Many works are proposed for multiple domains.
These methods can be broadly divided into three categories for dealing with different domains in the dataset. The first type of work (Eric and Manning, 2017; Madotto et al., 2018; Wu et al., 2019; He et al., 2020a,b; Raghu et al., 2021; Ou et al., 2022)
3109
![1_image_0.png](1_image_0.png)
does not distinguish between multi-domain data but uses them jointly for training. The second type of work (Wen et al., 2018; Qin et al., 2019) trains separate models for different domain data. The former can make the model learn the shared knowledge of dialogues in various domains and improve the generalization ability of models. However, it can not capture the special knowledge of each domain effectively. The latter can model specific knowledge of dialogues of different domains, but it is challenging to extend to new domains with limited data. The third type of work (Qin et al., 2020) proposes a dynamic fusion mechanism to learn shared and domain-independent knowledge and integrate them with the dynamic fusion mechanism. However, the trained model cannot be flexibly extended to new domains since the number of categories for the domain classifier is predefined. In addition, setting up separate encoders and decoders for each domain adds additional computing overhead.
To address the above issues, we propose a novel domain attention block, which leverages distributional signatures easily extracted from each domain as prior knowledge. As shown in figure 1, we observe that the same word, which appears in different domain dialogue contexts, often has different effects on context understanding and response generation. Furthermore, the KB entities generally have a fixed pattern when appearing in different contexts. We adopt inverse word frequency, *domain condition likelihood* to model the former and propose *adjacent n-gram patterns* to model the latter. Instead of one encoder-decoder framework for each domain as figure 2(b), we use a single LSTM
to obtain the latent domain knowledge and bridge the gap caused by statistic noise (Bao et al., 2020).
A domain feature fusion module is adopted to calculate the similarity between context and each domain feature and fuse the domain-specific attention obtained by the prior knowledge of each domain. We use an auxiliary domain loss to reduce the difference between semantic and signature blocks. Due to learning from the distributional signatures of each domain, our model can better capture general and domain-specific knowledge of multi-domain dialogue.
We conduct experiments on two publicly multi-domain task-oriented dialogue datasets, InCar assistant(Eric et al., 2017) and Multi-WOZ
2.1(Budzianowski et al., 2018). Our model outperforms baseline models on most metrics. In a low resource setting, our model outperforms the prior state-of-the-art model by 1.4% in entity F1 and by 1.8% in BLEU on In-Car Assistant dataset.
## 2 Methodology
As shown in figure 2(a), given dialogue of domain d (d 2 D, D is the set of all domains) between user and system, our model takes the tokens X = (x1, x2*, ..., x*T ) from dialogue history and the corresponding multi-domain distributional signatures Sd = (s1,d, s2,d, ..., sT,d) into semantic and signature block respectively. Then we use the context vector obtained by two blocks to initialize the knowledge module. Finally, the decoder generates final responses sequentially with the KB read-out vector and context hidden state.
## 2.1 Distributional Signatures
We obtain prior distributional signatures from multi-domain dialogue data to learn the general and domain-specific knowledge better.
Adjacent N-gram Patterns We propose adjacent n-gram patterns to model the fixed patterns of dialogue data. It is calculated through the conditional probability p*cond* of a forward or backward adjacent n-grams xˆn in context.
$$\left\{\begin{array}{l}{{\hat{x}_{n}=\left(x_{i+1*I_{s i g n}},x_{i+2*I_{s i g n}},x_{i+n*I_{s i g n}}\right)}}\\ {{\mathbf{p}_{d}^{n}(\hat{x}_{n})=\frac{1}{v}\sum_{x_{j}}^{v}\frac{\varepsilon}{\varepsilon+P_{d}^{n}(x_{j}\mid\hat{x}_{n})}}}\end{array}\right.\tag{1}$$
where v is the vocab size. I*sign* is 1 for forward n-gram 1 for backward n-gram.
Variable words near a fixed pattern are often related to entities. In other words, xi with larger pnd adjacent n-grams pattern is more likely to be an entity in the dialogue domain as shown in figure 3. We take both forward and backward adjacent n-grams pnd (ˆxn) as a feature of word xi. For implementation, we use *nltk toolkit* 1 (Bird et al., 2009)
to calculate the n-gram frequency of all dialog contexts.
Inverse Word Frequency Word frequency is an important measure of the information that a word provides in a dialog. Following Bao et al. (2020),
we reduce the weight of high-frequency words and increase the weight of low-frequency words. We define domain inverse word frequency iwfd.
$$i w f_{d}(x)=\frac{\varepsilon}{\varepsilon+P_{d}(x)},\qquad\qquad(2)$$
where " = 105, x is the word of domain d, and Pd(x) is the unigram likelihood over domain d data. The general inverse word frequency iwfg is calculated in a similar way, in which Pd(x) is replaced by Pg(x). Pg(x) is the unigram likelihood over the whole dataset.
Domain Conditional Likelihood Important words in a domain often play an essential role in the semantics of the dialogue in that domain. Therefore, we define a *domain conditional likelihood* cg
![2_image_0.png](2_image_0.png)
Figure 3: Example of words with low and high pd adjacent bi-grams. Words with red underlines is xi. The entities are marked blue.
to estimate the role of a word in some domain.
$$\begin{array}{l c r}{{c_{d}(x)=P(d\mid x)}}&{{}}&{{(3)}}\\ {{c_{g}(x)=\frac{\varepsilon}{\varepsilon+\mathcal{H}(c_{d}(x))}}}&{{}}&{{(4)}}\end{array}$$
where cd(x) is determined by conditional probability instead of predicting by using a regularized linear classifier (Bao et al., 2020). We employ an entropy operator H to measure the uncertainty of domain d.
In practice, we set the zero signatures mentioned above to ". For adjacent n-gram patterns, we set zero P nd (xj | xˆn) to " to calculate the final adjacent n-gram patterns pnd (ˆxn). Finally, we use the concatenation of all signatures as the domain-specific feature of a word.
## 2.2 Context Encoder
We divide the context encoder into two blocks to encode the semantics and distributional signatures of the contexts, respectively.
Semantic Block We first embed the dialogue history tokens X = (x1, x2*, ..., x*T ) into a fixeddimensional word vector by using an embedding matrix. Following Gangi Reddy et al. (2019), we use the GloVe word vector to initialize the embedding weight. Then, we employ a bidirectional GRU
(Cho et al., 2014) to encode the embedded dialogue history.
$$\hbar_{i}^{s e}=\mathrm{BiGRU}(\phi^{e m b}(x_{i}),\hbar_{i-1}^{s e})$$
where emb(·) is the word embedding matrix. All context hidden states Hse = (hse 1 , hse 2 *, ...,* hseT )
are obtained by this way. Following Zhong et al.
(2018), we adopt simple self-attention over Hctx to get the semantic attention of context.
$$\begin{array}{l}{{{\mathbf{u}}_{i}=W_{s e,2}(\sigma(W_{s e,1}{\mathbf{h}}_{i}^{s e}+b_{s e,1}))}}\\ {{{\mathbf{a}}_{i}=\frac{\exp({\mathbf{u}}_{i})}{\sum_{j}\exp({\mathbf{u}}_{j})}}}\end{array}\tag{6}$$
1https://github.com/nltk/nltk
$\mathfrak{J}$ 1.
![3_image_0.png](3_image_0.png)
where is an activation function. Wse,1, Wse,2, bse,1 are trainable parameter. Finally, we get semantic attention Attse = (ase 1 , ase 2 *, ...,* asen ) of the dialogue history.
Signature Block We leverage the distributional signatures si to capture the general and domainspecific knowledge. However, there is much noise in these data, which may interfere with the training process. We take a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) to bridge the gap following Bao et al. (2020).
$$\mathbf{s}_{i,d}=[\mathbf{iwf}_{d}(x_{i}),\mathbf{iwf}_{g}(x_{i}),\mathbf{c}_{d}(x_{i}),\mathbf{c}_{g}(x_{i}),\mathbf{p}_{d}^{n}(x_{i})]\tag{8}$$ $$\mathbf{s}_{i,d}=[\mathbf{iwf}_{d}(x_{i}),\mathbf{iwf}_{g}(x_{i}),\mathbf{c}_{d}(x_{i}),\mathbf{c}_{g}(x_{i}),\mathbf{p}_{d}^{n}(x_{i})]\tag{9}$$
i1) (9)
where pnd (xi) is concatenation of [pnp
$$\mathbf{h}_{i,d}^{s i}=\mathrm{BiLSTM}(\mathbf{s}_{i},\mathbf{h}_{i-1}^{s i})$$
d (xi)], np 2
{2, 3, 4}. Similar to the semantic block, we adopt
a self-attention layer over signatures hidden states
Hsi
d = (hsi
1,d, hsi
2,d, ..., hsiT,d) to get signature attention Attsid = (asi
1,d, asi
2,d, ..., asiT,d). Finally, we
obtain nd signature attention and single context
attention.
Domain Feature Fusion To fuse multiple domain features, we propose a domain feature fusion function based similarity between semantic attention Attse and signature attention Attsid as shown in figure 4.
$$\begin{array}{l}{{v_{d}=\langle A t t^{s e},A t t_{d}^{s i}\rangle}}\\ {{a_{d}=\frac{\exp(v_{d})}{\sum_{d_{i}}\exp(v_{d_{i}})}}}\end{array}$$
where h·i is scalar product function. We define P*domain* = [a1, a2*, ...a*dn ]. Then, We get the domain weights ad to merge all domain-specific attention.
$$A t t^{s i}=\sum_{d}a_{d}A t t_{d}^{s i}$$
Finally, we use the fused domain attention Attsid and semantic attention Attse over Hse to get the context vector c
$$(12)$$
$$c_{e n c}=W_{e n c}[\sum_{i}a_{i}^{s e}h_{i}^{s e},\sum_{i}a_{i}^{s i}h_{i}^{s e}]\quad(13)$$
## 2.3 Knowledge Module
To obtain the knowledge needed to generate system responses, the model needs to interact with the knowledge base and get query results. We adopt the memory network with the global-to-local pointer mechanism (Wu et al., 2019) to encode and query the external knowledge.
The external knowledge includes the corresponding knowledge base Kd and dialogue history X.
The ith entity triplet ei = (ei,sub, ei,rel, e*i,obj* ) is represented as cmi = BOW(Cm(e)). BOW(·)
is a bag of word function and Cm is an embedding matrix for a k-hop memory network, where m 2 {1, 2*, ..., k*}. We initialize the memory representation in the encoder stage and query the memory model sequentially in the decoder stage.
$$({\mathfrak{g}})$$
Initialize Memory Representation We use the final context vector cenc as the initial vector q1*init* to initialize the memory module. Then, we get the global pointer gmi through the k-hop mechanism. The whole initialize process is calculated as follows:
$$\begin{array}{l}{{p_{i}^{m}=S o f t m a x(\langle q_{i n i t}^{m},c_{i}^{m}\rangle)}}\\ {{g_{i}^{m}=S i g m o i(\langle q_{i n i t}^{m},c_{i}^{m}\rangle)}}\\ {{o_{i n i t}^{m}=\sum_{i}p_{i}^{m}c_{i}^{m+1}}}\\ {{q_{i n i t}^{m+1}=q_{i n i t}^{m}+o_{i n i t}^{m}}}\end{array}$$
where om*init* is the weighted sum over cmi . We obtain a memory read-out vector qk+1 init to initialize the decoder. The last hop global pointer gki is used to strengthen the KB entities representation that appears in contexts in the decoder stage.
$$\quad(10)$$ $$\quad(11)$$
Query Memory Module We get a context vector c*t,dec* in the t step of the decoder stage and use it to query the memory module. We apply the
$$3112$$
global pointer to give different weights to each entity. Then, we calculate the similarity between context vector and entity representations based on the dot product to obtain the copy probability of entities.
$$\mathbf{p}_{i,t}^{m}=S o f t m a x(\langle\mathbf{c}_{t,d e c},\mathbf{c}_{i}^{m}\mathbf{g}_{i}^{k}\rangle)\quad\quad(18)$$
We define the query result as the last hop probability P kb t = [pk1,t, pk2,t, ..., pkT +b,t, ]. We can select the word for generated responses with the highest pki,t.
## 2.4 Attention Decoder
We apply a sketch decoder to first generate a coarse response in which sketch tags substitute all the entities. For example, a sentence "dish_parking is five_miles away" is written as "@poi is @distance away". Then we use the copied entity as mentioned in section 2.3 to replace the sketch tag.
We adopt a Bi-GRU to generate coarse responses and use the concatenation of KB read-out vector qk+1 init and the last context hidden states hse T as the initial vector (different with memory initialization in section 2.3)
$$\begin{array}{l}{{h_{0}^{d e c}=W_{c o n c a t}[q_{i n i t}^{k+1},h_{T}^{s e}]+b_{c o n c a t}}}\\ {{h_{t+1}^{d e c}=\mathrm{BiGRU}(\phi^{e m b}(x_{t}),h_{t}^{d e c})}}\end{array}$$
T ] + b*concat* (19)
t ) (20)
where xt is the generated token at t timestep of the decoder. We adopt the attention mechanism
(Bahdanau et al., 2015) to reduce the information loss between the encoder and decoder. In addition, we add the coverage mechanism (See et al., 2017)
to reduce excessive attention on specific contexts.
$$\mathbf{e}_{i}^{t}=v^{T}tanh(W_{e}\mathbf{h}_{i}^{se},W_{d}\mathbf{h}_{t}^{dec},w_{c}\mathbf{c}_{i}^{t}+b_{a})\tag{21}$$ $$\mathbf{a}_{t}=Softmax(\mathbf{e}^{t})$$ (22) $$\mathbf{c}_{dec,t}=\sum_{i}\mathbf{a}_{i}^{t}\mathbf{h}_{t}^{dec}$$ (23) $$\mathbf{P}_{t}^{vocab}=Softmax(V[\mathbf{h}_{t}^{dec},\mathbf{c}_{dec,t}]+b)\tag{24}$$
where c*dec,t* is used as a query vector to interact with the knowledge module.
Finally, we generate the coarse responses through the final probability P vocab t . If a sketch tag is in coarse responses, we use P kb t to obtain the corresponding entity.
## 2.5 Joint Training
To encourage the semantic module to learn more from distributional signatures modules, we design a domain feature loss L*domain* to close the gap between the two blocks. The final loss function is defined as:
$$\mathcal{L}_{domain}=\sum_{d}-y_{d}\log a_{d}\tag{25}$$ $$\mathcal{L}_{coverage}=\sum_{i}min(a_{i}^{t},c_{i}^{t})$$ (26) $$\mathcal{L}=\mathcal{L}_{basic}+\mathcal{L}_{domain}+\mathcal{L}_{coverage}\tag{27}$$ where $y_{d}\in\{0,1\}$ and $\mathcal{L}_{basic}$ is same as GLMP
(Wu et al., 2019). The details about L*basic* can be found in appendix A.1.
## 3 Experiment 3.1 Datasets
We conducted the experiments on two publicly available task-oriented dialogue datasets, which include two multi-domain datasets: In-Car Assistant (Eric et al., 2017) and Multi-WOZ 2.1
(Budzianowski et al., 2018). We follow the partition as Madotto et al. (2018); Wu et al. (2019)
on In-Car Assistant and Qin et al. (2020) on MultiWOZ 2.1. More details about the two datasets are presented in appendix A.2.
$$\begin{array}{l}{(19)}\\ {(20)}\end{array}$$
## 3.2 Experimental Settings
We set n to {2, 3, 4} for adjacent n-gram pattern signatures. The model is trained using Adam optimizer (Kingma and Ba, 2015) and learning rate starts from 1e3 to 1e4. We select dropout rate from {0.2, 0.3} and batch size from {8, 16, 32}.
We also use the pre-trained GloVe vector (Pennington et al., 2014) to initialize our embedding. The words not in GloVe are initialized using Glorot uniform distribution (Glorot and Bengio, 2010). The hidden units of GRU are set to the same dimension with embedding. We adopt an exponential schedule sampling(Bengio et al., 2015) in the decoder stage.
You can find more details about hyper-parameters in appendix A.3.
## 3.3 Baselines
(1) **Mem2Seq** (Madotto et al., 2018) adopts a memory network to encode KB and combines the vocabulary and entity probability through a hard gate.
(2) **GLMP** (Wu et al., 2019) applies the global-tolocal pointer mechanism to query the knowledge module.
(3) **KB-retriever** (Qin et al., 2019) retrieves the most relevant KB row and filters the irrelevant information in the whole process.
| In-Car Assistant | Multi-WOZ 2.1 | | | | | | | | | |
|--------------------|-----------------|------|----------|---------|-------------|------|------|------------|------------|-------|
| Model | BLEU | F1 | Calendar | Weather | Navigate F1 | BLEU | F1 | Restaurant | Attraction | Hotel |
| F1 | F1 | F1 | F1 | F1 | | | | | | |
| Mem2Seq | 12.6 | 33.4 | 49.3 | 32.8 | 20.0 | 6.6 | 21.6 | 22.4 | 22.0 | 21.0 |
| GLMP | 13.9 | 60.7 | 54.6 | 56.5 | 53.0 | 6.9 | 32.4 | 38.4 | 24.4 | 28.1 |
| KB-retriever | 17.2 | 59.0 | 71.8 | 57.8 | 52.5 | - | - | - | - | - |
| Fg2Seq | 16.8 | 61.1 | 73.3 | 57.4 | 56.1 | 13.5 | 36.0 | 40.4 | 41.7 | 30.9 |
| DA-HIMN | 16.2 | 61.2 | 73.8 | 60.6 | 54.3 | 9.2 | 37.7 | 39.3 | 37.4 | 36.1 |
| DFNet | 14.4 | 62.7 | 73.1 | 57.9 | 57.6 | 9.4 | 35.1 | 40.9 | 28.1 | 30.6 |
| CD-NET | 17.8 | 62.9 | 75.4 | 61.3 | 56.7 | 11.9 | 38.7 | 41.7 | 38.9 | 36.3 |
| Our model | 18.0 | 63.0 | 72.3 | 55.2 | 61.4 | 12.3 | 39.5 | 41.2 | 45.5 | 37.0 |
Model **Entity F1**
test
full model 63.0
w/o Domain Loss 62.0 1.0 + w/o Signature Block 61.4 1.6
w/o coverage attention 61.8 1.2
origin model 60.7 2.3
(4) **DFnet** (Qin et al., 2020) adopts a dynamic fusion mechanism to learn shared knowledge and domain-independent knowledge.
(5) **Fg2Seq** (He et al., 2020a) uses a flow operation to strengthen the connection between the dialogue history and the knowledge base.
(6) **CD-NET** (Raghu et al., 2021) proposes a pairwise similarity-based KB distillation to enhance the relation between KB and context.
(7) **DA-HIMN** (Ou et al., 2022) combines requestaware with KB-aware to better capture the latest request of users.
We run their code for *DA-HIMN* to obtain the results on Multi-WOZ2.1. For the rest of baselines, we adopt the results reported from their paper.
## 3.4 Results
We adopt the micro Entity F1 and BLEU as our evaluation metrics following (Madotto et al., 2018; Wu et al., 2019; Qin et al., 2020). The results on the two datasets are shown in Tabel 1. We can see that our model outperforms baselines on most metrics.
We mainly compare our model with *GLMP* and DF-net, which are similar frameworks. Our model outperforms *DF-net* 0.3% and 4.4% in entity F1 on In-Car Assistant and Multi-WOZ2.1, respectively.
We also exceed 3.3% over *DF-net* in BLEU on average. In addition, our model also outperforms GLMP 4.7% and 4.8% in entity F1 and BLEU on average. The results indicate that the signature block and all distributional signatures effectively help the model to learn different domain knowledge and mitigate the domain bias.
## 3.5 Analysis
We discuss the validity of the model through experiments on In-Car Assistant dataset from the following aspects. We first conduct ablation experiments to verify the effectiveness of our model and explore the role of different signatures. Then we evaluate our model in a low resource setting and calculate the model size compared with *DF-net* and *GLMP*.
Finally, we use practical cases to demonstrate the effectiveness of the method.
## 3.5.1 Ablation
Ablation of Components We conduct some ablation experiments on our model. The results are shown in Table 2. (1) We first remove the domain loss and just keep the signature block. Our model achieves 62.0% in entity F1 with a drop of 1.0%.
The performance drop demonstrates that domain loss is critical for instructing the semantic module.
(2) Based on (1), we remove the whole signature block, and F1 score drops to 61.4%, indicating that distribution signatures obviously contribute to the model's performance. (3) Then, we remove the coverage context attention mechanism. The performance of the model decreases significantly. The covering attention mechanism indirectly affects the performance of querying knowledge base in the generation process by influencing the generation process of the model.
Ablation of signatures We evaluate our model with different signatures to explore their role in the model. We mainly care about the relation between word features (*inverse word frequency* and domain conditional likelihood) and *adjacent n-gram patterns*. In addition, we also study the n value of adjacent n-grams. The experiment results are shown
(a) Navigation domain (b) Schedule domain (c) Weather domain **(d) Average performance**
![6_image_1.png](6_image_1.png)
![6_image_2.png](6_image_2.png)
![6_image_4.png](6_image_4.png)
![6_image_0.png](6_image_0.png)
![6_image_3.png](6_image_3.png)
performs poorly when the training set ratio is low.
It indicates that inaccurate distributional signatures of low-ratio training set bring bias to our model.
in figure 6. It can be seen that model only using word features gets a good result because the word features have a strong relationship with the domain. Our model achieves the best performance when n 2 {2, 3, 4}. When n set adds 1, it has a little drop in entity F1. This may be caused by many short patterns unrelated to entities interfering with the learning process of the model. The performance drops significantly when n set removes 4, which indicates our model suffers from the lack of short pattern features. We also observe that model only using adjacent n-gram patterns has bad performance. The domain loss can not capture the relation of different domains without word features.
## 3.5.2 Domain Adaption
We follow Qin et al. (2020) to conduct domain adaption experiments. We keep two domain data unchanged and use different ratio resources of the last domain data. The ratio is selected from
[1%,5%,10%,20%,30%,50%]. We adopt the same GloVe vector and dimension to *DF-net* and *GLMP*
to remove the influence of irrelevant factors. As shown in figure 5, We can observe that our model achieves competitive results with *DF-net* and has significant improvement over *GLMP* in total. Particularly, our model gets 1.8% higher in BLEU than DF-net. It is because we use the hidden states instead of context vector over attention to initialize the decoder. In addition, we find that our model
## 3.5.3 Model Scale
We compare our model size with other baselines in the same setting as shown in Table 3. Our model has 3.6MB larger than *GLMP* and 21.8MB smaller than *DF-net*. In addition, *DF-net* sets up an independent encoder and decoder for each domain.
The model size grows by 4.6MB per extra domain, except for word embedding. However, we achieve competitive results with a smaller model scale, which does not change much with the increase of the dialogue domains. It denotes that our signatures block improves the efficiency of our model.
## 3.5.4 Case Study
We use some cases of dataset to analyze the effects of our method, which are given in Table 4.
We observe that our model generates more fluent responses with right information. In case 1, our model generates right entities compared with *DFnet* and the response is more fluent than *GLMP*. The question of case 2 mentions *'next'*, which requires the model to make judgments based on the time attribute of dinners. Our model outperforms other baselines and obtains the right information from KB in this situation. In addition, our model can better generate complex responses like case 3 for which other baseline models fail to produce smooth sentences. Our generated response is informative and grammatically correct.
| dialog examples | |
|-------------------|--------------------------------------------------------------------------------------------------------------------------------------------------|
| KB | [palo_alto_garage_r, 481_amaranta_ave, parking_garage, car_collision_nearby, 1_miles] |
| Question | where is a nearby parking_garage ? |
| Response | the nearest one is palo_alto_garage_r , it s just 1_miles away |
| GLMP | [palo_alto_garage_r] is [1_miles] away. (Right info.) |
| DF-net | [stanford_oval_parking] is [1_miles] away. (Wrong poi info.) |
| Our model | the closest parking_garage is [palo_alto_garage_r] , it s [1_miles away]. (Right info. and fluent response) |
| KB | [dinner, jon, the_11th, 6pm] |
| Question | what time is my next dinner event and who will be attending ? |
| Response | the one at 6pm with jon is on the_11th. |
| GLMP | your next dinner is on the_11th at [5pm] with [jon]. (Wrong time info.) |
| DF-net | you have two dinner scheduled one on the_20th with [jon] and one on the_11th with tom and one on [6pm] with [jon]. (Wrong problem understanding) |
| Our model | your next dinner is on the_11th at [6pm] with [jon]. (Right understanding and info.) |
| KB | [manhattan, wednesday, high, 50f, low, 40f, hail] |
| Question | what will be the temperature in manhattan on wednesday ? |
| Response | it appears as if there will be a low of 40f and a high of 50f. |
| GLMP | the temperature in manhattan will be on wednesday. (Wrong senmantics) |
| DF-net | the weather in manhattan will be wednesday on monday. (Wrong senmantics) |
| Our model | the temperature in manhattan will be low of [40f] , high of [50f] on wednesday. (Right temp. info.) |
## 4 Related Work
Existing end-to-end approaches to modeling multidomain datasets can be divided into three categories. The first strand of work trains the model on the mixed data directly. Madotto et al. (2018) first adopts end-to-end memory network (Sukhbaatar et al., 2015) to encode KB items and dialogue contexts. Wu et al. (2019) proposes a global-tolocal pointer mechanism to improve the accuracy of querying KB based on the memory network.
Our model retains the main framework of Wu et al.
(2019). He et al. (2020a) uses a flow operation to strengthen the connection between the dialogue history and the knowledge base. Raghu et al. (2021)
also propose a pairwise similarity-based KB distillation to achieve the same purpose as He et al.
(2020a). Ou et al. (2022) combines request-aware with KB-aware to better capture the latest request of users. Xie et al. (2022) models task-oriented dialogues as a text-to-text task and fine-tunes the T5 model (Raffel et al., 2020) on the mixed dataset.
These works treat data from different domains in the same way, which ignores domain-specific knowledge. The second strand of work trains separate models for each domain. Wen et al. (2018)
use the dialog state representation of some domain to query the knowledge base. Qin et al. (2019) restricts the query result from a single KB record.
They both only focus on domain-specific knowledge and lack general knowledge. The third strand of work (Qin et al., 2020) proposes a dynamic fusion network to handle multi-domain dialog, which needs multiple encoder-decoder for each domain
## And Lacks Flexibility.
The distributional signature of text data contains rich semantic and structural knowledge. Bao et al.
(2020) uses the distributional signature to generate general and class-specific attention and improve text classification performance. In our work, we leverage the dialogue data signature of different domains instead of classes. Following Bao et al.
(2020), we employ a Bi-LSTM (Hochreiter and Schmidhuber, 1997) to bridge the gap caused by statistic noise. In addition, we take inspiration from Zhong et al. (2010), which proposes an effective pattern taxonomy model. We design adjacent n-gram patterns to discover entities better in the dialogue context. To our best knowledge, we are the first to use distributional signatures to model multi-domain task-oriented dialog.
## 5 Conclusion
In this work, we propose a domain attention module with distributional signatures of dialogue corpus to capture domain-specific knowledge. We combine the features of different domains in an extensible way, and a domain loss is used to instruct our model to learn better from signatures. In addition, we define a *adjacent n-gram pattern* to mine the KB entities in the dialogue context. We also adopt attention with a coverage mechanism to improve the quality of generated responses. Extensive experiments have demonstrated the effectiveness of our method.
## 6 Acknowledgment
This work is supported in part by the Natural Science Foundation of China (grant No.62276188 and No.61876129), TJU-Wenge joint laboratory funding and MindSpore.
## 7 Limitation
Although our model achieves competitive results with baseline models, some limitations are summarized as follows.
1. The process of extracting data distributional signatures is time-consuming, especially for datasets with more diverse dialogue patterns.
The process of calculating adjacent n-grams is slow. In addition, repeated string manipulation for long texts also needs to be optimized 2. The experiment results are easily affected by the fluctuation of hyper-parameters, especially the signature block hidden size. There is some noise in the distributional signatures. Under different hyper-parameters, noise may have different effects and directly affect experiment results.
3. Our model performs poorly when the training set is too small. The distributional signatures of small data interfere with the model.
## 8 Ethics Statement
This paper proposes a domain attention module with distributional signatures to better learn the domain-specific and general knowledge. We also define an adjacent n-gram pattern to mine the entities in the context. We work within the purview of acceptable privacy practices and strictly follow the data usage policy. We use public datasets and consist of their intended use in experiments. We described our experimental setting in detail to ensure reproducibility. We neither introduce any social/ethical bias to the model nor amplify any bias in the data, and our work will not have any social consequences or ethical issues.
## References
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In *3rd International* Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Yujia Bao, Menghua Wu, Shiyu Chang, and Regina Barzilay. 2020. Few-shot text classification with distributional signatures. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems 28:
Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 1171–1179.
Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python: analyzing text with the natural language toolkit. " O'Reilly Media, Inc.".
Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gašic. 2018. ´ MultiWOZ - a largescale multi-domain Wizard-of-Oz dataset for taskoriented dialogue modelling. In *Proceedings of the* 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016–5026, Brussels, Belgium. Association for Computational Linguistics.
Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724–
1734, Doha, Qatar. Association for Computational Linguistics.
Mihail Eric, Lakshmi Krishnan, Francois Charette, and Christopher D. Manning. 2017. Key-value retrieval networks for task-oriented dialogue. In *Proceedings* of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 37–49, Saarbrücken, Germany.
Association for Computational Linguistics.
Mihail Eric and Christopher Manning. 2017. A copyaugmented sequence-to-sequence architecture gives good performance on task-oriented dialogue. In *Proceedings of the 15th Conference of the European* Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 468–473, Valencia, Spain. Association for Computational Linguistics.
Revanth Gangi Reddy, Danish Contractor, Dinesh Raghu, and Sachindra Joshi. 2019. Multi-level memory for task oriented dialogs. In *Proceedings of the* 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3744–3754, Minneapolis, Minnesota. Association for Computational Linguistics.
Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In *Proceedings of the thirteenth international*
conference on artificial intelligence and statistics, pages 249–256. JMLR Workshop and Conference Proceedings.
Zhenhao He, Yuhong He, Qingyao Wu, and Jian Chen.
2020a. Fg2seq: Effectively encoding knowledge for end-to-end task-oriented dialog. In 2020 IEEE
International Conference on Acoustics, Speech and Signal Processing, ICASSP 2020, Barcelona, Spain, May 4-8, 2020, pages 8029–8033. IEEE.
Zhenhao He, Jiachun Wang, and Jian Chen. 2020b.
Task-oriented dialog generation with enhanced entity representation. In *INTERSPEECH*, pages 3905–
3909.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural computation*, 9(8):1735–
1780.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Andrea Madotto, Chien-Sheng Wu, and Pascale Fung.
2018. Mem2Seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1468–1478, Melbourne, Australia. Association for Computational Linguistics.
Nikola Mrkšic, Diarmuid Ó Séaghdha, Tsung-Hsien ´
Wen, Blaise Thomson, and Steve Young. 2017. Neural belief tracker: Data-driven dialogue state tracking.
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 1777–1788, Vancouver, Canada.
Association for Computational Linguistics.
Yangyang Ou, Peng Zhang, Jing Zhang, Hui Gao, and Xing Ma. 2022. Incorporating dual-aware with hierarchical interactive memory networks for taskoriented dialogue.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In *Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 1532–1543, Doha, Qatar.
Association for Computational Linguistics.
Bowen Qin, Min Yang, Lidong Bing, Qingshan Jiang, Chengming Li, and Ruifeng Xu. 2021. Exploring auxiliary reasoning tasks for task-oriented dialog systems with meta cooperative learning. In *Proceedings* of the AAAI Conference on Artificial Intelligence, volume 35, pages 13701–13708.
Libo Qin, Yijia Liu, Wanxiang Che, Haoyang Wen, Yangming Li, and Ting Liu. 2019. Entity-consistent end-to-end task-oriented dialogue system with KB
retriever. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 133–142, Hong Kong, China. Association for Computational Linguistics.
Libo Qin, Xiao Xu, Wanxiang Che, Yue Zhang, and Ting Liu. 2020. Dynamic fusion network for multidomain end-to-end task-oriented dialog. In *Proceedings of the 58th Annual Meeting of the Association* for Computational Linguistics, pages 6344–6354, Online. Association for Computational Linguistics.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67.
Dinesh Raghu, Atishya Jain, Mausam, and Sachindra Joshi. 2021. Constraint based knowledge base distillation in end-to-end task oriented dialogs. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 5051–5061, Online.
Association for Computational Linguistics.
Abigail See, Peter J. Liu, and Christopher D. Manning.
2017. Get to the point: Summarization with pointergenerator networks. In *Proceedings of the 55th Annual Meeting of the Association for Computational* Linguistics (Volume 1: Long Papers), pages 1073–
1083, Vancouver, Canada. Association for Computational Linguistics.
Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 2440–2448.
Haoyang Wen, Yijia Liu, Wanxiang Che, Libo Qin, and Ting Liu. 2018. Sequence-to-sequence learning for task-oriented dialogue with dialogue state representation. In *Proceedings of the 27th International Conference on Computational Linguistics*, pages 3781–
3792, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Chien-Sheng Wu, Richard Socher, and Caiming Xiong.
2019. Global-to-local memory pointer networks for task-oriented dialogue. In *7th International Conference on Learning Representations, ICLR 2019, New* Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I Wang, et al. 2022. Unifiedskg: Unifying and multi-tasking structured knowledge grounding with text-to-text language models. *ArXiv preprint*, abs/2201.05966.
Steve Young, Milica Gašic, Blaise Thomson, and Ja- ´
son D Williams. 2013. Pomdp-based statistical spoken dialog systems: A review. *Proceedings of the* IEEE, 101(5):1160–1179.
Ning Zhong, Yuefeng Li, and Sheng-Tang Wu. 2010.
Effective pattern discovery for text mining. *IEEE* transactions on knowledge and data engineering, 24(1):30–44.
Victor Zhong, Caiming Xiong, and Richard Socher.
2018. Global-locally self-attentive encoder for dialogue state tracking. In *Proceedings of the 56th Annual Meeting of the Association for Computational* Linguistics (Volume 1: Long Papers), pages 1458–
1467, Melbourne, Australia. Association for Computational Linguistics.
## A Appendix A.1 Loss Function
The terms L*basic* in loss function is same as Wu et al. (2019).
$${\mathcal{L}}_{b a s i c}={\mathcal{L}}_{g}+{\mathcal{L}}_{v}+{\mathcal{L}}_{l}$$
where Lv and Lg are the cross entropy of token and local pointer, Ll is the binary cross entropy of global pointer. The labels of global and local pointer are determined by entities of responses.
$$\hat{g}_{m}=\begin{cases}0&if\,Object(e_{m})\in Y\\ 1&otherwise\end{cases}\tag{29}$$ $$\hat{l}_{t}=\begin{cases}max(z)&if\exists z,\,\,s.t.\,y_{t}=Object(e_{z})\\ T+b+1&otherwise\end{cases}\tag{30}$$
where Y = (y1, y2*, ..., y*n) is the groud truth of reponses. *Object*(·) is function to extract the object of triplet. The loss of three terms is calculated as:
$$\mathcal{L}_{g}=-\sum_{m=1}^{b+T}\hat{g}_{m}\log g_{m}+(1-\hat{g}_{m})\log(1-g_{m})\tag{31}$$ $$\mathcal{L}_{l}=-\sum_{t=1}^{n}-\hat{l}_{t}\log P_{t}^{kb}\tag{32}$$ $$\mathcal{L}_{v}=-\sum_{t=1}^{n}-\hat{y}_{t}\log P_{t}^{vocab}\tag{33}$$
Then we sum the three terms up to get L*basic*.
You can find more details about the global-to-local pointer mechanism in Wu et al. (2019).
## A.2 Dataset
We follow the partition as Madotto et al. (2018);
Wu et al. (2019) on In-Car Assistant and Qin et al.
(2020) on Multi-WOZ 2.1. The details are about two dataset as tabel 5
| Dataset | In-Car Assistant | | |
|----------------------|--------------------|------------|----------|
| Vocab size | 1651 | | |
| Avg. dialog turns | 2.6 | | |
| Avg. length of sent. | 8.1 | | |
| Domain Dialogs | Navigate | Weather | Schedule |
| 1000 | 996 | 1035 | |
| Partition | Train | Dev | Test |
| 2425 | 302 | 304 | |
| Dataset | Multi-Woz2.1 | | |
| Vocab size | 3725 | | |
| Avg. dialog turns | 4.6 | | |
| Avg. length of sent. | 14.4 | | |
| Domain Dialogs | Restaurant | Attraction | Hotel |
| 1309 | 150 | 635 | |
| Partition | Train | Dev | Test |
| 1839 | 117 | 141 | |
## A.3 Hyper-Parameters
$${\mathfrak{A}}_{1}^{\dagger}$$
We set the encoder-decoder hidden size from
{100,200} and signature block from {25, 50, 100}.
We use the *glove.6b* word vector to initialize the embedding matrix of the encoder and decoder. Then we random initialize the embedding matrix of the memory network. For tokens with '_', we first split them into a token list and use the BOW of word vectors. We adopt an exponential schedule sampling for decoding, and the schedule is calculated as He et al. (2020a):
$$t f r={\frac{\alpha}{\alpha+e^{\frac{e p o c h}{\alpha}}-1}}\qquad\qquad(34)$$
where tfr is sampling probability from ground truth. We set ↵ from {10, 15, 20}. For adjacent n-gram patterns, we set n from {2, 3, 4}.
## A.4 Experiment Details
For the **main experiment** results, we adopt the reported results of baselines except *DA-HIMN*.
For the **ablation study**, we train our model on the hyper-parameter set to get the best result of different signatures.
For the **domain adaption experiment** results, we rerun the code of *DF-net* and *GLMP*. To avoid the influence of model dimensions and pre-trained word vectors on the experimental results, we adopt the same model dimensions (200d) and GloVe vectors (*glove.6B.200d*) for our model and the baseline model. Word vectors are used in the same way as in A.3 For **model scale**, we add the Multi-WOZ2.1 dataset based on In-Car Assistant to initialize the lang. We calculate the model size in the same model dimension setting and compute the size growth on average of the three domains of Multi-WOZ2.1.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
section 6
✓ A2. Did you discuss any potential risks of your work?
section 7
✓ A3. Do the abstract and introduction summarize the paper's main claims?
section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3.2
✓ B1. Did you cite the creators of artifacts you used?
section 3.2
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
section 7
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
We use pre-trained word vector in our work and our use is consistent with their intended use.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. We didn't use any data at risk in our work
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
In the footnote
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix
## C ✓ **Did You Run Computational Experiments?** Section 3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
section 3.5.3 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? section 3.2
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Appendix
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)? section 2.1 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
lei-etal-2023-ckdst | {CKDST}: Comprehensively and Effectively Distill Knowledge from Machine Translation to End-to-End Speech Translation | https://aclanthology.org/2023.findings-acl.195 | Distilling knowledge from a high-resource task, e.g., machine translation, is an effective way to alleviate the data scarcity problem of end-to-end speech translation. However, previous works simply use the classical knowledge distillation that does not allow for adequate transfer of knowledge from machine translation. In this paper, we propose a comprehensive knowledge distillation framework for speech translation, CKDST, which is capable of comprehensively and effectively distilling knowledge from machine translation to speech translation from two perspectives: cross-modal contrastive representation distillation and simultaneous decoupled knowledge distillation. In the former, we leverage a contrastive learning objective to optmize the mutual information between speech and text representations for representation distillation in the encoder. In the later, we decouple the non-target class knowledge from target class knowledge for logits distillation in the decoder. Experiments on the MuST-C benchmark dataset demonstrate that our CKDST substantially improves the baseline by 1.2 BLEU on average in all translation directions, and outperforms previous state-of-the-art end-to-end and cascaded speech translation models. |
## Ckdst: Comprehensively And Effectively Distill Knowledge From Machine Translation To End-To-End Speech Translation Yikun Lei1, Zhengshan Xue1, Haoran Sun1, Xiaohu Zhao1**, Shaolin Zhu**1 Xiaodong Lin3, Deyi Xiong1,2∗
1 College of Intelligence and Computing, Tianjin University, Tianjin, China 2 School of Computer Science and Technology, Kashi University, Kashi, China 3 Department of Management Science and Information Systems, Rutgers University
{yikunlei,xuezhengshan,hrsun,xhzhao,slzhu,dyxiong}@tju.edu.cn [email protected]
## Abstract
Distilling knowledge from a high-resource task, e.g., machine translation, is an effective way to alleviate the data scarcity problem of endto-end speech translation. However, previous works simply use the classical knowledge distillation that does not allow for adequate transfer of knowledge from machine translation. In this paper, we propose a comprehensive knowledge distillation framework for speech translation, **CKDST**, which is capable of comprehensively and effectively distilling knowledge from machine translation to speech translation from two perspectives: cross-modal contrastive representation distillation and simultaneous decoupled knowledge distillation. In the former, we leverage a contrastive learning objective to optimize the mutual information between speech and text representations for representation distillation in the encoder. In the later, we decouple the non-target class knowledge from target class knowledge for logits distillation in the decoder. Experiments on the MuST-C benchmark dataset demonstrate that our CKDST substantially improves the baseline by 1.2 BLEU on average in all translation directions, and outperforms previous state-ofthe-art end-to-end and cascaded speech translation models. The source code is available at https://github.com/ethanyklei/CKDST.
## 1 Introduction
End-to-end (E2E) speech-to-text translation (ST),
directly translating speech in one language into text in another, has recently attracted increasing attention (Duong et al., 2016; Zhang et al., 2020; Xu et al., 2021; Ye et al., 2022). Compared with traditional cascaded ST, E2E ST does not require automatic transcription, which endows itself with less error propagation and lower latency.
However, parallel ST data that consist of speech inputs and target translations, are proverbially limited, especially in comparison with automatic speech recognition (ASR) and machine translation
(MT) data. In order to mitigate this issue, previous efforts leverage pre-training approaches (Xu et al., 2021; Ao et al., 2022) and multi-task learning (MTL) frameworks (Ye et al., 2021; Tang et al.,
2021; Han et al., 2021) to transfer knowledge from ASR and/or MT to ST. Among them, knowledge distillation (KD) (Hinton et al., 2015) has proved to be an effective way to improve ST performance by transferring knowledge from MT to ST (Liu et al.,
2019; Xu et al., 2021; Tang et al., 2021).
However, previous KD approaches to ST only explore the classical KD that transfers knowledge from prediction logits, which may not allow for sufficient knowledge distillation. Specifically, in classical KD (Hinton et al., 2015), two types of knowledge are encoded in prediction logits, target class knowledge from target class logits and nontarget class knowledge from non-target class logits.
Each type of knowledge contributes to the success of classical logits distillation. However, Zhao et al.
(2022) have found that the classical KD couples the non-target class knowledge with the target class knowledge. Such entanglement may inhibit the transfer of non-target class knowledge and limit the performance of logits knowledge distillation.
Additionally, due to the modality gap between speech and text, it might be difficult for E2E ST
to sufficiently capture and translate semantic information embedded in speech inputs to target translations. Fortunately, however, in MTL-based E2E ST,
a speech input is accompanied with its transcription that is used as the input fed into MT. Such speech and transcription pairs allow us to distill knowledge from transcription representations to speech representations so as to reduce the modality gap.
However, such knowledge distillation has not yet been explored for end-to-end speech translation.
In order to address these two issues and efficiently distill MT knowledge to ST, we propose
∗corresponding author.
a Comprehensive Knowledge Distillation framework for ST (**CKDST**). Specifically, we propose Cross-modal Contrastive Representation Distillation (CCRD) and Simultaneous Decoupled Knowledge Distillation (SDKD) as two essential approaches for CKDST, to transfering knowledge from text representations and to performing more sufficient logits distillation.
CCRD applies a contrastive training objective to force E2E ST to learn speech representations that are closer to their corresponding textual representations. In doing so, we could increase the mutual information lower bound between speech and text representations (Tian et al., 2019). **SDKD**
is proposed for E2E ST to mitigate the issue that the classical KD couples the target class knowledge with non-target class knowledge (Zhao et al., 2022).
For more effectively transferring logits knowledge from MT to ST, we decouple these two types of knowledge in prediction logits and extend the decoupled knowledge distillation to the MTL framework where both ST and MT are fine-tuned simultaneously.
In a nutshell, our contributions are three-fold.
- We propose CKDST for end-to-end ST, which can comprehensively and effectively transfer MT knowledge to ST in both the encoder and decoder.
- We introduce CCRD and SDKD in CKDST
to increase the mutual information between speech and text representations, and to decouple the non-target class knowledge from the target knowledge for more effective logits distillation, respectively.
- We conduct extensive experiments on the MuST-C benchmark dataset with four language pairs. Experiment results validate the effectiveness of the two approaches and demonstrate that our model outperforms previous best end-to-end and cascaded baselines.
## 2 Related Work
End-to-End Speech Translation. To alleviate the error propagation in cascaded ST and to ease the deployment, Bérard et al. (2016) and Weiss et al.
(2017) propose to use an end-to-end architecture to directly translate speech in one language into text in another, without using the intermediate transcriptions. In recent years, increasing efforts have been done in E2E ST (Di Gangi et al., 2019b; Liu et al., 2019; Wang et al., 2020b; Liu et al., 2020; Xu et al., 2021; Tang et al., 2021; Fang et al., 2022; Tang et al., 2022). Since the parallel speech translation data is notoriously limited, many approaches have been proposed to solve this problem, such as pre-training (Wang et al., 2020b; Xu et al., 2021; Tang et al., 2022), multi-task learning (Le et al.,
2020; Zhao et al., 2021; Ye et al., 2022), and data augmentation (Bahar et al., 2019; Lam et al., 2022).
Additionally, knowledge distillation from a well trained MT model to a ST model has proved effective in improving ST performance. Liu et al. (2019)
leverage knowledge distillation to allow the E2E
ST model to learn the same prediction distribution as the MT model. The MT model is frozen while the ST model is being trained. SATE (Xu et al.,
2021) uses both pre-trained ASR model and MT
model as teacher models to perform knowledge distillation. Each pre-trained model serves a different module of the ST model, and they are also frozen during training. Tang et al. (2021) propose the online-KD that simultaneously update the ST
module and the MT module in a multi-task learning framework. However, these efforts only distill the knowledge from prediction logits via classical KD, and the knowledge from encoder representations is ignored. In our work, we comprehensively and efficiently distill knowledge from both encoder representations and prediction logits of MT to ST.
Knowledge Distillation. The concept of knowledge distillation has been firstly proposed by Hinton et al. (2015). KD defines a learning framework where a stronger teacher network is employed to guide the training of a student network for many tasks (Kim and Rush, 2016; Li et al., 2017; Tan et al., 2019). The subsequent works can be roughly divided into two groups, distillation from prediction logits (Furlanello et al., 2018; Cho and Hariharan, 2019; Yang et al., 2019; Mirzadeh et al., 2020)
and intermediate representations (Yim et al., 2017; Huang and Wang, 2017; Heo et al., 2019; Park et al., 2019). Romero et al. (2014) explore intermediate representations for KD by using regressions to guide the feature activations of the student network.
Tian et al. (2019) apply a contrastive objective to maximize the mutual information lower bound between teacher representations and student representations. In contrast, DKD (Zhao et al., 2022)
decouples and amplifies student-friendly knowledge from prediction distribution to perform more
![2_image_0.png](2_image_0.png)
effective distillation. Our approaches are partially motivated by these previous efforts but are significantly different from them in two aspects. First, the knowledge gap in representations is due to different modalities rather than the model capability (teacher vs. student), so the inconsistency of modalities is also an important challenge we have to deal with in our distillation. Second, we don't freeze the teacher during distillation, we argue that this may allow the teacher model to adapt to provide student-friendly knowledge.
## 3 Ckdst
In this section, we first introduce the model architecture of CKDST and then elaborate the two knowledge distillation approaches in CKDST.
## 3.1 Model Architecture
CKDST adopts the encoder-decoder ST framework, as shown in Figure 1. It consists of four main components: speech encoder, text encoder, shared encoder and shared decoder, facilitating the joint training of the ST and MT task.
Speech Encoder is composed of non-finetuned wav2vec 2.0 (Baevski et al., 2020) followed by two layers of 1-D CNNs. It takes speech wavforms as input to obtain low-level speech representations.
Text Encoder is the normal word embedding layer, which is the same as the word embedding layer for text translation. It takes text as input for the MT
task.
Shared Encoder / Decoder adopt the standard Transformer (Vaswani et al., 2017) as their backbone network. The shared encoder takes outputs from both speech and text encoder as inputs to further extract semantic information. The shared decoder generates target translations for ST and MT. And, with shared parameters, the shared encoder and decoder are expected to learn the shared knowledge between ST and MT.
A training sample for E2E ST is a
(speech,transcript,translation) triplet (*s, t, y*). We use speech-translation pairs (*s, y*) as training data for ST, and transcript-translation pairs (*t, y*) as training data for MT. The cross-entropy loss is adopted for both ST and MT:
$$\begin{array}{l}{{{\mathcal L}_{\mathrm{ST}}=-\sum_{i=1}^{|y|}\log p(y_{i}|y_{<i},s)}}\\ {{{\mathcal L}_{\mathrm{MT}}=-\sum_{i=1}^{|y|}\log p(y_{i}|y_{<i},t)}}\end{array}\tag{1}$$
## 3.2 Cross-Modal Constrative Representation Distillation
Speech inputs are usually noisier than their textual counterpart transcripts (Tang et al., 2021), which makes the extraction of semantic information from speech difficult. Thus, we want to transfer semantic knowledge across modality, from text representations to speech representations. For this, we propose cross-modal contrastive representation distillation which employs a contrastive training objective to maximize the mutual information between text and speech representations. Due to the length difference between speech and text, we use sentencelevel representations for distillation. We apply average pooling on the output of the shared encoder in the time dimension to obtain sentence-level representations of speech and text.
Concretely, let T and S denote the sentencelevel source text representation (teacher representation) in MT and speech representation (student representation) in ST, respectively. Tian et al. (2019)
show that a lower bound of the mutual information (MI) between T and S exists when we have 1 positive pair (i.e., speech-transcript pair) and N
negative pairs (i.e., pairs of a speech with the rest of the transcripts in the same mini batch). The lower bound is estimated as follows:
$$\begin{array}{c}\mbox{MI}(\mathbf{T},\mathbf{S})\geq\log(N)+\mathbb{E}_{q(\mathbf{T},\mathbf{S})}[\log h(\mathbf{T},\mathbf{S})]\\ \qquad+N\mathbb{E}_{q(\mathbf{T})q(\mathbf{S})}[\log\left(1-h(\mathbf{T},\mathbf{S})\right)]\end{array}\tag{2}$$
where q(T ,S) (indicating positive pairs) and q(T )q(S) (indicating negative pairs) are the joint distribution and the product of marginal distributions of T and S, respectively. h(·) is a critic function that estimates the probability that the input pair
(T ,S) is drawn from q(T ,S).
We optimize our model to maximize the expectation terms in Eq. (2). In doing so, we force our model to learn a representation for S, which is semantically close to that of T , so as to optimize the mutual information between S and T . This can be considered as a procedure that distills knowledge from the representation of T to that of S.
Given a sentence-level text-speech representation pair (T ,S), the critic function h(·) calculates a score indicating the possiblity that (T ,S) is a positive pair (drawn from q(*T , S*)) or negative pair
(drawn from q(T )q(S)). We define the critic function h(T ,S) → [0, 1] as follows:
$$h(\mathbf{T},\mathbf{S})={\frac{e^{c o s(\mathbf{T},\mathbf{S})/\tau}}{1+e^{c o s(\mathbf{T},\mathbf{S})/\tau}}}\qquad\qquad(3)$$
where cos(·) is the cosine similarity and τ is a temperature hyper-parameter. With the critic probability h(T ,S), we can calculate the loss of CCRD:
$$\begin{array}{c}{{{\mathcal{L}}_{\mathrm{CCRD}}=-\left(\mathbb{E}_{q(\mathbf{T},\mathbf{S})}[\log h(\mathbf{T},\mathbf{S})]\right.}}\\ {{\left.+N\mathbb{E}_{q(\mathbf{T})q(\mathbf{S})}[\log\left(1-h(\mathbf{T},\mathbf{S})\right)]\right)}}\end{array}$$
$$\quad(4)$$
Different from Ye et al. (2022) who use contrastive learning to bridge the modality gap between speech and text, we use NCE loss (Gutmann and Hyvärinen, 2010) as the contrastive objective, and aim to maximize the lower bound of mutual information between speech and text representations.
## 3.3 Simultaneous Decoupled Knowledge Distillation
Previous efforts (Liu et al., 2019; Xu et al., 2021; Tang et al., 2021) calculate the KL-Divergence on prediction logits between ST and MT for logits distillation. However, this classical KD loss (Hinton et al., 2015) couples target class knowledge and non-target class knowledge by the confidence of the teacher model on the target class, and suppress the non-target class knowledge transfer which limits the effectiveness of logits distillation (Zhao et al., 2022). Therefore, we propose simultaneous decoupled knowledge distillation which decouples the non-target class knowledge from target class knowledge to allow more sufficient knowledge distillation than the classical KD.
Let p T i and p S i be the probabilities of MT and ST
for the i-th subword in the vocabulary V , respectively. The classical KD loss can be formulated as:
$${\mathcal{L}}_{\mathrm{KD}}=\sum_{i=1}^{|V|}p_{i}^{T}\log\left({\frac{p_{i}^{T}}{p_{i}^{S}}}\right)\qquad\qquad(5)$$
We use ptto denote the probability of the target subword. Correspondingly, the sum of the probabilities of the remaining non-target subwords is p\t = (1 − pt). Meanwhile, let pˆi be the probability of modeling on non-target subwords (i.e.,
without considering the target class), which can be calculated as:
$${\hat{p}}_{i}={\frac{\exp(z_{i})}{\sum_{j=1,j\neq t}^{|V|}\exp(z_{j})}}(i\neq t)\qquad\quad(6)$$
where z is the logit. Since pˆiis independent of the target subword probability pt, we assume that it represents the non-target class knowledge in prediction logits. Now, according to the above definitions, we can reformulate Eq. (5) as:
LKD = p T tlog p T t p S t + p T \t + (1 − p T t)X |V | | {z } TCK i=1,i̸=t pˆ T ilog pˆ | {z } NCK
![4_image_1.png](4_image_1.png)
![4_image_0.png](4_image_0.png)
The details of the reformulation can be found in Appendix A. Obviously, the non-target class knowledge (NCK) couples with the target class knowledge (TCK) with a coupling weight (1−p T
t). Thus larger prediction scores p T
t of the teacher model would lead to smaller coupling weights of NCK,
which significantly suppresses the transfer of NCK.
However, such suppression is not desirable since the more confident the teacher is, the more reliable and valuable knowledge it can provide, and since the contributions of NCK and TCK are from different aspects that should be considered separately
(Zhao et al., 2022). Therefore, we replace (1 − p T
t)
with a hyper-parameter β to decouple the TCK and NCK to control the importance of the two types of knowledge, separately, and the training objective of SDKD is calculated as follows1:
## Lsdkd = Tck + Βnck (8) 3.4 Training And Inference
We train our model in a pretraining-then-finetuning manner. We first pre-train the text encoder, shared encoder and decoder with MT data. Then during the fine-tuning phase, we jointly train ST, MT, CCRD and SDKD with ST data. The overall training objective is the combination of the four task losses:
$${\mathcal{L}}={\mathcal{L}}_{\mathrm{ST}}+{\mathcal{L}}_{\mathrm{MT}}+{\mathcal{L}}_{\mathrm{CCD}}+{\mathcal{L}}_{\mathrm{SDKD}}$$
Note that we do not freeze MT parameters (i.e.,
we still enable the gradient propagation of MT)
when distilling knowledge from representations and prediction logits. This is because we find that contiguously training MT parameters benefits ST
performance in our experiments (see Appendix B).
During inference, we remove the text encoder and use the remaining modules of CKDST for speech translation.
1We regard this as decoupling because the knowledge transfer of NCK is not controlled (i.e., weighted) by the probability of the target class any more. Instead, it is controlled by β.
| En | ST (MuST-C) | External MT | | |
|-------|---------------|---------------|--------|-------|
| hours | #sents | version | #sents | |
| De | 408 | 234K | WMT16 | 4.6M |
| Es | 504 | 270K | WMT13 | 15.2M |
| Fr | 492 | 280K | WMT14 | 40.8M |
| Ru | 489 | 270K | WMT16 | 2.5M |
## 4 Experiments
We compared with state-of-the-art E2E/cascaded ST models to examine the effectiveness of the proposed CKDST.
## 4.1 Datasets
ST Dataset We conducted experiments on the MuST-C2(Di Gangi et al., 2019a) benchmark dataset in four translation directions:
English-German (En-De), English-Spanish (EnEs), English-French (En-Fr) and English-Russian
(En-Ru). Each direction have around 400 hours speech. dev was used to develop and analyze our approaches, *tst-common* was used for testing.
External MT Data We followed previous works
(Tang et al., 2021; Ye et al., 2022) to use WMT3 datasets of different years as external MT data:
WMT 2016 for English-German and EnglishRussian, WMT 2014 for English-French and WMT
2013 for English-Spanish.
The statistics of MuST-C and WMT datasets are shown in Table 1.
## 4.2 Settings
Pre-processing We used 16-bit 16 kHz monochannel audio wave as speech input. And we removed utterances of which the duration is longer than 30s. For text inputs, we extracted 10K unigram subwords with a shared source and target vocabulary via SentencePiece4(Kudo and Richardson, 2018).
Model Configuration We used the base version of Wav2vec 2.05in the speech encoder, which is pretrained on audio data from LibriSpeech (Panayotov et al., 2015) without finetuning. Two layers of CNNs were stacked over Wav2vec 2.0 , where the kernel size was set to 5, stride size to 2 and hidden 2https://ict.fbk.eu/must-c/
3https://statmt.org/ 4https://github.com/google/
sentencepiece 5https://dl.fbaipublicfiles.com/
fairseq/wav2vec/wav2vec_small.pt
| Models | External Data | MuST-C | | | | | | |
|----------------------------------|-----------------|----------|-------|--------|-------|-------|------|------|
| Speech | ASR | MT | En-De | En-Es | En-Fr | En-Ru | AVG | |
| w/o external MT data | | | | | | | | |
| Fairseq ST (Wang et al., 2020a) | - | - | - | 22.7 | 27.2 | 32.9 | 15.3 | 24.5 |
| Espnet ST (Inaguma et al., 2020) | - | - | - | 22.8 | 27.4 | 33.3 | 15.6 | 24.8 |
| W-Transf (Ye et al., 2021) | ✓ | - | - | 23.6 | 28.4 | 34.6 | 14.4 | 25.3 |
| XSTNet (Ye et al., 2021) | ✓ | - | - | 25.5 | 29.6 | 36.0 | 16.9 | 27.0 |
| STEMM (Fang et al., 2022) | ✓ | - | - | 25.6 | 30.3 | 36.1 | 17.1 | 27.3 |
| ConST (Ye et al., 2022) | ✓ | - | - | 25.7 | 30.4 | 36.8 | 17.3 | 27.6 |
| MTL baseline | ✓ | - | - | 25.4 | 29.6 | 35.9 | 16.8 | 26.9 |
| Ours | ✓ | - | - | 26.4 | 30.9 | 37.3 | 17.7 | 28.1 |
| w/ external MT data | | | | | | | | |
| JT-S-MT (Tang et al., 2021) | - | - | ✓ | 26.8 | 31.0 | 37.4 | - | - |
| SATE (Xu et al., 2021) | - | ✓ | ✓ | 28.1 † | - | - | - | - |
| Chimera (Han et al., 2021) | ✓ | - | ✓ | 27.1 † | 30.6 | 35.6 | 17.4 | 27.7 |
| XSTNet (Ye et al., 2021) | ✓ | - | ✓ | 27.1 | 30.8 | 38.0 | 18.5 | 28.6 |
| STEMM (Fang et al., 2022) | ✓ | - | ✓ | 28.7 | 31.0 | 37.4 | 17.8 | 28.7 |
| ConST (Ye et al., 2022) | ✓ | - | ✓ | 28.3 | 32.0 | 38.3 | 18.9 | 29.4 |
| MTL baseline | ✓ | - | ✓ | 27.1 | 31.2 | 37.3 | 18.2 | 28.5 |
| Ours | ✓ | - | ✓ | 28.5 | 32.5 | 38.5 | 19.1 | 29.7 |
Table 2: BLEU scores of different models on the MuST-C *tst-common* set. "Speech" indicates unlabelled speech data. "MTL baseline" is the implemented strong baseline using the same architecture as our model, excluding CCRD
and SDKD. †denotes that large-scale Opensubtitles (Lison and Tiedemann, 2016) data are used as the external MT data.
size to 512. The shared encoder and decoder were configured with the base Transformer setting: 6 layers, 512 as hidden size, 8 attention heads, and 2048 as FFN hidden size.
Implementation Details We implemented our model based on fairseq toolkit.6 For experiments with external data, we used external MT data for pre-training. For those without any external data, only the MT data from the ST triplet data were considered. For fine-tuning, we used the same hyperparameters for experiments with/without external MT data. Particularly, we used the Adam optimizer wiht 25K warm-up updates. The learning rate was 1e-4. The maximal number of tokens was 0.8M
per batch. Both the dropout and the value of label smoothing were set to 0.1. We set the update frequency to 2. The temperature τ was 0.1 and the non-target class knowledge weight β was 4.0. We set the maximal number of updates to 200000, and used the early-stop training strategy if the performance did not improve for 10 consecutive validation runs. We trained all the models on 4 Nvidia TeslaV100 GPUs.
During inference, we averaged the checkpoints of the last 10 epochs for evaluation. We used beam search with a beam size of 10 and length penalty was 1.0. We evaluated case-sensitive detokenized BLEU and ChrF++ by sacreBLEU7(Post, 2018).
Additionally, we also evaluated translation quality with COMET (Rei et al., 2020), which leverages pretrained language models to achieve high correlations with human quality judgments. Specifically, we used COMET-22 (wmt22-COMET-da)8.
Baselines We compared our CKDST with multiple strong E2E ST baselines including: (1) Fairseq ST (Wang et al., 2020a) and (2) Espnet ST (Inaguma et al., 2020) trained only with the ST task data, (3) W-Transf (Ye et al., 2021) that uses a pretrained speech model to extract speech features,
(4) XSTNet (Ye et al., 2021) that trains the ST
model based on W-Transf in a multitask learning framework, (5) Chimera (Han et al., 2021) that learns a shared memory space to align speech and text, (6) STEMM (Fang et al., 2022) that mixes speech and text representations and (7) ConST (Ye et al., 2022) that applies contrastive learning to bridge the modality gap between speech and text,
(8) JT-S-MT (Tang et al., 2021) that employs an online-KD method to transfer knowledge from MT
MTL baseline 55.02 57.92 61.71 43.39 54.51 82.15 82.10 81.00 80.34 81.29
Ours 55.99 58.77 62.54 45.04 55.59 **82.67 82.69 81.55 82.38 82.32**
ChrF++ **COMET**
En-De En-Es En-Fr En-Ru AVG **En-De En-Es En-Fr En-Ru AVG**
Table 3: Results of ChrF++ and COMET for the four language pairs on the MuST-C benchmark dataset.
to ST and (9) SATE (Xu et al., 2021) that leverages an adapter to incorporate pre-trained ASR and MT models into E2E ST, and uses classical KD for knowledge transfer. In addition to these baselines, we implemented a strong baseline "MTL baseline" that uses the same neural architecture (excluding the proposed CCRD and SDKD) as our model to jointly train ST and MT.
| Models | En-De | En-Fr |
|-------------------------------|---------|---------|
| Cascaded | | |
| Espnet (Inaguma et al., 2020) | 23.6 | 33.8 |
| (Xu et al., 2021) | 28.1 | - |
| Cascaded baseline | 27.2 | 36.6 |
| End-to-end | | |
| Ours | 28.5 | 38.5 |
## 4.3 Main Results
Comparison to End-to-End Baselines. We compared our model with several strong baselines for four language pairs on the MuST-C benchmark dataset. Results are shown in Table 2. Without the external MT data, our model achieves a substantial improvement of 1.2 BLEU over the MTL baseline on average and outperforms the strongest baseline, ConST, in all translation directions. When we use the external MT data, we achieve new state-of-theart results in terms of the average BLEU score over the four translation directions and gain a 1.2 BLEU
improvement over the MTL baseline. These results demonstrate that our approaches are able to effectively improve ST with knowledge distillation.
Compared to previous works that explore knowledge distillation for E2E ST, we outperform JT-SMT (Tang et al., 2021) and SATE (Xu et al., 2021)
by 1.4 BLEU and 0.4 BLEU on average, respectively. This suggests that our proposed knowledge distillation approaches are more effective than previous KD methods used in E2E ST. To better evaluate our approach, we used ChrF++ and COMET,
which are more relevant to human evaluation, to assess our model. As shown in Table 3, our model achieves an average improvement of 1.08 ChrF++
and 1.03 COMET compared to the MLT baseline model.
Comparison to Cascaded Baselines. We also compared our end-to-end model with cascade baselines. Espnet (Inaguma et al., 2020) and the cascaded ST system presented by Xu et al. (2021) are two strong cascaded systems trained with MuSTC and external ASR and MT data (LibriSpeech, WMT, and Opensubtitles). We implemented a strong "Cascaded baseline" using the ASR data from the ST data and the same external MT data as ours. Its ASR module is the same as our speech encoder and was trained with the CTC loss. The MT module is a standard Transformer, trained with the traditional MT loss. As shown in Table 4, our implemented Cascaded baseline is competitive to the other two cascaded baselines. Impressively, our end-to-end model outperforms all cascaded baselines in all translation directions.
## 4.4 Ablation Study
To better evaluate the contribution of our proposed knowledge distillation approaches, we progressively removed the CCRD module and the SDKD
module to conduct ablation study on the MUST-C
benchmark. As shown in Table 5, without CCRD,
we get an average drop of 0.5 BLEU on all four translation directions. And, SDKD also contributes 0.6 BLEU on average on all translation directions.
These demonstrate the effectiveness of both approaches in enhancing ST.
## 5 Analysis
Additionally, we conducted a series of in-depth analyses to further investigate how the proposed methods improve E2E ST.
## 5.1 Does Ccrd Increase The Mutual Information?
The proposed CCRD distills knowledge from MT
to ST by optimizing the mutual information be-
| Ablation | MuST-C | | | |
|------------|----------|-------|-------|------|
| En-De | En-Es | En-Fr | En-Ru | |
| Ours | 28.5 | 32.5 | 38.5 | 19.1 |
| - LCCRD | 28.2 | 32.1 | 38.2 | 18.7 |
| - LSDKD | 28.0 | 31.9 | 37.9 | 18.5 |
![7_image_0.png](7_image_0.png)
tween text and speech representations. Mutual information (MI) can be represented by the degree of overlap between two distributions. Thus, we plot the bivariate kernel density estimation (Parzen, 1962) (KDE) contour of speech and text dimreduced representations to visualize their distributions as shown in Figure 2, where t-SNE (Van der Maaten and Hinton, 2008) is used to reduce the dimension of representations into 2D. As shown in Figure 2(a), without CCRD, the overlap of the speech representation distribution and the text representation distribution is small. This shows that even with the shared encoder, the distributions of representations from the two modalities have very low MI. In contrast, when we apply CCRD, the distribution of speech representations and the distribution of text representations almost overlap. This indicates our proposed CCRD can significantly improve the MI between the two representation distributions.
## 5.2 Is Sdkd Better Than Classical Kd?
As discussed in Section 3.3, the classical KD suppresses the knowledge of non-target classes, which limits its performance. To verify this, we conducted experiments on the MuST-C benchmark to compare the effects of SDKD and classical KD. In order to
| Loss | MuST-C | | | |
|--------|----------|-------|-------|------|
| En-De | En-Es | En-Fr | En-Ru | |
| LSDKD | 28.2 | 32.1 | 38.2 | 18.7 |
| LKD | 27.9 | 31.6 | 38.1 | 18.3 |
eliminate the interference of other factors, we did not apply CCRD during training. The loss function LKD of the classical KD is estimated according to Eq. (5). During training, LKD is interpolated with the primary loss (i.e., ST loss) with weight α (Hinton et al., 2015). Therefore, the training objective for E2E ST with the classical KD is:
$${\cal L}=(1-\alpha){\cal L}_{\rm ST}+\alpha{\cal L}_{\rm KD}+{\cal L}_{\rm MT}\tag{10}$$
We followed Tang et al. (2021) to set α to 0.8. As shown in Table 6, SDKD outperforms the classical KD on all translation directions and achieves an average improvement of 0.4 BLEU. This indicates that separately exploring the target and non-target class knowledge is better than the coupled form.
## 5.3 Impact Of The Non-Target Class Knowledge Weight
For SDKD, it is important to choose an appropriate non-target class knowledge weight β. To understand the impact of β, we employed a grid search from [0, 8] to search desirable β with a stride of 2 on the MuST-C En-De dev set. Results are shown in Figure 3. The orange dashed line indicates the baseline model which uses the classical KD during training. If β = 0, it indicates that the nontarget class knowledge is ignored when distilling knowledge from prediction logits. Compared with the classical KD baseline, the model performance drops significantly if we ignore the non-target class knowledge. This suggests that the non-target class knowledge is important and useful. The curve with varying β clearly shows that the performance of the model first increases and then drops as β increases. We achieve the best BLEU score when β = 4. This indicates that appropriately increasing the importance of the non-target class knowledge is beneficial for knowledge distillation, but too large weights would undermine the performance of the model.
![8_image_0.png](8_image_0.png)
## 5.4 Impact Of The Performance Of The Pre-Trained Mt Model
Our proposed approaches aim to effectively distill knowledge from MT to ST, thus the pre-trained MT performance is of importance to our model. In order to study the impact of MT performance on our model, we randomly sample 1M, 2M and 3M
MT data from the external MT data to pre-train the MT model so as to have different MT models with varying performance. When the size of external MT data is 0, we use the MT data from the ST triplet to pre-train the MT model. Results are shown in Figure 4. We observe that as the performance of the pre-trained MT model improves, the BLEU score of our model also keeps improving.
This demonstrates that our approaches benefit from strong pre-trained MT models.
## 6 Conclusion
In this paper, we have presented CKDST, which comprehensively and effectively distills the knowledge of MT to boost the performance of E2E ST
through two key approaches: CCRD and SDKD.
The former leverages a contrastive objective to maximize the mutual information lower bound between speech and text representations for representation knowledge distillation. The later reformulates the classical KD loss to decouple the target class knowledge and the non-target class knowledge for more effective logits knowledge distillation. Our experiments strongly demonstrate that our approaches are able to significantly improve E2E ST and achieve new state-of-the-art results on the MUST-C benchmark dataset.
![8_image_1.png](8_image_1.png)
## Limitations
Although the proposed CKDST distills the knowledge of MT more comprehensively and efficiently from encoder representations and prediction logits, and obtains significant improvements over previous methods, it still has limitations: (1) The batch size is not very large, limited by the memory capacity of the used hardware and the extremely long sequence length of speech inputs, which leads to a small number of negative samples used in CCRD and does not fully exploit the ability of contrastive learning.
In future work, we attempt to expand the negative sample size using a mechanism like memory bank
(He et al., 2020). (2) As we distill knowledge from MT to ST, the performance of the pretrained MT
model has an impact on our framework.
## Ethics Statement
This work presents CKDST, a knowledge distillation framework for ST to more comprehensively and effectively distill knowledge from MT to improve the performance of E2E ST. The datasets used in this study include both MuST-C and WMT.
They are all public datasets and are widely used in the MT community.
## Acknowledgments
The present research was supported by the Natural Science Foundation of Xinjiang Uygur Autonomous Region (No. 2022D01D43). We would like to thank the anonymous reviewers for their insightful comments.
## References
Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, and Furu Wei.
2022. SpeechT5: Unified-modal encoder-decoder pre-training for spoken language processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5723–5738, Dublin, Ireland. Association for Computational Linguistics.
Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations.
Advances in Neural Information Processing Systems, 33:12449–12460.
Parnia Bahar, Albert Zeyer, Ralf Schlüter, and Hermann Ney. 2019. On using specaugment for end-to-end speech translation. *arXiv preprint arXiv:1911.08876*.
Alexandre Bérard, Olivier Pietquin, Christophe Servan, and Laurent Besacier. 2016. Listen and translate: A
proof of concept for end-to-end speech-to-text translation. *arXiv preprint arXiv:1612.01744*.
Jang Hyun Cho and Bharath Hariharan. 2019. On the efficacy of knowledge distillation. In *Proceedings of* the IEEE/CVF international conference on computer vision, pages 4794–4802.
Mattia A Di Gangi, Roldano Cattoni, Luisa Bentivogli, Matteo Negri, and Marco Turchi. 2019a. Must-c: a multilingual speech translation corpus. In *2019* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2012–2017. Association for Computational Linguistics.
Mattia A Di Gangi, Matteo Negri, and Marco Turchi.
2019b. One-to-many multilingual end-to-end speech translation. In *2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)*, pages 585–592. IEEE.
Long Duong, Antonios Anastasopoulos, David Chiang, Steven Bird, and Trevor Cohn. 2016. An attentional model for speech translation without transcription.
In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 949–959.
Qingkai Fang, Rong Ye, Lei Li, Yang Feng, and Mingxuan Wang. 2022. STEMM: Self-learning with speech-text manifold mixup for speech translation. In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 7050–7062, Dublin, Ireland.
Association for Computational Linguistics.
Tommaso Furlanello, Zachary Lipton, Michael Tschannen, Laurent Itti, and Anima Anandkumar. 2018.
Born again neural networks. In *International Conference on Machine Learning*, pages 1607–1616.
PMLR.
Michael Gutmann and Aapo Hyvärinen. 2010. Noisecontrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 297–304. JMLR
Workshop and Conference Proceedings.
Chi Han, Mingxuan Wang, Heng Ji, and Lei Li. 2021.
Learning shared semantic space for speech-to-text translation. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 2214–2225, Online. Association for Computational Linguistics.
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. 2020. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9729–9738.
Byeongho Heo, Jeesoo Kim, Sangdoo Yun, Hyojin Park, Nojun Kwak, and Jin Young Choi. 2019. A comprehensive overhaul of feature distillation. In *Proceedings of the IEEE/CVF International Conference on* Computer Vision, pages 1921–1930.
Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. 2015.
Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2(7).
Zehao Huang and Naiyan Wang. 2017. Like what you like: Knowledge distill via neuron selectivity transfer. arXiv preprint arXiv:1707.01219.
Hirofumi Inaguma, Shun Kiyono, Kevin Duh, Shigeki Karita, Nelson Enrique Yalta Soplin, Tomoki Hayashi, and Shinji Watanabe. 2020. Espnet-st: Allin-one speech translation toolkit. *arXiv preprint* arXiv:2004.10234.
Yoon Kim and Alexander M Rush. 2016. Sequencelevel knowledge distillation. arXiv preprint arXiv:1606.07947.
Taku Kudo and John Richardson. 2018. SentencePiece:
A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66–71, Brussels, Belgium.
Association for Computational Linguistics.
Tsz Kin Lam, Shigehiko Schamoni, and Stefan Riezler.
2022. Sample, translate, recombine: Leveraging audio alignments for data augmentation in end-toend speech translation. In *Proceedings of the 60th* Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 245–
254, Dublin, Ireland. Association for Computational Linguistics.
Hang Le, Juan Pino, Changhan Wang, Jiatao Gu, Didier Schwab, and Laurent Besacier. 2020. Dual-decoder transformer for joint automatic speech recognition
and multilingual speech translation. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 3520–3533, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Quanquan Li, Shengying Jin, and Junjie Yan. 2017.
Mimicking very efficient network for object detection.
In Proceedings of the ieee conference on computer vision and pattern recognition, pages 6356–6364.
Pierre Lison and Jörg Tiedemann. 2016. OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles. In *Proceedings of the Tenth* International Conference on Language Resources and Evaluation (LREC'16), pages 923–929, Portorož, Slovenia. European Language Resources Association
(ELRA).
Yuchen Liu, Hao Xiong, Zhongjun He, Jiajun Zhang, Hua Wu, Haifeng Wang, and Chengqing Zong. 2019.
End-to-end speech translation with knowledge distillation. *arXiv preprint arXiv:1904.08075*.
Yuchen Liu, Junnan Zhu, Jiajun Zhang, and Chengqing Zong. 2020. Bridging the modality gap for speechto-text translation. *arXiv preprint arXiv:2010.14920*.
Seyed Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, Nir Levine, Akihiro Matsukawa, and Hassan Ghasemzadeh. 2020. Improved knowledge distillation via teacher assistant. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 5191–5198.
Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: an asr corpus based on public domain audio books. In *2015* IEEE international conference on acoustics, speech and signal processing (ICASSP), pages 5206–5210.
IEEE.
Wonpyo Park, Dongju Kim, Yan Lu, and Minsu Cho.
2019. Relational knowledge distillation. In *Proceedings of the IEEE/CVF Conference on Computer* Vision and Pattern Recognition, pages 3967–3976.
Emanuel Parzen. 1962. On estimation of a probability density function and mode. *The annals of mathematical statistics*, 33(3):1065–1076.
Matt Post. 2018. A call for clarity in reporting BLEU
scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186–
191, Brussels, Belgium. Association for Computational Linguistics.
Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT
evaluation. In *Proceedings of the 2020 Conference* on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics.
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. 2014. Fitnets: Hints for thin deep nets.
arXiv preprint arXiv:1412.6550.
Xu Tan, Yi Ren, Di He, Tao Qin, Zhou Zhao, and TieYan Liu. 2019. Multilingual neural machine translation with knowledge distillation. arXiv preprint arXiv:1902.10461.
Yun Tang, Hongyu Gong, Ning Dong, Changhan Wang, Wei-Ning Hsu, Jiatao Gu, Alexei Baevski, Xian Li, Abdelrahman Mohamed, Michael Auli, and Juan Pino. 2022. Unified speech-text pre-training for speech translation and recognition. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 1488–1499, Dublin, Ireland. Association for Computational Linguistics.
Yun Tang, Juan Pino, Xian Li, Changhan Wang, and Dmitriy Genzel. 2021. Improving speech translation by understanding and learning from the auxiliary text translation task. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4252–4261, Online. Association for Computational Linguistics.
Yonglong Tian, Dilip Krishnan, and Phillip Isola. 2019.
Contrastive representation distillation. arXiv preprint arXiv:1910.10699.
Laurens Van der Maaten and Geoffrey Hinton. 2008.
Visualizing data using t-sne. Journal of machine learning research, 9(11).
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc.
Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, and Juan Pino. 2020a. fairseq s2t: Fast speech-to-text modeling with fairseq. *arXiv* preprint arXiv:2010.05171.
Chengyi Wang, Yu Wu, Shujie Liu, Zhenglu Yang, and Ming Zhou. 2020b. Bridging the gap between pretraining and fine-tuning for end-to-end speech translation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 9161–9168.
Ron J Weiss, Jan Chorowski, Navdeep Jaitly, Yonghui Wu, and Zhifeng Chen. 2017. Sequence-to-sequence models can directly translate foreign speech. *arXiv* preprint arXiv:1703.08581.
Chen Xu, Bojie Hu, Yanyang Li, Yuhao Zhang, Shen Huang, Qi Ju, Tong Xiao, and Jingbo Zhu. 2021.
Stacked acoustic-and-textual encoding: Integrating the pre-trained models into speech translation encoders. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics*
and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 2619–2630, Online. Association for Computational Linguistics.
Chenglin Yang, Lingxi Xie, Chi Su, and Alan L Yuille.
2019. Snapshot distillation: Teacher-student optimization in one generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2859–2868.
Rong Ye, Mingxuan Wang, and Lei Li. 2021. End-toend speech translation via cross-modal progressive training. *arXiv preprint arXiv:2104.10380*.
Rong Ye, Mingxuan Wang, and Lei Li. 2022. Crossmodal contrastive learning for speech translation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5099–5113, Seattle, United States. Association for Computational Linguistics.
Junho Yim, Donggyu Joo, Jihoon Bae, and Junmo Kim.
2017. A gift from knowledge distillation: Fast optimization, network minimization and transfer learning.
In *Proceedings of the IEEE conference on computer* vision and pattern recognition, pages 4133–4141.
Biao Zhang, Ivan Titov, Barry Haddow, and Rico Sennrich. 2020. Adaptive feature selection for end-toend speech translation. In *Findings of the Association* for Computational Linguistics: EMNLP 2020, pages 2533–2544, Online. Association for Computational Linguistics.
Borui Zhao, Quan Cui, Renjie Song, Yiyu Qiu, and Jiajun Liang. 2022. Decoupled knowledge distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11953–11962.
Jiawei Zhao, Wei Luo, Boxing Chen, and Andrew Gilman. 2021. Mutual-learning improves end-toend speech translation. In *Proceedings of the 2021* Conference on Empirical Methods in Natural Language Processing, pages 3989–3994, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
## A Reformulation Details
In Sec 3.3, we define the classical KD loss as follows:
$$\begin{split}\mathcal{L}_{KD}&=\sum_{i=1}^{|V|}p_{i}^{T}\log\left(\frac{p_{i}^{T}}{p_{i}^{S}}\right)\\ &=p_{t}^{T}\log\left(\frac{p_{t}^{T}}{p_{t}^{S}}\right)+\sum_{i=1,i\neq t}^{|V|}p_{i}^{T}\log\left(\frac{p_{i}^{T}}{p_{i}^{S}}\right)\end{split}\tag{11}$$
According to the definition of p\t and pˆiin Sec 3.3, we can reformulate Eq. (11) to:
i=1,i̸=t p T \tpˆ T ilog p T \tpˆ T i p S \t pˆ S i ! LKD =p T tlog p T t p S t +X |V | =p T tlog p T t p S t i=1,i̸=t p T \tpˆ T i log pˆ T i pˆ S i + log p T \t p S \t !! +X |V | =p T tlog p T t p S t +X |V | i=1,i̸=t p T \tpˆ T ilog pˆ T i pˆ S i i=1,i̸=t p T \tpˆ T ilog p T \t p S \t ! +X |V | (12) Since p T and p S are irrelevant to the class index
\t
\t
i, we have:
$$\sum_{i=1,i\neq t}^{|V|}p_{\backslash t}^{T}\hat{p}_{i}^{T}\log\left(\frac{p_{\backslash t}^{T}}{p_{\backslash t}^{S}}\right)=p_{\backslash t}^{T}\log\left(\frac{p_{\backslash t}^{T}}{p_{\backslash t}^{S}}\right)\sum_{i=1,i\neq t}^{|V|}\hat{p}_{i}^{T}\tag{13}$$ Moreover, $\sum_{i=1,i\neq t}^{|V|}\hat{p}_{i}^{T}=1$, so: $$\sum_{i=1,i\neq t}^{|V|}p_{\backslash t}^{T}\hat{p}_{i}^{T}\log\left(\frac{p_{\backslash t}^{T}}{p_{\backslash t}^{S}}\right)=p_{\backslash t}^{T}\log\left(\frac{p_{\backslash t}^{T}}{p_{\backslash t}^{S}}\right)\tag{14}$$
Bringing Eq. (14) back to Eq. (12), we have:
LKD =p T tlog p T t p S t + p T \tlog p T \t p S \t ! +p T \tX |V | i=1,i̸=t pˆ T ilog pˆ T i pˆ S i (15) =p T tlog p T t p S t + p T \tlog p T \t p S \t ! +(1 − p T t )X |V | i=1,i̸=t pˆ T ilog pˆ T i pˆ S i
## B Whether To Freeze Mt Parameters
Knowledge distillation usually freezes the teacher model (i.e., the gradient propagation of the teacher model is disable). We assume that this is because the teacher model is not supervised by the primary
| Methods | MT | MuST-C En-De |
|-----------|------|----------------|
| CCRD | ✓ | 27.7 |
| ✗ | 28.0 | |
| SDKD | ✓ | 28.2 |
| ✗ | 28.2 | |
loss and freezing the teacher model prevents it from being degraded by the student model. Howerver, our model is trained on ST and MT, simultaneously.
The teacher knowledge can be preserved by the auxiliary MT task. Moreover, we assume that not freezing teacher knowledge during knowledge distillation can make it more student-friendly. To investigate this, we conducted experiments on MustC En-De. Results are shown in Table 7. When we freeze MT in CCRD, we find the performance drops 0.3 BLEU. In SDKD, there is no difference in the performance of freezing MT or not. In general, not freezing MT when performing knowledge distillation is more suitable for our model.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
limitations
✓ A2. Did you discuss any potential risks of your work?
limitations
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 2
✓ B1. Did you cite the creators of artifacts you used?
3
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Appendix D
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
3.2
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
3.2
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Appendix C
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix C
## C ✗ **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? No response.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
No response.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
No response.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
No response.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
wein-etal-2023-follow | Follow the leader(board) with confidence: Estimating p-values from a single test set with item and response variance | https://aclanthology.org/2023.findings-acl.196 | Among the problems with leaderboard culture in NLP has been the widespread lack of confidence estimation in reported results. In this work, we present a framework and simulator for estimating p-values for comparisons between the results of two systems, in order to understand the confidence that one is actually better (i.e. ranked higher) than the other. What has made this difficult in the past is that each system must itself be evaluated by comparison to a gold standard. We define a null hypothesis that each system{'}s metric scores are drawn from the same distribution, using variance found naturally (though rarely reported) in test set items and individual labels on an item (responses) to produce the metric distributions. We create a test set that evenly mixes the responses of the two systems under the assumption the null hypothesis is true. Exploring how to best estimate the true p-value from a single test set under different metrics, tests, and sampling methods, we find that the presence of response variance (from multiple raters or multiple model versions) has a profound impact on p-value estimates for model comparison, and that choice of metric and sampling method is critical to providing statistical guarantees on model comparisons. | # Follow The Leader(Board) With Confidence: Estimating P**-Values From A Single Test Set With Item And Response Variance**
Shira Wein Georgetown Univ.∗
[email protected] Christopher M. Homan Rochester Inst. Tech.
[email protected] Lora Aroyo and **Chris Welty**
Google Research
{l.m.aroyo,cawelty}@gmail.com
## Abstract
Among the problems with leaderboard culture in NLP has been the widespread lack of confidence estimation in reported results. In this work, we present a framework and simulator for estimating p-values for comparisons between the results of two systems, in order to understand the confidence that one is actually better
(i.e. ranked higher) than the other. What has made this difficult in the past is that each system must itself be evaluated by comparison to a gold standard. We define a null hypothesis that each system's *metric scores* are drawn from the same distribution, using variance found naturally (though rarely reported) in test set items and individual labels on an item (responses) to produce the metric distributions. We create a test set that evenly mixes the responses of the two systems under the assumption the null hypothesis is true. Exploring how to best estimate the true p-value from a single test set under different metrics, tests, and sampling methods, we find that the presence of response variance
(from multiple raters or multiple model versions) has a profound impact on p-value estimates for model comparison, and that choice of metric and sampling method is critical to providing statistical guarantees on model comparisons.
## 1 **Introduction**
AI and NLP evaluation is facing a scientific reproducibility crisis that, despite increasing awareness, continues to worsen (Gundersen and Kjensmo, 2018). Published results may often show only epsilon improvements to state-of-the-art results, with no effort to estimate whether or not the results are statistically significant. The reasons for this crisis are complex, and it is easy to implicate the culture created by leaderboards (e.g., Wang et al. (2018)).
Our work is motivated by the need to **provide**
statistical testing alongside NLP results in order to reliably demonstrate model improvement, as opposed to solely depending on leaderboards.
Our work naturally ensues from studies of rater response disagreement (see e.g., (R Artstein; Snow et al., 2008; L Aroyo, 2013; Plank et al., 2014; Fornaciari et al., 2022), among others). Further, the issue of insufficient statistical analysis in NLP
work is well-documented, with many ACL papers not reporting statistical significance (Dror et al.,
2018). Considering the reliance on system comparison for benchmarking and leaderboards, statistical guarantees that consider the performance of both systems are critical, yet understudied.
Statistical tests for paired data (e.g. McNemar
(1947)) are not appropriate for this setting because of their reliance on strong assumptions about the data (Dietterich, 1998b); even extensions of McNemar's test such as the Cochran-Mantel-Haenszel test (Mantel, 1963) only apply when the metric can be applied independently to each item or responder (human or machine), and is then aggregated.
Therefore, these metrics are not applicable for this use case, in large part due to three potential challenges: (1) three sets of data are involved in this comparison, (2) there is variance in all three of those sets, and (3) many different metrics are used in NLP evaluation. Moreover, variance can come at the item or response level, due to stochastic inference or training, changes in training data such as cross-validation, or annotator disagreement in gold labels.
We investigate the use of null hypothesis significance tests (NHST) to add a dimension of confidence to NLP evaluations. The purpose of NHST
is to determine whether differences between multiple sets of observations are significant, after accounting for sampling variance in the observations.
When comparing two NLP systems, each is first compared to a gold standard, resulting in some metric score (e.g. BLEU (Papineni et al., 2002)), and then those metric scores for the two models are compared to each other. While all p-values are esti-
∗Work completed while interning at Google.
3138 mates, there are many ways to sample and measure the results from a single test set, each producing a different p-value estimate. We explore how to determine which method (of sampling, aggregating, and measuring responses) produces the most accurate p-value estimate from a single test set in comparison to the true/ground truth p-value.
In this work, we present a framework for effective p-value estimation when comparing two systems against gold judgments, with the aim of identifying with statistical rigor if one system outperforms the other. Our findings indicate that the amount of response variance has an impact on pvalue estimates, item-wise mean absolute error is consistently a reliable metric, and—while most metrics and sampling methods perform well when machine output is dissimilar—metric choice and sampling method is especially critical when the performance of the two machines is similar.
Our primary contributions include:
- combining, for the first time, the related notions of response disagreement from machines
(Gorman and Bedrick, 2019) and from raters
(Aroyo and Welty, 2015);
- a new framework for NHST that allows comparisons across different test metrics and sampling strategies;
- a simulator capable of producing informative null hypotheses and computing p-values that account for both item and response variance,
- a thorough evaluation of how well eight metrics and six re-sampling strategies estimate the
"true" p-value from a single test, on simulated data; and
- a demonstration of our framework on realworld data.
Our findings give insight into which statistics are most informative when designing NHSTs for contemporary NLP systems, and is applicable to any NLP setting that makes use of comparisons to quantitative gold judgments (e.g. sentiment analysis, semantic similarity, search relevance, translation quality, etc.), when response variance is prevalent.
We plan to share our code upon publication.
## 2 **Related Work**
Our approach to generating a statistical guarantee associated with the comparison of two NLP systems (a p-value) is rooted in the statistical inference method of NHST. Our formulation also incorporates variance in rater and system responses.
## 2.1 **Nhst For Evaluation**
Existing notions of p-values are built on a null hypothesis H0 which states that the effect size between the control and test set is zero. The p-value is then the probability that an effect of the observed size or greater would occur under the assumption that H0 is true. Here, the "control" and "test" sets are the outputs of distinct models that we wish to compare, and the effect size represents the performance of the first system compared to the second on gold standard data.
Dietterich (1998a) considers hypothesis testing on machine learning problems (specifically comparing the performance of two learning algorithms with a small amount of data), but does not consider response variance or accuracy of the p-value estimate. Our approach builds on Dietterich's (1998a);
we also observe that the standard null hypotheses do not quite fit the use case of comparing the output of two systems, since the error is the result of a comparison with a third, gold standard, dataset, and we investigate the effect of different sources of variance, as well as different metrics, on the p-value estimate from a single test set.
Søgaard et al. (2014) explore the effects of sample size, covariates (such as sentence length), and the variance introduced by multiple metrics, and conclude that current approaches to p-value tests are not reproducible or sufficient. They suggest that the usual upper bound of p < 0.05 is too high, and that p < 0.0025 provides a better guarantee that the *false positive rate* is less than 5%. One problem faced in coming to this conclusion was how to determine what the correct p-value actually is. Note that they use the false positive rate as the target of the guarantee, which is an intuitive but completely non-standard approach to hypothesis testing. We address this by utilizing a simulator that is capable of generating thousands of test sets, which then allows us to make a better estimate as to the true p-value, and compare the effects of many more sources of variance.
Related work has surveyed statistical significance testing techniques in NLP systems (Dror et al., 2020) and studied permutation and bootstrapping methods for computing significance tests and confidence intervals on text summarization evaluation metrics (Deutsch et al., 2021). Haroush et al.
(2021) observe that out of distribution detection can be recast as a p-value problem, using p-values for inference, not significance testing.
Prior work critical of the utility of the p-value cites the impact of sample size and bias on the level of significance (Sullivan and Feinn, 2012; Thiese et al., 2016), as well as the variability of p-values across samples (Halsey et al., 2015).
Kohavi et al. (2022) examine the misunderstandings and errors related to statistics reported on A/B
test experiments, including the erroneous perception that p-value indicates the chance of a false positive. Kohavi et al. (2022) suggest that p-values are widely inaccurately applied even by experts and that intentional efforts need to be made to report meaningful statistical measures.
Though there is some criticism of the use of p-values, we propose that they can be useful in bridging the lack of confidence estimation in NLP
system evaluations. Further, we aim to address the effect of variability across samples by using a large number of samples to determine the best approach to p-value generation.
Null hypothesis statistical testing alone as a method for significance testing also does not lead to reproducibility, due to the use in evaluation of inconsistent train-test splitting (Gorman and Bedrick, 2019). We address this as well in our approach by incorporating response variance, discussed in more detail below.
## 2.2 **Response Variance**
For each item in a test set, a human rater can provide a *response*, such as a class label or a Likert scale. Prior work indicates the importance of eliciting such responses from multiple raters per item, to account for ambiguity and different perspectives
(L Aroyo, 2014; Uma et al., 2021). Regardless of the task, gathering multiple responses results in disagreement. Machine systems also provide a response for each item, and these responses can vary with stochastic training conditions, hyperameter changes, cross-validation, and other causes.
System variance can be incorporated into model prediction by merging answers rather than simply ranking (Gondek et al., 2012).
Response variance may be indicative of true features of the data and thus be incorporated into the model (Reidsma and Carletta, 2008). Recent work has indicated that taking a majority vote aggregation may not be effective at resolving/incorporating annotator variance (Davani et al., 2022; Barile et al., 2021).
Prior work has explored the role of variance and data collection in metrics on human annotated datasets (Welty et al., 2019). Homan et al. (2022)
provides a framework for analyzing the amount of variance, and types of disagreement, in crowdsourced datasets. Wong et al. (2021) addresses the variance in crowdsourced annotations by presenting a more contextualized measure of inter-rater reliability based on Cohen's kappa. Bayesian models of annotation have also been used and evaluated as potential methods for identifying annotator accuracy and item difficulty (Paun et al., 2018).
Recent work has also considered incorporating logical justifications of human viewpoints as a twodimensional judgment (Draws et al., 2022).
Our simulator produces scores with variance according to different distributions (specified as hyperparameters), allowing us to include response variance in our evaluation.
## 3 **Evaluation Framework** 3.1 **Problem Formulation**
Comparing two NLP systems often involves measuring a baseline B and a candidate A against gold judgments G, to determine whether A is an improvement over B. This comparison is made using a *metric* δ run over a *test set* that is drawn from a population of data. For each item i in the test set, both A and B have a distribution of responses Ai and Bi, and it is possible to have multiple responses for each item. In addition, due to rater disagreement, there is a distribution of human responses, Gi. The metric δ compares each system's responses to the human responses and produces a pair of metric scores, δ(A,G) and δ(B,G). Finally, the per-system metric scores are compared to each other so that when δ(A,G) > δ(B,G) we can say A is an improvement over B.
The *null hypothesis*, denoted H0, is that the two sets of responses being compared (i.e. Ai, j and Bi, j, where i is an item and and j is a responses for a given item) *are drawn from the same distribution*.
This is compared against an *alternative hypothesis*,
denoted H1, that Ai, j and Bi, j are true to the underlying distributions from which A, B, and G were drawn, and therefore that the comparison δ(A,G)
and δ(B,G) is a fair representation of the comparison between A and B. We aim to provide a p-value for this comparison.
By contrast, in the vast majority of NHST settings, A and B are sets of individual responses and there is no notion of variance in i once it is drawn; the only source of variance comes from the sampling of the items. For simple test statistics like mean, a closed-form estimate such as a paired t-test
(Student, 1908) will suffice.
However, many metrics used in NLP are not amenable to such closed-form estimates, and the presence of response-level variance means that even many simple metrics cannot be reliably estimated in closed form. Therefore, it is necessary to rely on resampling methods, such as bootstrapping or permutation sampling, to estimate p-values.
Here we focus on bootstrapping variants, where variance is estimated by resampling from a dataset with replacement.
Usually, the most important design issue in NHST is whether the sample has enough statistical power to detect a difference between A and B when one exists; in our setting, there are two equally fundamental questions: what approach to resampling to use in order to estimate variance, and what metric to use for reliably estimating p-values. These design issues led us to the following research questions:
RQ1. Can response-level variance be used to estimate p-values?
RQ2. What method of sampling response variance generates the most accurate p-value?
RQ3. What metrics generate the most accurate p-value?
RQ4. How sensitive are the measurements as two systems' responses draw closer to each other?
## 3.2 **Simulator**
To produce and analyze p-value estimates from a test set, we built a simulator that operates in three stages. The main idea is to sample a *reference test* set from a known, fixed underlying distribution of items and responses and use a resampling method to estimate the p-value of that test set. Then, we use the same underlying distribution to directly estimate the "true" p-value of the distribution.
## 3.2.1 **Generating Reference Test Data**
First, we generate the reference test data, described in detail in Algorithm 3 in appendix A. The reference test set consists of samples for ground truth
(G
ref) and two NLP systems (A
ref and B
ref). Each sample has N items and K responses per item.
The responses are continuous values in the interval [0,1]. To construct G
ref, for each item i we sample a mean µi and standard deviation σi from specific uniform distributions. Then, we sample K responses from a normal distribution parameterized by µi and σi. For A
ref and B
ref, we use the same sample of means and standard deviations as for G
ref, but with µi replaced by µi +ε i X
, where ε i X
is chosen uniformly at random over the interval
[−εX ,εX ] for X ∈ {A,B}, respectively. This process makes the items in the three sets the same, while keeping the responses in each set independent (conditioned on each item i) where the magnitudes of difference in the response distributions are parameterized by εA and εB.
## 3.2.2 **Sampling For Comparative Hypothesis** Testing
Next, we simulate each of the sampling strategies on items and responses. Algorithm 1 describes this process in detail. It takes hyperparameters that specify the item and response *sampling strategies*,
respectively (described in §4.1). Here, it is only important to note that our sampling strategies provide rules for resampling from a dataset, such as sample the items, take all responses or sample the items, then sample from the responses for each item.
Algorithm 1 is actually used twice: once for the data needed to estimate the p-value based on the reference test set and once for the true p-value. In each case it produces data supporting H0 and H1.
For the reference test set (rts) H1, we construct three samples corresponding to A
ref B
ref and G
ref by resampling from each according to the given sampling strategy.
For the reference test set H0, we reuse the sample of G
ref constructed for H1. For A
ref and B
ref, we operate under the H0 assumption that they are drawn from the same underlying distribution (when in fact they were drawn from similar distributions, perturbed according to εA and εB). We do this by first combining A
ref and B
ref into a single set A
ref∣B
ref, where each item i in the combined set has all of the responses from both A
ref iand B
ref i.
We sample responses for each of A
ref and B
ref by sampling from A
ref∣B
ref.
For the true p-value (true) H1, Algorithm 1 constructs the samples corresponding to each of A, B
and G by ignoring A
ref, B
ref, and G
ref and instead sampling directly from the underlying distribution described in §3.2.1. For the H0 data, we use the same underlying distribution, except that in order to operate under the H0 assumption that A
ref and B
ref are drawn from the same distribution, each item i and response for each of A and B (the process is unchanged for G) is sampled by first uniformly drawing X ∼ {A,B} and then sampling from the normal distribution parameterized by (µi +ε i X
,σi).
## 3.2.3 **Applying Hypothesis Tests To** (Sub)Sampled Distributions
Finally, for each of the reference test sets and the true distribution, we sample from the distribution M times and feed the output to Algorithm 2, which estimates p-values with respect to a given metric.
## Algorithm 1 Sample
Input parameters G,A,B: pointers to reference data or underlying distributions Φ: item index sampler Π: response sampler r ∈ {rts,true} whether to use the input sets for re-sampling or to sample directly from the true underlying distribution.
Results G∗,A
alt,B
alt: vector (or matrix) samples A∅,B∅: null hypothesis samples j ← 0 for all i ∈ Φ(S) do G∗
j ← Πr(Gi)
A
alt j ← Πr(Ai)
B
alt j ← Πr(Bi)
A∅
j ← Πr(Ai∣Bi)
B∅
j ← Πr(Ai∣Bi)
j ← j+1 end for
## 4 **Experiments**
We perform a set of experiments on datasets where N = 1000 and K = 5. These numbers are representative of the number of items in typical test sets and of the numbers of responses in test sets where multiple responses are reported. We consider 6 sampling methods, 8 metrics, and 5 levels of perturbation of system B (we fix the perturbation εA = 0 and treat it as an ideal model1).
## 4.1 **Sampling Strategies For Response Variance**
We experiment with 6 test set sampling methods to calculate a p-value. By implementing these methods, we are able to determine which of these ap1We fix the perturbation to zero and focus on comparing the sampling methods and metrics under this ideal setting, though varying εA in further experimentation will provide additional insight to the generalizability of our results
## Algorithm 2 Htest
Input parameters Gj,A
alt j
,B
alt j
,A∅
j
,B∅
j
,1 ≤ j ≤ M constructed from M calls to Algorithm 1 δ: a test metric α ← 0 β ← 0 for all j ∈ {1*,...,*M} do αj = δ(A
alt j
,Gj)−δ(B
alt j
,Gj)
βj = δ(A∅
j
,Gj)−δ(B∅
j
,Gj)
end for p ← 0 for all j ∈ {1*,...,*M} do p ← p+∣{αj′ ∣ βj < αj′}∣/M
end for p ← p/M
proaches on a single test set best approximates the true p-value.
- Randomly sampling one response. *(all_items,*
sample(1)) uses all items and randomly selects one response per item, e.g.
[0.6,0.4,0.8,0.5,0.4]2 → 0.4.
- Bootstrapping responses. *(all_items, sample(5))* uses all items and samples n=5 responses per item as in "bootstrapping"
(Welty et al., 2019), with replacement, e.g.
[0.6,0.4,0.8,0.5,0.4] → [0.6,0.6,0.5,0.4,0.5].
- Bootstrapping items. *(bootstrap_items, all)*
bootstrap samples n=1000 items with replacement and uses all responses for each item.
- Bootstrapping items, selecting one response per item. *(bootstrap_items, first_element)*
bootstrap samples n=1000 items with replacement and selects the first response per item, e.g. [0.6,0.4,0.8,0.5,0.4] → 0.6.3
- Bootstrapping items, randomly selecting one response per item. *(bootstrap_items, all)* bootstrap samples n=1000 items with replacement and randomly selects one response per item, e.g. [0.6,0.4,0.8,0.5,0.4] → 0.4.
- Bootstrapping items, bootstrapping responses.
(bootstrap_items, sample(5)) bootstrap samples n=1000 items with replacement samples
n=5 responses per item with replacement, e.g.
[0.6,0.4,0.8,0.5,0.4] → [0.8,0.6,0.4,0.6,0.5].
## 4.2 **Metrics**
We implement 8 metrics to compare the gold scores and the systems output:
- Mean absolute error (MAE). Calculate the error for each item, i.e. the distance (absolute value of the difference) from gold to system responses, then take the mean of the item-wise error. Note that if the size of the response sample per item is greater than 1, the responses per item are aggregated to the mean.
- (Inverse) Mean-squared error (MSE). Mean squared error (inverted so that higher is better)
across all items.
- Item-wise metric wins (Winsδ). Compare the system responses to gold for each item using a metric δ, and count the number of items in the set for which each system performs better
(i.e. wins). In Table 2, we show the results only for WinsMAE.
- Cosine distance. First, vectorize each matrix. Transform each from an n × k to an nk×1 dimensional matrix. Then δcos(A,G) =
1−A⋅G
∥A∥∥G∥
, δcos(B,G) = 1−B⋅G
∥B∥∥G∥
,
- Aggregated EMD. Mean of each item and earth mover's distance of the entire vertical distribution.
- Aggregated EMD vectorized. Transform each from an n×k to an nk×1 dimensional matrix.
Then take the earth mover's distance on the entire vectorized distribution.
- Mean of EMDs. Earth mover's distance of each individual item and mean of all of the EMD scores.
- Spearman Rho. Spearman's correlation between the vectors of mean responses per item.
## 5 **Results Of Simulation Study**
We examine which of the metrics and sampling methods on a single test set best estimate the true p-value, by calculating the error between the estimated p-value and true p-value across five response distribution perturbations (εA = 0,εB ∈
{0.0,0.05,0.1,0.3,0.7} , *q.v.* Alg. 3).
We expect that as the amount of perturbation applied to system B increases, it should be clearer that the data is drawn from two separate distributions. A metric that is more sensitive to the effect of perturbation/distance should have a smaller difference between the estimated p**-value and true**
p**-value when the perturbation is increased**. Consequently, the metric should have a harder time producing the estimated p-value when the systems are closer together—meaning a larger difference between the estimated p-value and true p-value when the perturbation is less.
Table 1 shows estimation error for each of the sampling methods (minimized across all metrics).
The estimation error is the difference between the p-value estimated from a single test set and the true p-value. With εB = 0.1, the (all_items, sample(5)),
(bootstrap_items, all), and (bootstrap_items, sample(5)) all perform well. These three sampling methods are clearly the best performers for all ε > 0.
On the other hand, sampling strategies which reduce the amount of responses per item (i.e. sample(1) and first_element) are not as effective. These findings indicate that **incorporating the variance**
into the evaluation enables a more accurate statistical comparison.
Table 2 shows estimation error for each of the metrics (minimized across all sampling methods).
To illuminate trends across perturbation levels, Figure 1 visualizes the results from Table 2, and some interesting patterns emerge. As discussed above, we expect a good method to decrease its p-value estimates as the perturbation of B (the x axis) increases.
Multiple metrics (cosine similarity, WinsMAE,
and Spearman Rho) show lower minimum differences at each increasing interval of perturbation.
This suggests that these metrics, when operating under unknown conditions / distances between system A and system B, may behave most predictably.
WinsMAE has the lowest difference in true and estimated p-value for ε > 0, making this the preferred metric.
The least consistent metric is Aggregated EMD
vectorized, which increased, decreased, and increased again in minimum difference between estimated and true p-values at increasing levels of perturbation (Table 2).
It is important to note that p=0.05 is a critical value when considering statistical guarantees, so differences in estimated and true p-values close to or exceeding 0.05 are not acceptable; if the difference in estimated and true p-value is close to 0.05, there is sufficient room for error for it to seem like there is evidence of model difference, when in fact there *is not*.
| Sampling Method | n | εB = 0 | εB = 0.05 | εB = 0.1 | εB = .3 | εB = .7 |
|----------------------------------|-----|----------|-------------|------------|-----------|-----------|
| (all_items,sample(1)) | 6 | 0.04261 | 0.02185 | 0.00285 | < 10−5 | < 10−5 |
| (all_items,sample(5)) | 9 | 0.02152 | 0.00289 | < 10−5 | < 10−5 | < 10−5 |
| (bootstrap_items, all) | 9 | 0.00621 | 0.00166 | < 10−5 | < 10−5 | < 10−5 |
| (bootstrap_items, first_element) | 6 | 0.04243 | 0.06268 | 0.00462 | < 10−5 | < 10−5 |
| (bootstrap_items, sample(1)) | 6 | 0.05317 | 0.00171 | 0.00184 | < 10−5 | < 10−5 |
| (bootstrap_items, sample(5)) | 9 | 0.02094 | 0.00680 | < 10−5 | < 10−5 | < 10−5 |
Table 1: Minimum p-value estimation error by sampling method (a tuple of item and response sampler), based on n experiments per method, for five different levels of M2 perturbation (εB), with εA = 0. n is the number of experiments using a given method (i.e. number of metrics used in combination with this sampling method).
Metric n εB = 0 εB = 0.05 εB = 0.1 εB = 0.3 εB = .7
![6_image_0.png](6_image_0.png)
Cosine Similarity 6 0.13246 0.02128 0.00184 < 10−5< 10−5 Aggregated EMD 6 0.02094 0.00444 0.00342 0.03633 < 10−5 Aggregated EMD vectorized 3 0.01807 0.00415 0.00478 0.00808 < 10−5 MSE 6 0.00621 0.03206 0.01349 < 10−5< 10−5 MAE 6 0.01071 0.02929 0.00020 < 10−5< 10−5 WinsMAE 9 0.08724 0.00166 < 10−5< 10−5< 10−5 Mean of EMDs 3 0.02152 0.02721 0.03219 0.00022 < 10−5 Spearman Rho 6 0.07934 0.02114 0.01110 < 10−5< 10−5
![6_image_1.png](6_image_1.png)
![6_image_2.png](6_image_2.png)
## 6 **Application To Real-World Data**
To apply our method on actual data, we need the item and response data for the ground truth and the two machines (G
ref, A
ref, and B
ref, respectively).
For our example, we chose Kumar et al. (2021), a dataset of 107,620 social media comments that are labeled by five annotators each on the toxicity of each comment, using a 5-level Likert scale from 0–4. We randomly sampled 1000 items from it for G
ref, normalizing the annotations into [0,1],
yielding possible responses {0,0.2,0.4,0.6,0.8}.
Next, we match the hyperparameters of Algorithm 3 to the actual underlying distributions. We assume that each response Gi,k is drawn from a normal distribution with a specific mean and standard deviation for each item, as before, except rather than assuming they come from uniform distributions as in Algorithm 3 we now take parameterized models foldednormal([0, 0.28]) and triangular([-0.05, 0.21, 0.45] for the means and standard deviations, respectively, fitted to the 107,620-comment dataset. We visually inspect the histograms to determine the probabilistic model
Sampling Method n εB = 0 εB = 0.05 εB = 0.1 εB = .3 εB = .7
(all_items,sample(1)) 6 0.00108 0.00545 0.02909 0.01037 < 10−5
(all_items,sample(5)) 9 0.02020 0.00390 0.00585 < 10−5< 10−5
(bootstrap_items, all) 9 0.00014 0.00120 0.00604 < 10−5< 10−5
(bootstrap_items, first_element) 6 0.10096 0.03511 0.02359 0.01801 < 10−5
(bootstrap_items, sample(1)) 6 0.00193 0.00893 0.02965 0.00958 < 10−5
(bootstrap_items, sample(5)) 9 0.00406 0.00265 0.03939 < 10−5< 10−5
Table 3: On real toxicity data: minimum p-value estimation error by sampling method, a tuple of item and response sampler, based on n experiments per method, for five different levels of M2 perturbation (εB), with εA = 0.
to use, and then choose the hyperparameters that minimizes the mean absolute error between the observed data distributions and those predicted by the models. This process is described in appendix B.
We also assume that, after sampling from a normal distribution the results in the range [0,0.2) are converted to 0.2, those in the range [0.2,0.4) are converted to 0.4 etc. to simulate the discrete nature of Likert responses.
With these parameters set, we can run the framework described in §3.2, with the toxicity dataset sample as our reference test set and simulated system responses to choose the best metric and sampling method to use on G
ref.
We expect to see results similar to those from the pure simulation study, although the fact that responses are now discrete, rather than continuous, there will be sharper differences in performance between different values of εB.
The results on the toxicity dataset (Table 3, Table 4, Figure 2) exhibit some of the same patterns seen in the pure simulation results. (Bootstrap_items, all) is the best sampling strategy, and the strategies that take only one response per item seem to do the worst. Among the metrics, Spearman Rho has the best overall performance.
However, it should be noted that for εB ∈
{0,0.05,0.1} the maximum amount of perturbation is relatively small compared to the 0.2 interval between successive elements in the response do-
| Metric | n | εB = 0 | εB = 0.05 | εB = 0.1 | εB = .3 | εB = .7 |
|---------------------------|-----|----------|-------------|------------|-----------|-----------|
| Cosine Similarity | 6 | 0.00108 | 0.00120 | 0.02909 | 0.12034 | < 10−5 |
| Aggregated EMD | 6 | 0.11932 | 0.01835 | 0.00585 | 0.01801 | < 10−5 |
| Aggregated EMD vectorized | 3 | 0.29085 | 0.05877 | 0.04086 | 0.16090 | 0.27378 |
| MSE | 6 | 0.01866 | 0.00545 | 0.04629 | < 10−5 | < 10−5 |
| MAE | 6 | 0.02020 | 0.02973 | 0.13357 | < 10−5 | < 10−5 |
| WinsMAE | 9 | 0.01942 | 0.03391 | 0.131146 | < 10−5 | < 10−5 |
| Mean of EMDs | 3 | 0.01382 | 0.00390 | 0.14331 | 0.04434 | < 10−5 |
| Spearman Rho | 6 | 0.00014 | 0.03511 | 0.02359 | < 10−5 | < 10−5 |
main. There is not much observable difference in the performance between A and B until εB = 0.3.
At this point, many of the metrics do well. Another interesting pattern in the metric results is that Spearman Rho is among the better performers in most cases, particularly for εB ∈ {0.3,0.7}.
## 7 **Discussion**
These experiments suggest answers to our four research questions:
RQ1. Can response-level variance be used to estimate p-values? Yes. In Table 1 and Table 3 we see that (bootstrap_items, first_element)—the only sampling method that does not make use of response variance—generally performs poorly in the three lowest perturbation settings, and becomes competitive with the best approach only at εB = 0.7.
The response variance appears to make the measurements more sensitive to smaller differences between evaluated systems.
RQ2. What method of sampling response variance generates the most accurate p-value? The most promising sampling method is (bootstrap_items, all) (Table 1 and Table 3).
RQ3. What metrics generate the most accurate p-value? In the purely simulated data, WinsMAE
is the best (Table 2). In the toxicity dataset, MSE,
MAE, WinsMAE, and Spearman Rho all do very well for εB ≥ 0.3, and MSE and Spearman Rho do 3145 better than the rest for smaller perturbation levels
(Table 4).
RQ4. How sensitive are the measurements as two systems' response distributions draw closer to each other? On the purely simulated data, WinsMAE is the most consistent metric when considering sensitivity to distance between system A and system B
(Figure 1).
Compared to the purely simulated data, the real toxicity dataset exhibits a much sharper difference among the performance of the better metrics as εB increases. This is likely due to binning the responses into five discrete levels, meaning that levels of perturbation that are detectable in a continuous domain (which the purely simulated data has) are negligible over a discrete domain (which the toxicity data has) when they are much smaller than the size of each bin. However, as the perturbation levels approach or exceed the bin size, the binning quite suddenly creates starker differences in the toxicity dataset than in the purely simulated data
(Table 2 and Table 4).
Our results suggest that, of the methods explored here, the WinsMAE metric, in combination with the
(bootstrap_items, all) sampling technique, provides the most effective p-value estimate on a single test set. That WinsMAE performed poorly on εB = 0
(or, on the toxicity dataset, εB ≤ 0.1) should not distract from its superior performance for other choices of εB. Recall that εB = 0 (or, on the toxicity dataset, εB ≤ 0.1) means that the null hypothesis is (effectively) true (Hung et al., 1997; Boos and Stefanski, 2011; Colquhoun, 2014) and p-values are very large, so larger errors are less critical.
Compared to MAE, WinsMAE's counting wins likely outperforms taking the mean due to the small sample of responses taken for each item (i.e., no more than five for each). With such a small sample it is hard to estimate the mean with any degree of precision, so when these means are aggregated over all 1000 items, this lack of precision accumulates.
Even though we cannot reliably estimate the mean with two samples, in comparing two samples of size five, it is still possible to tell when one mean is likely greater than the other: a win is a binary, whereas the mean is a continuous, variable, so the mean carries more information. Thus, at lower sample sizes it is harder to estimate.
Our results suggest that, among the metrics and sampling methods studied here, the choice of best metric is independent of the choice of sampling
## 8 **Conclusion** 8.1 **Overview & Findings**
In this work we address the lack of statistical rigor in system evaluation and propose a framework to help tackle this problem. Here, we constructed a statistical approach to comparing two systems against gold/human judgments. After developing a simulator to test the utility of sampling methods and metrics on many test sets, we experimented with 6 sampling methods, 8 metrics, and 5 levels of distance between system A (proposed system)
and system B (baseline). We find that sampling methods which incorporate variance perform better, and that WinsMAE and Spearman Rho are reliable metrics.
## 8.2 **Recommendations For Practitioners**
While this testing regime is our general recommendation for future work evaluating NLP systems, our findings indicate that evaluation protocol requires tuning to the specific task and data. Generally, our results show that incorporating variance into sampling strategy enables more rigorous statistical evaluation, and both WinsMAE and Spearman Rho are metrics which seem to be strong in their sensitivity to perturbation.
These methods are useful for designing an experiment, as they can indicate an optimal metric or sampling strategy, as well as number of necessary items or annotators for the task.
Beyond specific recommendations for metrics and sampling methods, our results demonstrate that machine similarity (distance in distribution between the baseline and the proposed system), sampling method, and metric chosen affect leaderboard performance, and statistical guarantees should be provided when claiming that a proposed model outperforms an existing model.
## 8.3 **Future Work**
In future work, we would like to consider further hyperparameters, such as the effect of number of responses on the measurement sensitivity, categorical responses as opposed to continuous numerical data, and different item and response distributions.
In the latter case, we believe that understanding the item and response distributions of an evaluated system will be an important element in choosing sampling strategies and metrics.
## Limitations
As the contributions of this work include a framework and preliminary experimentation, there are a number of constraints that we leave to future work.
Firstly, we considered only one family of response distributions. We chose normal distributions because their behavior is well-understood and they are easy to work with. However, the structural similarities between normal distributions and the best performing metrics—namely, absolute errorsuggests that, more generally, the best test metrics for NHST may vary depending on the underlying response distributions. Therefore, we recommend that use of our framework should potentially vary depending on the dataset being considered, and might have other distributions commonly found in model and gold standard items and responses, such as exponential or multinomial distributions.
Similarly, we only considered p-value estimators that are based on bootstrap sampling. Implementation of our framework in future use would benefit from matching the estimator to the test metric. For instance, permutation tests are the most common way to estimate p-values for Spearman correlation, and analytical tests such as Student's or MacNemar's, which are commonly used even when the underlying assumptions on which they are based are not likely to hold (as, we expect is the case here). As such, the sampling method could change based on which metric is best for the task/data.
## Acknowledgements
We thank anonymous reviewers for their helpful feedback. This work is supported by Google Research and a Clare Boothe Luce Scholarship.
## References
Lora Aroyo and Chris Welty. 2015. Truth is a lie: Crowd Truth and the seven myths of human annotation. AI
Magazine, 36(1):15–24.
Francesco Barile, Shabnam Najafian, Tim Draws, Oana Inel, Alisa Rieger, Rishav Hada, and Nava Tintarev.
2021. Toward benchmarking group explanations:
Evaluating the effect of aggregation strategies versus explanation. In *Perspectives@ RecSys*.
Dennis D Boos and Leonard A Stefanski. 2011. Pvalue precision and reproducibility. *The American* Statistician, 65(4):213–221.
David Colquhoun. 2014. An investigation of the false discovery rate and the misinterpretation of p-values.
Royal Society open science, 1(3):140216.
Aida Mostafazadeh Davani, Mark Díaz, and Vinodkumar Prabhakaran. 2022. Dealing with disagreements:
Looking beyond the majority vote in subjective annotations. *Transactions of the Association for Computational Linguistics*, 10:92–110.
Daniel Deutsch, Rotem Dror, and Dan Roth. 2021. A
statistical analysis of summarization evaluation metrics using resampling methods. Transactions of the Association for Computational Linguistics, 9:1132–
1146.
Thomas G Dietterich. 1998a. Approximate statistical tests for comparing supervised classification learning algorithms. *Neural computation*, 10(7):1895–1923.
Thomas G Dietterich. 1998b. An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting and randomization.
Machine learning, 32:1–22.
Tim Draws, Oana Inel, Nava Tintarev, Christian Baden, and Benjamin Timmermans. 2022. Comprehensive viewpoint representations for a deeper understanding of user interactions with debated topics. In ACM
SIGIR Conference on Human Information Interaction and Retrieval, pages 135–145.
Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker's guide to testing statistical significance in natural language processing.
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1383–1392, Melbourne, Australia. Association for Computational Linguistics.
Rotem Dror, Lotem Peled-Cohen, Segev Shlomov, and Roi Reichart. 2020. Statistical significance testing for natural language processing. *Synthesis Lectures* on Human Language Technologies, 13(2):1–116.
Tommaso Fornaciari, Alexandra Uma, Massimo Poesio, and Dirk Hovy. 2022. Hard and soft evaluation of NLP models with BOOtSTrap SAmpling - BooStSa.
In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics: System Demonstrations, pages 127–134, Dublin, Ireland. Association for Computational Linguistics.
D. C. Gondek, A. Lally, A. Kalyanpur, J. W. Murdock, P. A. Duboue, L. Zhang, Y. Pan, Z. M. Qiu, and C. Welty. 2012. A framework for merging and ranking of answers in deepqa. IBM Journal of Research and Development, 56(3.4):14:1–14:12.
Kyle Gorman and Steven Bedrick. 2019. We need to talk about standard splits. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2786–2791, Florence, Italy. Association for Computational Linguistics.
Odd Erik Gundersen and Sigbjørn Kjensmo. 2018. State of the art: Reproducibility in artificial intelligence.
In *Proceedings of the AAAI Conference on Artificial* Intelligence, volume 32.
Lewis G Halsey, Douglas Curran-Everett, Sarah L
Vowler, and Gordon B Drummond. 2015. The fickle p value generates irreproducible results. *Nature methods*, 12(3):179–185.
Matan Haroush, Tzviel Frostig, Ruth Heller, and Daniel Soudry. 2021. A statistical framework for efficient out of distribution detection in deep neural networks.
In *International Conference on Learning Representations*.
Christopher Homan, Tharindu Cyril Weerasooriya, Lora Mois Aroyo, and Chris Welty. 2022. Annotator response distributions as a sampling frame. In LREC Workshop on Perspectivist NLP.
HM James Hung, Robert T O'Neill, Peter Bauer, and Karl Kohne. 1997. The behavior of the p-value when the alternative hypothesis is true. *Biometrics*, pages 11–22.
Ron Kohavi, Alex Deng, and Lukas Vermeer. 2022. A/b testing intuition busters: Common misunderstandings in online controlled experiments. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 3168–3177.
Deepak Kumar, Patrick Gage Kelley, Sunny Consolvo, Joshua Mason, Elie Bursztein, Zakir Durumeric, Kurt Thomas, and Michael Bailey. 2021. Designing toxic content classification for a diversity of perspectives.
In *Seventeenth Symposium on Usable Privacy and* Security (SOUPS 2021), pages 299–318.
C Welty L Aroyo. 2013. Crowd truth: Harnessing disagreement in crowdsourcing a relation extraction gold standard. In *Web Science 2013*.
C Welty L Aroyo. 2014. The three sides of crowdtruth.
Human Computation, 1(1):31–44.
Nathan Mantel. 1963. Chi-square tests with one degree of freedom; extensions of the mantel-haenszel procedure. *Journal of the American Statistical Association*,
58(303):690–700.
Quinn McNemar. 1947. Note on the sampling error of the difference between correlated proportions or percentages. *Psychometrika*, 12(2):153–157.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311–318.
Silviu Paun, Bob Carpenter, Jon Chamberlain, Dirk Hovy, Udo Kruschwitz, and Massimo Poesio. 2018.
Comparing bayesian models of annotation. *Transactions of the Association for Computational Linguistics*, 6:571–585.
Barbara Plank, Dirk Hovy, and Anders Søgaard. 2014.
Linguistically debatable or just plain wrong? In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2:
Short Papers), pages 507–511, Baltimore, Maryland.
Association for Computational Linguistics.
M Poesio R Artstein. Inter-coder agreement for computational linguistics. *Computational linguistics*,
34(4):555–596.
Dennis Reidsma and Jean Carletta. 2008. Squibs: Reliability measurement without limits. Computational Linguistics, 34(3):319–326.
Rion Snow, Brendan O'Connor, Daniel Jurafsky, and Andrew Ng. 2008. Cheap and fast - but is it good?
evaluating non-expert annotations for natural language tasks. In *Proceedings of the 2008 Conference* on Empirical Methods in Natural Language Processing, pages 254–263, Honolulu, Hawaii. Association for Computational Linguistics.
Anders Søgaard, Anders Johannsen, Barbara Plank, Dirk Hovy, and Hector Martínez Alonso. 2014.
What's in a p-value in NLP? In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 1–10, Ann Arbor, Michigan. Association for Computational Linguistics.
Student. 1908. The probable error of a mean.
Biometrika, pages 1–25.
Gail M Sullivan and Richard Feinn. 2012. Using effect size—or why the p value is not enough. *Journal of* graduate medical education, 4(3):279–282.
Matthew S Thiese, Brenden Ronna, and Ulrike Ott.
2016. P value interpretations and considerations.
Journal of thoracic disease, 8(9):E928.
Alexandra N Uma, Tommaso Fornaciari, Dirk Hovy, Silviu Paun, Barbara Plank, and Massimo Poesio. 2021.
Learning from disagreement: A survey. *Journal of* Artificial Intelligence Research, 72:1385–1470.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE:
A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics.
Chris Welty, Praveen Paritosh, and Lora Aroyo. 2019.
Metrology for ai: From benchmarks to instruments.
arXiv preprint arXiv:1911.01875.
Ka Wong, Praveen Paritosh, and Lora Aroyo. 2021.
Cross-replication reliability - an empirical approach to interpreting inter-rater reliability.
## A **Algorithm Used By The Simulation** Framework
Here we include formalizations of an algorithms used in our work. In Algorithm 3 we specify the process for generating the reference test data.
## Algorithm 3 Gentestset Input Parameters
N: test set size K: number of responses per item εA: Perturbation of A scores from G
εB: Perturbation of B scores from G
Results µi: Response means per item σi: Response standard deviations per item G: item, response matrix for human annotations A: item, response matrix for test system B: item, response matrix for baseline system for all i ∈ [0,N) do µi ∼ uniform([0,1])
σi ∼ uniform([0,.2])
νA ∼ uniform([−εA,εA]) νB ∼ uniform([−εB,εB]) for all k ∈ [0,k) do Gi,k ∼ normal(µi,σi) Ai,k ∼ normal(µi +ν0,σi) Bi,k ∼ normal(µi +ν1,σi)
end for end for
## B **Fitting The Mean And Standard**
![11_Image_1.Png](11_Image_1.Png)
deviation models to the toxicity dataset To fit the distribution of the simulated system responses to the dataset, we take the mean and standard deviation of the responses of each item in the dataset. We then inspect histograms of theses values. We noted that the distribution of the item-wise means (Figure 3, left) seems to follow a folded normal distribution that has been *clamped* to the range
[0,.8] (i.e., values falling outside that range are assigned to the nearest value in the range, namely
![11_image_0.png](11_image_0.png)
0 or 1). The standard deviations (Figure 4, right)
seem to follow a triangular distribution clamped to the range [0,∞] (i.e., only values less than 0 are reassigned).
We generated means in our simulator by sampling from a folded normal distribution clamped to [0,.8]. Using grid search, we found that assigning this distribution a mean of 0.0 and standard deviation 0.28 minimized the mean absolute error
(MAE) between the bars of the histograms in Figure 3. Similarly, a triangular distribution clamped to [0,1.0] and with minimum, apex, and maximum of −0.05, 0.21 and 0.45 minimized the (MAE) between the bars of the histograms in Figure 4. Both MAE scores were estimated to be around 2000, which is very small considering that the dataset has 107,620 items.
## C **Complete Results**
Here we include the full results for our experiments on both simulated and real data.
| Sampling method | Metric | Estimated p | True p | diff |
|---------------------------------|------------------------|---------------|----------|-----------|
| (all_items sample(5)) | MAE | 0.410587 | 0.496402 | -0.085815 |
| (all_items sample(1)) | MAE | 0.381939 | 0.497279 | -0.115340 |
| (bootstrap_items sample(5)) | MAE | 0.427611 | 0.496402 | -0.068791 |
| (bootstrap_items sample(1)) | MAE | 0.423322 | 0.476487 | -0.053165 |
| (bootstrap_items first_element) | MAE | 0.024045 | 0.492053 | -0.468008 |
| (bootstrap_items all) | MAE | 0.485691 | 0.496402 | -0.010711 |
| (all_items sample(5)) | Wins(MAE) | 0.371267 | 0.510317 | -0.139050 |
| (all_items sample(1)) | Wins(MAE) | 0.367486 | 0.496606 | -0.129120 |
| (bootstrap_items sample(5)) | Wins(MAE) | 0.374182 | 0.510317 | -0.136135 |
| (bootstrap_items sample(1)) | Wins(MAE) | 0.403785 | 0.493224 | -0.089439 |
| (bootstrap_items first_element) | Wins(MAE) | 0.083394 | 0.479193 | -0.395799 |
| (bootstrap_items all) | Wins(MAE) | 0.423081 | 0.510317 | -0.087236 |
| (all_items sample(5)) | Wins(MAE) (vectorized) | 0.221291 | 0.504003 | -0.282712 |
| (bootstrap_items sample(5)) | Wins(MAE) (vectorized) | 0.251065 | 0.483919 | -0.232854 |
| (bootstrap_items all) | Wins(MAE) (vectorized) | 0.383131 | 0.500626 | -0.117495 |
| (all_items sample(5)) | MSE | 0.430407 | 0.48391 | -0.053503 |
| (all_items sample(1)) | MSE | 0.356333 | 0.497763 | -0.141430 |
| (bootstrap_items sample(5)) | MSE | 0.452586 | 0.48391 | -0.031324 |
| (bootstrap_items sample(1)) | MSE | 0.400131 | 0.478923 | -0.078792 |
| (bootstrap_items first_element) | MSE | 0.012927 | 0.49465 | -0.481723 |
| (bootstrap_items all) | MSE | 0.490123 | 0.48391 | 0.006213 |
| (all_items sample(5)) | Spearman Rho | 0.355155 | 0.494165 | -0.139010 |
| (all_items sample(1)) | Spearman Rho | 0.369992 | 0.488593 | -0.118601 |
| (bootstrap_items sample(5)) | Spearman Rho | 0.374628 | 0.494165 | -0.119537 |
| (bootstrap_items sample(1)) | Spearman Rho | 0.412421 | 0.491757 | -0.079336 |
| (bootstrap_items first_element) | Spearman Rho | 0.007716 | 0.497834 | -0.490118 |
| (bootstrap_items all) | Spearman Rho | 0.397471 | 0.494165 | -0.096694 |
| (all_items sample(5)) | EMD Agg | 0.4807 | 0.502944 | -0.022244 |
| (all_items sample(1)) | EMD Agg | 0.4634 | 0.506008 | -0.042608 |
| (bootstrap_items sample(5)) | EMD Agg | 0.482 | 0.502944 | -0.020944 |
| (bootstrap_items sample(1)) | EMD Agg | 0.4174 | 0.504515 | -0.087115 |
| (bootstrap_items first_element) | EMD Agg | 0.4391 | 0.481533 | -0.042433 |
| (bootstrap_items all) | EMD Agg | 0.4574 | 0.502944 | -0.045544 |
| (all_items sample(5)) | EMD Agg (vectorized) | 0.4418 | 0.495472 | -0.053672 |
| (bootstrap_items sample(5)) | EMD Agg (vectorized) | 0.4283 | 0.495472 | -0.067172 |
| (bootstrap_items all) | EMD Agg (vectorized) | 0.4774 | 0.495472 | -0.018072 |
| (all_items sample(5)) | Mean Agg | 0.4723 | 0.493816 | -0.021516 |
| (bootstrap_items sample(5)) | Mean Agg | 0.4495 | 0.493816 | -0.044316 |
| (bootstrap_items all) | Mean Agg | 0.3281 | 0.493816 | -0.165716 |
| (all_items sample(5)) | COS (vectorized) | 0.186133 | 0.478781 | -0.292648 |
| (all_items sample(1)) | COS (vectorized) | 0.308011 | 0.493675 | -0.185664 |
| (bootstrap_items sample(5)) | COS (vectorized) | 0.206125 | 0.493335 | -0.287210 |
| (bootstrap_items sample(1)) | COS (vectorized) | 0.348773 | 0.481228 | -0.132455 |
| (bootstrap_items first_element) | COS | 0.014585 | 0.497565 | -0.482980 |
| (bootstrap_items all) | COS (vectorized) | 0.154479 | 0.463897 | -0.309418 |
Table 5: Full results on the purely simulated data for εB = 0.
| Sampling method | Metric | Estimated p | True p | diff |
|---------------------------------|------------------------|---------------|----------|-----------|
| (all_items sample(5)) | MAE | 0.331591 | 0.067622 | 0.263969 |
| (all_items sample(1)) | MAE | 0.416351 | 0.350643 | 0.065708 |
| (bootstrap_items sample(5)) | MAE | 0.340759 | 0.067622 | 0.273137 |
| (bootstrap_items sample(1)) | MAE | 0.407312 | 0.378019 | 0.029293 |
| (bootstrap_items first_element) | MAE | 0.418707 | 0.356032 | 0.062675 |
| (bootstrap_items all) | MAE | 0.19662 | 0.067622 | 0.128998 |
| (all_items sample(5)) | Wins(MAE) | 0.044571 | 0.001909 | 0.042662 |
| (all_items sample(1)) | Wins(MAE) | 0.109459 | 0.087604 | 0.021855 |
| (bootstrap_items sample(5)) | Wins(MAE) | 0.046853 | 0.001909 | 0.044944 |
| (bootstrap_items sample(1)) | Wins(MAE) | 0.111549 | 0.113261 | -0.001712 |
| (bootstrap_items first_element) | Wins(MAE) | 0.040036 | 0.10596 | -0.065924 |
| (bootstrap_items all) | Wins(MAE) | 0.003566 | 0.001909 | 0.001657 |
| (all_items sample(5)) | Wins(MAE) (vectorized) | 0.004189 | 0.001297 | 0.002892 |
| (bootstrap_items sample(5)) | Wins(MAE) (vectorized) | 0.008305 | 0.001504 | 0.006801 |
| (bootstrap_items all) | Wins(MAE) (vectorized) | 0.010192 | 0.001759 | 0.008433 |
| (all_items sample(5)) | MSE | 0.479884 | 0.262248 | 0.217636 |
| (all_items sample(1)) | MSE | 0.482863 | 0.450803 | 0.032060 |
| (bootstrap_items sample(5)) | MSE | 0.484121 | 0.262248 | 0.221873 |
| (bootstrap_items sample(1)) | MSE | 0.512241 | 0.464467 | 0.047774 |
| (bootstrap_items first_element) | MSE | 0.116947 | 0.442982 | -0.326035 |
| (bootstrap_items all) | MSE | 0.486482 | 0.262248 | 0.224234 |
| (all_items sample(5)) | Spearman Rho | 0.458313 | 0.263784 | 0.194529 |
| (all_items sample(1)) | Spearman Rho | 0.486023 | 0.442924 | 0.043099 |
| (bootstrap_items sample(5)) | Spearman Rho | 0.473643 | 0.263784 | 0.209859 |
| (bootstrap_items sample(1)) | Spearman Rho | 0.485766 | 0.464627 | 0.021139 |
| (bootstrap_items first_element) | Spearman Rho | 0.211795 | 0.434099 | -0.222304 |
| (bootstrap_items all) | Spearman Rho | 0.485429 | 0.263784 | 0.221645 |
| (all_items sample(5)) | EMD Agg | 0.4788 | 0.483241 | -0.004441 |
| (all_items sample(1)) | EMD Agg | 0.513 | 0.482578 | 0.030422 |
| (bootstrap_items sample(5)) | EMD Agg | 0.5077 | 0.483241 | 0.024459 |
| (bootstrap_items sample(1)) | EMD Agg | 0.4974 | 0.517659 | -0.020259 |
| (bootstrap_items first_element) | EMD Agg | 0.3833 | 0.489908 | -0.106608 |
| (bootstrap_items all) | EMD Agg | 0.5365 | 0.483241 | 0.053259 |
| (all_items sample(5)) | EMD Agg (vectorized) | 0.4774 | 0.495554 | -0.018154 |
| (bootstrap_items sample(5)) | EMD Agg (vectorized) | 0.3859 | 0.495554 | -0.109654 |
| (bootstrap_items all) | EMD Agg (vectorized) | 0.4914 | 0.495554 | -0.004154 |
| (all_items sample(5)) | Mean Agg | 0.2238 | 0.195486 | 0.028314 |
| (bootstrap_items sample(5)) | Mean Agg | 0.2387 | 0.195486 | 0.043214 |
| (bootstrap_items all) | Mean Agg | 0.2227 | 0.195486 | 0.027214 |
| (all_items sample(5)) | COS (vectorized) | 0.471735 | 0.390999 | 0.080736 |
| (all_items sample(1)) | COS (vectorized) | 0.475717 | 0.449534 | 0.026183 |
| (bootstrap_items sample(5)) | COS (vectorized) | 0.46214 | 0.375208 | 0.086932 |
| (bootstrap_items sample(1)) | COS (vectorized) | 0.480865 | 0.459581 | 0.021284 |
| (bootstrap_items first_element) | COS | 0.132421 | 0.437445 | -0.305024 |
| (bootstrap_items all) | COS (vectorized) | 0.35917 | 0.384993 | -0.025823 | | Sampling method | Metric | Estimated p | True p | diff |
|---------------------------------|------------------------|---------------|----------|-----------|
| (all_items sample(5)) | MAE | 0.003156 | 2.00E-06 | 0.003154 |
| (all_items sample(1)) | MAE | 0.112564 | 0.115949 | -0.003385 |
| (bootstrap_items sample(5)) | MAE | 0.006948 | 2.00E-06 | 0.006946 |
| (bootstrap_items sample(1)) | MAE | 0.120493 | 0.112452 | 0.008041 |
| (bootstrap_items first_element) | MAE | 0.253229 | 0.11249 | 0.140739 |
| (bootstrap_items all) | MAE | 0.000204 | 2.00E-06 | 0.000202 |
| (all_items sample(5)) | Wins(MAE) | 3.10E-05 | 0 | 0.000031 |
| (all_items sample(1)) | Wins(MAE) | 0.002333 | 0.005179 | -0.002846 |
| (bootstrap_items sample(5)) | Wins(MAE) | 5.90E-05 | 0 | 0.000059 |
| (bootstrap_items sample(1)) | Wins(MAE) | 0.00328 | 0.006268 | -0.002988 |
| (bootstrap_items first_element) | Wins(MAE) | 0.002366 | 0.006986 | -0.004620 |
| (bootstrap_items all) | Wins(MAE) | 0 | 0 | 0.000000 |
| (all_items sample(5)) | Wins(MAE) (vectorized) | 0 | 0 | 0.000000 |
| (bootstrap_items sample(5)) | Wins(MAE) (vectorized) | 0 | 0 | 0.000000 |
| (bootstrap_items all) | Wins(MAE) (vectorized) | 0 | 0 | 0.000000 |
| (all_items sample(5)) | MSE | 0.164925 | 0.010974 | 0.153951 |
| (all_items sample(1)) | MSE | 0.330706 | 0.317212 | 0.013494 |
| (bootstrap_items sample(5)) | MSE | 0.191154 | 0.010974 | 0.180180 |
| (bootstrap_items sample(1)) | MSE | 0.341757 | 0.303606 | 0.038151 |
| (bootstrap_items first_element) | MSE | 0.207481 | 0.298607 | -0.091126 |
| (bootstrap_items all) | MSE | 0.061281 | 0.010974 | 0.050307 |
| (all_items sample(5)) | Spearman Rho | 0.167256 | 0.009594 | 0.157662 |
| (all_items sample(1)) | Spearman Rho | 0.329187 | 0.318091 | 0.011096 |
| (bootstrap_items sample(5)) | Spearman Rho | 0.191409 | 0.009594 | 0.181815 |
| (bootstrap_items sample(1)) | Spearman Rho | 0.330523 | 0.301009 | 0.029514 |
| (bootstrap_items first_element) | Spearman Rho | 0.143159 | 0.313247 | -0.170088 |
| (bootstrap_items all) | Spearman Rho | 0.099042 | 0.009594 | 0.089448 |
| (all_items sample(5)) | EMD Agg | 0.4212 | 0.480088 | -0.058888 |
| (all_items sample(1)) | EMD Agg | 0.4815 | 0.484922 | -0.003422 |
| (bootstrap_items sample(5)) | EMD Agg | 0.5018 | 0.480088 | 0.021712 |
| (bootstrap_items sample(1)) | EMD Agg | 0.4958 | 0.501032 | -0.005232 |
| (bootstrap_items first_element) | EMD Agg | 0.3286 | 0.505215 | -0.176615 |
| (bootstrap_items all) | EMD Agg | 0.36 | 0.480088 | -0.120088 |
| (all_items sample(5)) | EMD Agg (vectorized) | 0.4289 | 0.433678 | -0.004778 |
| (bootstrap_items sample(5)) | EMD Agg (vectorized) | 0.4751 | 0.433678 | 0.041422 |
| (bootstrap_items all) | EMD Agg (vectorized) | 0.4194 | 0.433678 | -0.014278 |
| (all_items sample(5)) | Mean Agg | 0.1104 | 0.032391 | 0.078009 |
| (bootstrap_items sample(5)) | Mean Agg | 0.1454 | 0.032391 | 0.113009 |
| (bootstrap_items all) | Mean Agg | 0.0002 | 0.032391 | -0.032191 |
| (all_items sample(5)) | COS (vectorized) | 0.149166 | 0.130691 | 0.018475 |
| (all_items sample(1)) | COS (vectorized) | 0.293829 | 0.322725 | -0.028896 |
| (bootstrap_items sample(5)) | COS (vectorized) | 0.183086 | 0.131847 | 0.051239 |
| (bootstrap_items sample(1)) | COS (vectorized) | 0.305439 | 0.307283 | -0.001844 |
| (bootstrap_items first_element) | COS | 0.228971 | 0.307595 | -0.078624 |
| (bootstrap_items all) | COS (vectorized) | 0.111144 | 0.1314 | -0.020256 |
Table 7: Full results on the purely simulated data for εB = 0.1. A result of "0" means that the p-value was less than the simulator's minimum level of precision (10−5).
| Sampling method | Metric | Estimated p | True p | diff |
|---------------------------------|------------------------|---------------|----------|-----------|
| (all_items sample(5)) | MAE | 0 | 0 | 0.000000 |
| (all_items sample(1)) | MAE | 0 | 0 | 0.000000 |
| (bootstrap_items sample(5)) | MAE | 0 | 0 | 0.000000 |
| (bootstrap_items sample(1)) | MAE | 0 | 0 | 0.000000 |
| (bootstrap_items first_element) | MAE | 0 | 0 | 0.000000 |
| (bootstrap_items all) | MAE | 0 | 0 | 0.000000 |
| (all_items sample(5)) | Wins(MAE) | 0 | 0 | 0.000000 |
| (all_items sample(1)) | Wins(MAE) | 0 | 0 | 0.000000 |
| (bootstrap_items sample(5)) | Wins(MAE) | 0 | 0 | 0.000000 |
| (bootstrap_items sample(1)) | Wins(MAE) | 0 | 0 | 0.000000 |
| (bootstrap_items first_element) | Wins(MAE) | 0 | 0 | 0.000000 |
| (bootstrap_items all) | Wins(MAE) | 0 | 0 | 0.000000 |
| (all_items sample(5)) | Wins(MAE) (vectorized) | 0 | 0 | 0.000000 |
| (bootstrap_items sample(5)) | Wins(MAE) (vectorized) | 0 | 0 | 0.000000 |
| (bootstrap_items all) | Wins(MAE) (vectorized) | 0 | 0 | 0.000000 |
| (all_items sample(5)) | MSE | 0 | 0 | 0.000000 |
| (all_items sample(1)) | MSE | 9.00E-06 | 0.000237 | -0.000228 |
| (bootstrap_items sample(5)) | MSE | 0 | 0 | 0.000000 |
| (bootstrap_items sample(1)) | MSE | 5.20E-05 | 0.000272 | -0.000220 |
| (bootstrap_items first_element) | MSE | 0.009451 | 0.000117 | 0.009334 |
| (bootstrap_items all) | MSE | 0 | 0 | 0.000000 |
| (all_items sample(5)) | Spearman Rho | 0 | 0 | 0.000000 |
| (all_items sample(1)) | Spearman Rho | 3.00E-06 | 0.000263 | -0.000260 |
| (bootstrap_items sample(5)) | Spearman Rho | 0 | 0 | 0.000000 |
| (bootstrap_items sample(1)) | Spearman Rho | 8.30E-05 | 0.000105 | -0.000022 |
| (bootstrap_items first_element) | Spearman Rho | 0.012638 | 0.000111 | 0.012527 |
| (bootstrap_items all) | Spearman Rho | 0 | 0 | 0.000000 |
| (all_items sample(5)) | EMD Agg | 0.2976 | 0.141411 | 0.156189 |
| (all_items sample(1)) | EMD Agg | 0.294 | 0.257674 | 0.036326 |
| (bootstrap_items sample(5)) | EMD Agg | 0.3763 | 0.141411 | 0.234889 |
| (bootstrap_items sample(1)) | EMD Agg | 0.2968 | 0.252052 | 0.044748 |
| (bootstrap_items first_element) | EMD Agg | 0.3665 | 0.291064 | 0.075436 |
| (bootstrap_items all) | EMD Agg | 0.2933 | 0.141411 | 0.151889 |
| (all_items sample(5)) | EMD Agg (vectorized) | 0.0198 | 0.027877 | -0.008077 |
| (bootstrap_items sample(5)) | EMD Agg (vectorized) | 0.1785 | 0.027877 | 0.150623 |
| (bootstrap_items all) | EMD Agg (vectorized) | 0.0983 | 0.027877 | 0.070423 |
| (all_items sample(5)) | Mean Agg | 0.0047 | 8.40E-05 | 0.004616 |
| (bootstrap_items sample(5)) | Mean Agg | 0.0094 | 8.40E-05 | 0.009316 |
| (bootstrap_items all) | Mean Agg | 0.0003 | 8.40E-05 | 0.000216 |
| (all_items sample(5)) | COS (vectorized) | 0 | 0 | 0.000000 |
| (all_items sample(1)) | COS (vectorized) | 2.50E-05 | 0.000425 | -0.000400 |
| (bootstrap_items sample(5)) | COS (vectorized) | 0 | 0 | 0.000000 |
| (bootstrap_items sample(1)) | COS (vectorized) | 0.000111 | 0.000373 | -0.000262 |
| (bootstrap_items first_element) | COS | 0.017425 | 0.000224 | 0.017201 |
| (bootstrap_items all) | COS (vectorized) | 0 | 0 | 0.000000 |
| Sampling method | Metric | Estimated p | True p | diff |
|---------------------------------|------------------------|---------------|----------|-----------|
| (all_items sample(5)) | MAE | 0 | 0 | 0.000000 |
| (all_items sample(1)) | MAE | 0 | 0 | 0.000000 |
| (bootstrap_items sample(5)) | MAE | 0 | 0 | 0.000000 |
| (bootstrap_items sample(1)) | MAE | 0 | 0 | 0.000000 |
| (bootstrap_items first_element) | MAE | 0 | 0 | 0.000000 |
| (bootstrap_items all) | MAE | 0 | 0 | 0.000000 |
| (all_items sample(5)) | Wins(MAE) | 0 | 0 | 0.000000 |
| (all_items sample(1)) | Wins(MAE) | 0 | 0 | 0.000000 |
| (bootstrap_items sample(5)) | Wins(MAE) | 0 | 0 | 0.000000 |
| (bootstrap_items sample(1)) | Wins(MAE) | 0 | 0 | 0.000000 |
| (bootstrap_items first_element) | Wins(MAE) | 0 | 0 | 0.000000 |
| (bootstrap_items all) | Wins(MAE) | 0 | 0 | 0.000000 |
| (all_items sample(5)) | Wins(MAE) (vectorized) | 0 | 0 | 0.000000 |
| (bootstrap_items sample(5)) | Wins(MAE) (vectorized) | 0 | 0 | 0.000000 |
| (bootstrap_items all) | Wins(MAE) (vectorized) | 0 | 0 | 0.000000 |
| (all_items sample(5)) | MSE | 0 | 0 | 0.000000 |
| (all_items sample(1)) | MSE | 0 | 0 | 0.000000 |
| (bootstrap_items sample(5)) | MSE | 0 | 0 | 0.000000 |
| (bootstrap_items sample(1)) | MSE | 0 | 0 | 0.000000 |
| (bootstrap_items first_element) | MSE | 0 | 0 | 0.000000 |
| (bootstrap_items all) | MSE | 0 | 0 | 0.000000 |
| (all_items sample(5)) | Spearman Rho | 0 | 0 | 0.000000 |
| (all_items sample(1)) | Spearman Rho | 0 | 0 | 0.000000 |
| (bootstrap_items sample(5)) | Spearman Rho | 0 | 0 | 0.000000 |
| (bootstrap_items sample(1)) | Spearman Rho | 0 | 0 | 0.000000 |
| (bootstrap_items first_element) | Spearman Rho | 0 | 0 | 0.000000 |
| (bootstrap_items all) | Spearman Rho | 0 | 0 | 0.000000 |
| (all_items sample(5)) | EMD Agg | 0 | 2.00E-06 | -0.000002 |
| (all_items sample(1)) | EMD Agg | 0 | 0.00047 | -0.000470 |
| (bootstrap_items sample(5)) | EMD Agg | 0 | 2.00E-06 | -0.000002 |
| (bootstrap_items sample(1)) | EMD Agg | 0.0049 | 0.000828 | 0.004072 |
| (bootstrap_items first_element) | EMD Agg | 0.0006 | 0.000672 | -0.000072 |
| (bootstrap_items all) | EMD Agg | 0.0068 | 2.00E-06 | 0.006798 |
| (all_items sample(5)) | EMD Agg (vectorized) | 0 | 0 | 0.000000 |
| (bootstrap_items sample(5)) | EMD Agg (vectorized) | 0 | 0 | 0.000000 |
| (bootstrap_items all) | EMD Agg (vectorized) | 0 | 0 | 0.000000 |
| (all_items sample(5)) | Mean Agg | 0 | 0 | 0.000000 |
| (bootstrap_items sample(5)) | Mean Agg | 0 | 0 | 0.000000 |
| (bootstrap_items all) | Mean Agg | 0 | 0 | 0.000000 |
| (all_items sample(5)) | COS (vectorized) | 0 | 0 | 0.000000 |
| (all_items sample(1)) | COS (vectorized) | 0 | 0 | 0.000000 |
| (bootstrap_items sample(5)) | COS (vectorized) | 0 | 0 | 0.000000 |
| (bootstrap_items sample(1)) | COS (vectorized) | 0 | 0 | 0.000000 |
| (bootstrap_items first_element) | COS | 0 | 0 | 0.000000 |
| (bootstrap_items all) | COS (vectorized) | 0 | 0 | 0.000000 |
| Sampling method | Metric | Estimated p | True p | diff |
|---------------------------------|------------------------|---------------|----------|-----------|
| (all_items sample(5)) | MAE | 0.45934 | 0.479535 | -0.020195 |
| (all_items sample(1)) | MAE | 0.437902 | 0.499429 | -0.061527 |
| (bootstrap_items sample(5)) | MAE | 0.450871 | 0.479526 | -0.028655 |
| (bootstrap_items sample(1)) | MAE | 0.456307 | 0.485554 | -0.029247 |
| (bootstrap_items first_element) | MAE | 0.266924 | 0.492477 | -0.225553 |
| (bootstrap_items all) | MAE | 0.397145 | 0.479537 | -0.082392 |
| (all_items sample(5)) | Wins(MAE) | 0.43663 | 0.497893 | -0.061263 |
| (all_items sample(1)) | Wins(MAE) | 0.457158 | 0.50005 | -0.042892 |
| (bootstrap_items sample(5)) | Wins(MAE) | 0.416203 | 0.496002 | -0.079799 |
| (bootstrap_items sample(1)) | Wins(MAE) | 0.467904 | 0.487324 | -0.019420 |
| (bootstrap_items first_element) | Wins(MAE) | 0.211593 | 0.496652 | -0.285059 |
| (bootstrap_items all) | Wins(MAE) | 0.365914 | 0.496656 | -0.130742 |
| (all_items sample(5)) | Wins(MAE) (vectorized) | 0.384364 | 0.476789 | -0.092425 |
| (bootstrap_items sample(5)) | Wins(MAE) (vectorized) | 0.394187 | 0.469652 | -0.075465 |
| (bootstrap_items all) | Wins(MAE) (vectorized) | 0.34856 | 0.480752 | -0.132192 |
| (all_items sample(5)) | MSE | 0.457059 | 0.479458 | -0.022399 |
| (all_items sample(1)) | MSE | 0.433271 | 0.507752 | -0.074481 |
| (bootstrap_items sample(5)) | MSE | 0.460799 | 0.47946 | -0.018661 |
| (bootstrap_items sample(1)) | MSE | 0.449009 | 0.480377 | -0.031368 |
| (bootstrap_items first_element) | MSE | 0.298127 | 0.492372 | -0.194245 |
| (bootstrap_items all) | MSE | 0.498544 | 0.479457 | 0.019087 |
| (all_items sample(5)) | Spearman Rho | 0.493405 | 0.473137 | 0.020268 |
| (all_items sample(1)) | Spearman Rho | 0.471087 | 0.50108 | -0.029993 |
| (bootstrap_items sample(5)) | Spearman Rho | 0.49305 | 0.472682 | 0.020368 |
| (bootstrap_items sample(1)) | Spearman Rho | 0.478448 | 0.480376 | -0.001928 |
| (bootstrap_items first_element) | Spearman Rho | 0.26784 | 0.501546 | -0.233706 |
| (bootstrap_items all) | Spearman Rho | 0.472451 | 0.472311 | 0.000140 |
| (all_items sample(5)) | EMD Agg | 0.3172 | 0.492373 | -0.175173 |
| (all_items sample(1)) | EMD Agg | 0.3247 | 0.48702 | -0.162320 |
| (bootstrap_items sample(5)) | EMD Agg | 0.384 | 0.503324 | -0.119324 |
| (bootstrap_items sample(1)) | EMD Agg | 0.3562 | 0.48553 | -0.129330 |
| (bootstrap_items first_element) | EMD Agg | 0.2671 | 0.486825 | -0.219725 |
| (bootstrap_items all) | EMD Agg | 0.3417 | 0.490803 | -0.149103 |
| (all_items sample(5)) | EMD Agg (vectorized) | 0.1856 | 0.496253 | -0.310653 |
| (bootstrap_items sample(5)) | EMD Agg (vectorized) | 0.2054 | 0.496253 | -0.290853 |
| (bootstrap_items all) | EMD Agg (vectorized) | 0.1448 | 0.496253 | -0.351453 |
| (all_items sample(5)) | Mean Agg | 0.3894 | 0.479924 | -0.090524 |
| (bootstrap_items sample(5)) | Mean Agg | 0.4661 | 0.479924 | -0.013824 |
| (bootstrap_items all) | Mean Agg | 0.1139 | 0.479924 | -0.366024 |
| (all_items sample(5)) | COS (vectorized) | 0.47414 | 0.49728 | -0.023140 |
| (all_items sample(1)) | COS (vectorized) | 0.486513 | 0.487594 | -0.001081 |
| (bootstrap_items sample(5)) | COS (vectorized) | 0.478938 | 0.483 | -0.004062 |
| (bootstrap_items sample(1)) | COS (vectorized) | 0.495858 | 0.481335 | 0.014523 |
| (bootstrap_items first_element) | COS | 0.398975 | 0.49993 | -0.100955 |
| (bootstrap_items all) | COS (vectorized) | 0.508019 | 0.497584 | 0.010435 |
Table 10: Full results on the toxicity data for εB = 0.
| Sampling method | Metric | Estimated p | True p | diff |
|---------------------------------|------------------------|---------------|----------|-----------|
| (all_items sample(5)) | MAE | 0.451311 | 0.236154 | 0.215157 |
| (all_items sample(1)) | MAE | 0.451132 | 0.421398 | 0.029734 |
| (bootstrap_items sample(5)) | MAE | 0.443935 | 0.236095 | 0.207840 |
| (bootstrap_items sample(1)) | MAE | 0.472664 | 0.416839 | 0.055825 |
| (bootstrap_items first_element) | MAE | 0.34272 | 0.418018 | -0.075298 |
| (bootstrap_items all) | MAE | 0.304942 | 0.236123 | 0.068819 |
| (all_items sample(5)) | Wins(MAE) | 0.464451 | 0.268947 | 0.195504 |
| (all_items sample(1)) | Wins(MAE) | 0.447476 | 0.413566 | 0.033910 |
| (bootstrap_items sample(5)) | Wins(MAE) | 0.439536 | 0.271841 | 0.167695 |
| (bootstrap_items sample(1)) | Wins(MAE) | 0.474745 | 0.40652 | 0.068225 |
| (bootstrap_items first_element) | Wins(MAE) | 0.441751 | 0.404092 | 0.037659 |
| (bootstrap_items all) | Wins(MAE) | 0.480294 | 0.271587 | 0.208707 |
| (all_items sample(5)) | Wins(MAE) (vectorized) | 0.394146 | 0.29071 | 0.103436 |
| (bootstrap_items sample(5)) | Wins(MAE) (vectorized) | 0.378982 | 0.293645 | 0.085337 |
| (bootstrap_items all) | Wins(MAE) (vectorized) | 0.246202 | 0.293862 | -0.047660 |
| (all_items sample(5)) | MSE | 0.512868 | 0.308322 | 0.204546 |
| (all_items sample(1)) | MSE | 0.460799 | 0.455346 | 0.005453 |
| (bootstrap_items sample(5)) | MSE | 0.484069 | 0.308322 | 0.175747 |
| (bootstrap_items sample(1)) | MSE | 0.478558 | 0.449968 | 0.028590 |
| (bootstrap_items first_element) | MSE | 0.310163 | 0.446163 | -0.136000 |
| (bootstrap_items all) | MSE | 0.480889 | 0.308322 | 0.172567 |
| (all_items sample(5)) | Spearman Rho | 0.478749 | 0.291773 | 0.186976 |
| (all_items sample(1)) | Spearman Rho | 0.495227 | 0.455432 | 0.039795 |
| (bootstrap_items sample(5)) | Spearman Rho | 0.497833 | 0.292022 | 0.205811 |
| (bootstrap_items sample(1)) | Spearman Rho | 0.503192 | 0.44878 | 0.054412 |
| (bootstrap_items first_element) | Spearman Rho | 0.405459 | 0.440566 | -0.035107 |
| (bootstrap_items all) | Spearman Rho | 0.415004 | 0.291148 | 0.123856 |
| (all_items sample(5)) | EMD Agg | 0.4512 | 0.492407 | -0.041207 |
| (all_items sample(1)) | EMD Agg | 0.4608 | 0.479153 | -0.018353 |
| (bootstrap_items sample(5)) | EMD Agg | 0.3984 | 0.506423 | -0.108023 |
| (bootstrap_items sample(1)) | EMD Agg | 0.436 | 0.504794 | -0.068794 |
| (bootstrap_items first_element) | EMD Agg | 0.5262 | 0.485954 | 0.040246 |
| (bootstrap_items all) | EMD Agg | 0.349 | 0.488988 | -0.139988 |
| (all_items sample(5)) | EMD Agg (vectorized) | 0.4067 | 0.478072 | -0.071372 |
| (bootstrap_items sample(5)) | EMD Agg (vectorized) | 0.4193 | 0.478072 | -0.058772 |
| (bootstrap_items all) | EMD Agg (vectorized) | 0.3572 | 0.478072 | -0.120872 |
| (all_items sample(5)) | Mean Agg | 0.3623 | 0.358396 | 0.003904 |
| (bootstrap_items sample(5)) | Mean Agg | 0.3967 | 0.358396 | 0.038304 |
| (bootstrap_items all) | Mean Agg | 0.2297 | 0.358396 | -0.128696 |
| (all_items sample(5)) | COS (vectorized) | 0.427134 | 0.431173 | -0.004039 |
| (all_items sample(1)) | COS (vectorized) | 0.464628 | 0.477367 | -0.012739 |
| (bootstrap_items sample(5)) | COS (vectorized) | 0.430813 | 0.433462 | -0.002649 |
| (bootstrap_items sample(1)) | COS (vectorized) | 0.473743 | 0.464818 | 0.008925 |
| (bootstrap_items first_element) | COS | 0.349589 | 0.465382 | -0.115793 |
| (bootstrap_items all) | COS (vectorized) | 0.441221 | 0.440026 | 0.001195 |
Table 11: Full results on the toxicity data for εB = 0.05
| Sampling method | Metric | Estimated p | True p | diff |
|---------------------------------|------------------------|---------------|----------|-----------|
| (all_items sample(5)) | MAE | 0.339637 | 0.013932 | 0.325705 |
| (all_items sample(1)) | MAE | 0.495166 | 0.275734 | 0.219432 |
| (bootstrap_items sample(5)) | MAE | 0.345923 | 0.013927 | 0.331996 |
| (bootstrap_items sample(1)) | MAE | 0.474302 | 0.290926 | 0.183376 |
| (bootstrap_items first_element) | MAE | 0.471536 | 0.286101 | 0.185435 |
| (bootstrap_items all) | MAE | 0.147494 | 0.013921 | 0.133573 |
| (all_items sample(5)) | Wins(MAE) | 0.369515 | 0.034306 | 0.335209 |
| (all_items sample(1)) | Wins(MAE) | 0.473703 | 0.235071 | 0.238632 |
| (bootstrap_items sample(5)) | Wins(MAE) | 0.368552 | 0.03339 | 0.335162 |
| (bootstrap_items sample(1)) | Wins(MAE) | 0.461407 | 0.238133 | 0.223274 |
| (bootstrap_items first_element) | Wins(MAE) | 0.483026 | 0.242806 | 0.240220 |
| (bootstrap_items all) | Wins(MAE) | 0.164385 | 0.033239 | 0.131146 |
| (all_items sample(5)) | Wins(MAE) (vectorized) | 0.394516 | 0.061719 | 0.332797 |
| (bootstrap_items sample(5)) | Wins(MAE) (vectorized) | 0.398901 | 0.057616 | 0.341285 |
| (bootstrap_items all) | Wins(MAE) (vectorized) | 0.300014 | 0.065973 | 0.234041 |
| (all_items sample(5)) | MSE | 0.357053 | 0.058006 | 0.299047 |
| (all_items sample(1)) | MSE | 0.475893 | 0.359268 | 0.116625 |
| (bootstrap_items sample(5)) | MSE | 0.365982 | 0.058006 | 0.307976 |
| (bootstrap_items sample(1)) | MSE | 0.490422 | 0.386699 | 0.103723 |
| (bootstrap_items first_element) | MSE | 0.417344 | 0.371051 | 0.046293 |
| (bootstrap_items all) | MSE | 0.150775 | 0.058006 | 0.092769 |
| (all_items sample(5)) | Spearman Rho | 0.229324 | 0.054669 | 0.174655 |
| (all_items sample(1)) | Spearman Rho | 0.467556 | 0.345706 | 0.121850 |
| (bootstrap_items sample(5)) | Spearman Rho | 0.238225 | 0.054456 | 0.183769 |
| (bootstrap_items sample(1)) | Spearman Rho | 0.456305 | 0.374117 | 0.082188 |
| (bootstrap_items first_element) | Spearman Rho | 0.374624 | 0.351033 | 0.023591 |
| (bootstrap_items all) | Spearman Rho | 0.017202 | 0.054292 | -0.037090 |
| (all_items sample(5)) | EMD Agg | 0.4446 | 0.438747 | 0.005853 |
| (all_items sample(1)) | EMD Agg | 0.4008 | 0.49448 | -0.093680 |
| (bootstrap_items sample(5)) | EMD Agg | 0.4805 | 0.441111 | 0.039389 |
| (bootstrap_items sample(1)) | EMD Agg | 0.3422 | 0.476702 | -0.134502 |
| (bootstrap_items first_element) | EMD Agg | 0.228 | 0.469601 | -0.241601 |
| (bootstrap_items all) | EMD Agg | 0.4349 | 0.440939 | -0.006039 |
| (all_items sample(5)) | EMD Agg (vectorized) | 0.2881 | 0.388563 | -0.100463 |
| (bootstrap_items sample(5)) | EMD Agg (vectorized) | 0.3477 | 0.388563 | -0.040863 |
| (bootstrap_items all) | EMD Agg (vectorized) | 0.3013 | 0.388563 | -0.087263 |
| (all_items sample(5)) | Mean Agg | 0.3243 | 0.18099 | 0.143310 |
| (bootstrap_items sample(5)) | Mean Agg | 0.3476 | 0.18099 | 0.166610 |
| (bootstrap_items all) | Mean Agg | 0.4127 | 0.18099 | 0.231710 |
| (all_items sample(5)) | COS (vectorized) | 0.396048 | 0.319219 | 0.076829 |
| (all_items sample(1)) | COS (vectorized) | 0.436377 | 0.407283 | 0.029094 |
| (bootstrap_items sample(5)) | COS (vectorized) | 0.411232 | 0.317353 | 0.093879 |
| (bootstrap_items sample(1)) | COS (vectorized) | 0.471161 | 0.44151 | 0.029651 |
| (bootstrap_items first_element) | COS | 0.351366 | 0.41475 | -0.063384 |
| (bootstrap_items all) | COS (vectorized) | 0.23454 | 0.320495 | -0.085955 |
Table 12: Full results on the toxicity data for εB = 0.1
| Sampling method | Metric | Estimated p | True p | diff |
|---------------------------------|------------------------|---------------|----------|-----------|
| (all_items sample(5)) | MAE | 0 | 0 | 0.000000 |
| (all_items sample(1)) | MAE | 0.034235 | 0.000458 | 0.033777 |
| (bootstrap_items sample(5)) | MAE | 0 | 0 | 0.000000 |
| (bootstrap_items sample(1)) | MAE | 0.029396 | 0.000589 | 0.028807 |
| (bootstrap_items first_element) | MAE | 0.077095 | 0.000665 | 0.076430 |
| (bootstrap_items all) | MAE | 0 | 0 | 0.000000 |
| (all_items sample(5)) | Wins(MAE) | 1.50E-05 | 0 | 0.000015 |
| (all_items sample(1)) | Wins(MAE) | 0.010578 | 0.000207 | 0.010371 |
| (bootstrap_items sample(5)) | Wins(MAE) | 6.30E-05 | 0 | 0.000063 |
| (bootstrap_items sample(1)) | Wins(MAE) | 0.009861 | 0.000277 | 0.009584 |
| (bootstrap_items first_element) | Wins(MAE) | 0.029022 | 0.000549 | 0.028473 |
| (bootstrap_items all) | Wins(MAE) | 0 | 0 | 0.000000 |
| (all_items sample(5)) | Wins(MAE) (vectorized) | 0 | 0 | 0.000000 |
| (bootstrap_items sample(5)) | Wins(MAE) (vectorized) | 0 | 0 | 0.000000 |
| (bootstrap_items all) | Wins(MAE) (vectorized) | 0 | 0 | 0.000000 |
| (all_items sample(5)) | MSE | 0 | 0 | 0.000000 |
| (all_items sample(1)) | MSE | 0.114515 | 0.007642 | 0.106873 |
| (bootstrap_items sample(5)) | MSE | 9.70E-05 | 0 | 0.000097 |
| (bootstrap_items sample(1)) | MSE | 0.102692 | 0.007427 | 0.095265 |
| (bootstrap_items first_element) | MSE | 0.187849 | 0.007277 | 0.180572 |
| (bootstrap_items all) | MSE | 0 | 0 | 0.000000 |
| (all_items sample(5)) | Spearman Rho | 0 | 0 | 0.000000 |
| (all_items sample(1)) | Spearman Rho | 0.072887 | 0.029437 | 0.043450 |
| (bootstrap_items sample(5)) | Spearman Rho | 0 | 0 | 0.000000 |
| (bootstrap_items sample(1)) | Spearman Rho | 0.069153 | 0.028034 | 0.041119 |
| (bootstrap_items first_element) | Spearman Rho | 0.075912 | 0.033092 | 0.042820 |
| (bootstrap_items all) | Spearman Rho | 0 | 0 | 0.000000 |
| (all_items sample(5)) | EMD Agg | 0.0647 | 0.010304 | 0.054396 |
| (all_items sample(1)) | EMD Agg | 0.3256 | 0.146758 | 0.178842 |
| (bootstrap_items sample(5)) | EMD Agg | 0.1355 | 0.011847 | 0.123653 |
| (bootstrap_items sample(1)) | EMD Agg | 0.3838 | 0.14794 | 0.235860 |
| (bootstrap_items first_element) | EMD Agg | 0.1294 | 0.147411 | -0.018011 |
| (bootstrap_items all) | EMD Agg | 0.3004 | 0.011109 | 0.289291 |
| (all_items sample(5)) | EMD Agg (vectorized) | 0.2419 | 0.003602 | 0.238298 |
| (bootstrap_items sample(5)) | EMD Agg (vectorized) | 0.1899 | 0.003602 | 0.186298 |
| (bootstrap_items all) | EMD Agg (vectorized) | 0.1645 | 0.003602 | 0.160898 |
| (all_items sample(5)) | Mean Agg | 0.1394 | 0.001862 | 0.137538 |
| (bootstrap_items sample(5)) | Mean Agg | 0.1537 | 0.001862 | 0.151838 |
| (bootstrap_items all) | Mean Agg | 0.0462 | 0.001862 | 0.044338 |
| (all_items sample(5)) | COS (vectorized) | 0.12329 | 0.002952 | 0.120338 |
| (all_items sample(1)) | COS (vectorized) | 0.306477 | 0.106948 | 0.199529 |
| (bootstrap_items sample(5)) | COS (vectorized) | 0.166159 | 0.003684 | 0.162475 |
| (bootstrap_items sample(1)) | COS (vectorized) | 0.29751 | 0.092561 | 0.204949 |
| (bootstrap_items first_element) | COS | 0.412021 | 0.100953 | 0.311068 |
| (bootstrap_items all) | COS (vectorized) | 0.125169 | 0.003615 | 0.121554 |
| Sampling method | Metric | Estimated p | True p | diff |
|---------------------------------|------------------------|---------------|----------|-----------|
| (all_items sample(5)) | MAE | 0 | 0 | 0.000000 |
| (all_items sample(1)) | MAE | 0 | 0 | 0.000000 |
| (bootstrap_items sample(5)) | MAE | 0 | 0 | 0.000000 |
| (bootstrap_items sample(1)) | MAE | 0 | 0 | 0.000000 |
| (bootstrap_items first_element) | MAE | 0 | 0 | 0.000000 |
| (bootstrap_items all) | MAE | 0 | 0 | 0.000000 |
| (all_items sample(5)) | Wins(MAE) | 0 | 0 | 0.000000 |
| (all_items sample(1)) | Wins(MAE) | 0 | 0 | 0.000000 |
| (bootstrap_items sample(5)) | Wins(MAE) | 0 | 0 | 0.000000 |
| (bootstrap_items sample(1)) | Wins(MAE) | 0 | 0 | 0.000000 |
| (bootstrap_items first_element) | Wins(MAE) | 0 | 0 | 0.000000 |
| (bootstrap_items all) | Wins(MAE) | 0 | 0 | 0.000000 |
| (all_items sample(5)) | Wins(MAE) (vectorized) | 0 | 0 | 0.000000 |
| (bootstrap_items sample(5)) | Wins(MAE) (vectorized) | 0 | 0 | 0.000000 |
| (bootstrap_items all) | Wins(MAE) (vectorized) | 0 | 0 | 0.000000 |
| (all_items sample(5)) | MSE | 0 | 0 | 0.000000 |
| (all_items sample(1)) | MSE | 0 | 0 | 0.000000 |
| (bootstrap_items sample(5)) | MSE | 0 | 0 | 0.000000 |
| (bootstrap_items sample(1)) | MSE | 0 | 0 | 0.000000 |
| (bootstrap_items first_element) | MSE | 2.30E-05 | 0 | 0.000023 |
| (bootstrap_items all) | MSE | 0 | 0 | 0.000000 |
| (all_items sample(5)) | Spearman Rho | 0 | 0 | 0.000000 |
| (all_items sample(1)) | Spearman Rho | 3.30E-05 | 2.60E-05 | 0.000007 |
| (bootstrap_items sample(5)) | Spearman Rho | 0 | 0 | 0.000000 |
| (bootstrap_items sample(1)) | Spearman Rho | 8.40E-05 | 1.50E-05 | 0.000069 |
| (bootstrap_items first_element) | Spearman Rho | 0.002207 | 1.20E-05 | 0.002195 |
| (bootstrap_items all) | Spearman Rho | 0 | 0 | 0.000000 |
| (all_items sample(5)) | EMD Agg | 0.0055 | 0 | 0.005500 |
| (all_items sample(1)) | EMD Agg | 0.3862 | 1.80E-05 | 0.386182 |
| (bootstrap_items sample(5)) | EMD Agg | 0.0301 | 0 | 0.030100 |
| (bootstrap_items sample(1)) | EMD Agg | 0.3843 | 0.001 | 0.383300 |
| (bootstrap_items first_element) | EMD Agg | 0.0522 | 0.000999 | 0.051201 |
| (bootstrap_items all) | EMD Agg | 0.0008 | 0 | 0.000800 |
| (all_items sample(5)) | EMD Agg (vectorized) | 0.3511 | 0.000816 | 0.350284 |
| (bootstrap_items sample(5)) | EMD Agg (vectorized) | 0.3889 | 0.000816 | 0.388084 |
| (bootstrap_items all) | EMD Agg (vectorized) | 0.2746 | 0.000816 | 0.273784 |
| (all_items sample(5)) | Mean Agg | 0.0104 | 1.00E-06 | 0.010399 |
| (bootstrap_items sample(5)) | Mean Agg | 0.0255 | 1.00E-06 | 0.025499 |
| (bootstrap_items all) | Mean Agg | 0 | 1.00E-06 | -0.000001 |
| (all_items sample(5)) | COS (vectorized) | 0 | 0 | 0.000000 |
| (all_items sample(1)) | COS (vectorized) | 0.006554 | 0.00048 | 0.006074 |
| (bootstrap_items sample(5)) | COS (vectorized) | 0 | 0 | 0.000000 |
| (bootstrap_items sample(1)) | COS (vectorized) | 0.010388 | 0.000377 | 0.010011 |
| (bootstrap_items first_element) | COS | 0.09628 | 0.000329 | 0.095951 |
| (bootstrap_items all) | COS (vectorized) | 5.10E-05 | 0 | 0.000051 |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract, Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 6
✓ B1. Did you cite the creators of artifacts you used?
Left blank.
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 6
## C ✓ **Did You Run Computational Experiments?** Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4 C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
tang-hardmeier-2023-parallel | Parallel Data Helps Neural Entity Coreference Resolution | https://aclanthology.org/2023.findings-acl.197 | Coreference resolution is the task of finding expressions that refer to the same entity in a text. Coreference models are generally trained on monolingual annotated data but annotating coreference is expensive and challenging. Hardmeier et al. (2013) have shown that parallel data contains latent anaphoric knowledge, but it has not been explored in end-to-end neural models yet. In this paper, we propose a simple yet effective model to exploit coreference knowledge from parallel data. In addition to the conventional modules learning coreference from annotations, we introduce an unsupervised module to capture cross-lingual coreference knowledge. Our proposed cross-lingual model achieves consistent improvements, up to 1.74 percentage points, on the OntoNotes 5.0 English dataset using 9 different synthetic parallel datasets. These experimental results confirm that parallel data can provide additional coreference knowledge which is beneficial to coreference resolution tasks. | # Parallel Data Helps Neural Entity Coreference Resolution
Gongbo Tang Beijing Language and Culture University [email protected] Christian Hardmeier IT University of Copenhagen Uppsala University [email protected]
## Abstract
Coreference resolution is the task of finding expressions that refer to the same entity in a text. Coreference models are generally trained on monolingual annotated data but annotating coreference is expensive and challenging. Hardmeier et al. (2013) have shown that parallel data contains latent anaphoric knowledge, but it has not been explored in end-to-end neural models yet. In this paper, we propose a simple yet effective model to exploit coreference knowledge from parallel data. In addition to the conventional modules learning coreference from annotations, we introduce an unsupervised module to capture cross-lingual coreference knowledge. Our proposed cross-lingual model achieves consistent improvements, up to 1.74 percentage points, on the OntoNotes 5.0 English dataset using 9 different synthetic parallel datasets. These experimental results confirm that parallel data can provide additional coreference knowledge which is beneficial to coreference resolution tasks.
## 1 Introduction
Coreference resolution is the task of finding expressions, called mentions, that refer to the same entity in a text. Current neural coreference models are trained on monolingual annotated data, and their performance heavily relies on the amount of annotations (Lee et al., 2017, 2018; Joshi et al., 2019, 2020). Annotating such coreference information is challenging and expensive. Thus, annotation data is a bottleneck in neural coreference resolution.
Hardmeier et al. (2013) have explored parallel data in an unsupervised way and shown that parallel data has latent cross-lingual anaphoric knowledge. Figure 1 shows a coreference chain in an English–
Chinese parallel sentence pair. "ACL 2023", "it" in the English sentence, and "ACL 2023", "它"(it)
in the Chinese sentence are coreferential to each other. Compared to the two separate monolingual coreferential pairs: <ACL 2023, it>, <ACL 2023,
- [ACL 2023] is a top-tier NLP conference, [it] is coming.
- [ACL 2023] 是NLP领域的一个顶会,[它]即将召开。
Figure 1: A coreference chain in an English–Chinese parallel sentence pair. Mentions in brackets are coreferential to each other. Links in blue are monolingual and dashed liks in orange are cross-lingual.
它>, there are four more cross-lingual coreferential pairs <it, ACL 2023>, <it, 它>, <ACL 2023, ACL
2023>, <ACL 2023, 它> in this parallel sentence pair. This cross-lingual coreference chain suggests that parallel multilingual data can provide extra coreferential knowledge compared to monolingual data which could be useful for training coreference models.
Parallel data has been applied to project coreference annotations in non-neural coreference models (de Souza and Orasan ˘ , 2011; Rahman and Ng, 2012; Martins, 2015; Grishina and Stede, 2015; Novák et al., 2017; Grishina and Stede, 2017). Instead, we focus on neural coreference models and ask the following main research question: Can parallel data advance the performance of coreference resolution on English, where a relatively large amount of annotations are available?
We propose a cross-lingual model which exploits cross-lingual coreference knowledge from parallel data. Our model is based on the most popular neural coreference model (Lee et al., 2018), which consists of an encoder, a mention span scorer, and a coreference scorer. We extend these three modules, which are applied to the source-side data, with a target-side encoder and adapters for the mention span scorer and the coreference scorer, allowing these to resolve cross-lingual coreference. As there is no annotated cross-lingual coreference data, the model computes the coreference scores between target spans and source spans without any supervision. We conduct experiments on the most popular OntoNotes 5.0 English dataset (Pradhan et al.,
2012). Given the English data, we generate 9 different synthetic parallel datasets with the help of pretrained neural machine translation (NMT) models. The target languages consist of Arabic, Catalan, Chinese, Dutch, French, German, Italian, Russian, and Spanish. The experimental results show that our cross-lingual models achieve consistent improvements, which confirms that parallel data helps neural entity coreference resolution.
## 2 Related Work
Lee et al. (2017) first propose end-to-end neural coreference models (*neural-coref*) and achieve better performance on the OntoNotes English dataset compared to previous models. Most current neural coreference models are based on *neural-coref* and replace the statistic word embeddings used by Lee et al. (2017) with contextualized word embeddings from ELMo (Peters et al., 2018), BERT (Devlin et al., 2019), SpanBERT (Joshi et al., 2020), etc.
(Lee et al., 2018; Joshi et al., 2019, 2020).
Neural-coref only models the relation between pairs of mentions. Many studies propose to consider entity-level information while predicting clusters (Lee et al., 2018; Kantor and Globerson, 2019; Xu and Choi, 2020). However, Xu and Choi (2020)
find that these models considering higher-order inference are not significantly better or even worse.
Instead, the observed differences can be explained by the powerful performance of SpanBERT.
Because these models are expensive in terms of memory and time, especially when using higherdimensional representations. Xia et al. (2020) and Toshniwal et al. (2020) propose models that only keep a limited number of entities in the memory, without much performance drop. Kirstain et al.
(2021) introduce a *start-to-end* model where the model computes mention and antecedent scores only through bilinear functions of span boundary representations. To cope with the enormous number of spans, Dobrovolskii (2021) proposes a wordlevel coreference model, where the model first considers the coreference links between single words, and then reconstruct the word spans.
All these models are trained on monolingual coreference annotations. In this paper, we introduce a simple model building on the top of neural-coref, which exploits cross-lingual coreference from parallel data in an unsupervised way.
![1_image_0.png](1_image_0.png)
## 3 Coreference Models 3.1 **Neural-Coref**
Most neural coreference models are variants of neural-coref (Lee et al., 2017), whose structure is illustrated in Figure 2 (a). It consists of a text encoder, a mention scorer, and a coreference scorer.
The final coreference clusters are predicted based on the scores of these modules.
Given a document, the encoder first generates representations for each token. Then the model creates a list of spans, varying the span width.1 Each span representation is the concatenation of 1) the first token representation, 2) the last token representation, 3) the span head representation, and 4) the feature vector, where the span head representation is learned by an attention mechanism (Bahdanau et al., 2015) and the feature vector encodes the size of the span. Then the mention scorer, a feedforward neural network, assigns a score to each span. Afterwards, the coreference scorer computes how likely it is that a mention refers to each of the preceding mentions.
During training, given a span i, the model predicts a set of possible antecedents Y =
{ϵ, 1*, . . . , i* − 1}, a dummy antecedent ϵ and preceding spans. The model generates a probability distribution P(yi) over antecedents for the span i, as shown in Equation 1 below. s(*i, j*) denotes the coreference score between span pair i and j.
The coreference loss is the marginal log-likelihood of the correct antecedents. During inference, the 1The number of generated spans is decided by hyperparameters, i.e., the maximum width of a span, the ratio of entire span space, the maximum number of spans.
model first recognizes potential antecedents for each mention, then it predicts the final coreference clusters. More specifically, given a mention, the model considers the preceding mention with the highest coreference score as the antecedent.
$ P(y_i)=\frac{e^{s(i,y_i)}}{\sum_{y'\in\mathcal{Y}(i)}e^{s(i,y')}}$ (1) - I know!Model.
$\mathbf{a}$
## 3.2 Cross-Lingual Model
We hypothesize that parallel data can provide additional coreference information which benefits learning coreference. As there is no supervision to the target-side and cross-lingual modelling, we attempt to transfer the source-side learned parameters to the target-side unsupervised modules by adding additional adapters, which has been shown efficient and effective (Houlsby et al., 2019). Therefore, we extend *neural-coref* by introducing a target-side encoder, adapters for target-side mention scorer, and cross-lingual coreference scorer, where each adapter is a one-layer feed-forward neural network with 500 hidden nodes. The overview of our crosslingual model is shown in Figure 2 (b).
For the target-side, we can use a shared crosslingual encoder or a target-side monolingual encoder. The coreference scorer computes coreference scores between target-side spans and sourceside spans. This is the key component to learn cross-lingual coreference knowledge. The strategy we follow is the same as that in *neural-coref* during inference: Given a source mention, the target mention with the highest coreference score is considered as the corresponding cross-lingual antecedent.
This component serves to capture latent coreference information. During training, as source-side modules are shared across languages, source-side parameters are jointly updated when optimizing the cross-lingual coreference loss.
There is no specific range for antecedents in the cross-lingual setting. Thus, we introduce a restriction to target-side antecedents, where the crosslingual antecedent's position number in the target sentence should not surpass the source mention's position number in the source sentence more than 50. This pruning can make the model more efficient and effective.
Say the model has predicted a source mention list Ms: {ms1
, ms2
, . . . , msm} and a target mention list Mt: {mt1
, mt2
, . . . , mtn }. The model has also generated a two-dimensional coreference score matrix, where sij represents the coreference score between msi and mtj
. We denote Y(i) as the possible antecedent set of the source mention i. The cross-lingual coreference loss is defined in Equation 2, where ˆj = arg max j∈Y(i)
sij for a given i.
2
$${\mathcal{L}}_{x}=\sum_{i=1}^{m}e^{-s_{i j}}\qquad\qquad(2)$$
During training, the model learns to minimize both the coreference loss and the cross-lingual coreference loss Lx with a ratio 1 : 1. During inference, we only employ the source-side modules, which are trained with coreference supervision and latent cross-lingual coreference knowledge, to predict coreference clusters.
## 4 Experiments
Due to the page limit, we leave our experimental settings in Appendix A.
## 4.1 Data
We experiment with the OntoNotes 5.0 English dataset. The number of documents for training, development, and test is 2,802, 343, and 348, respectively. The data is originally from newswire, magazines, broadcast news, broadcast conversations, web, conversational speech, and the Bible.
It has been the benchmark dataset for coreference resolution since it is released. The annotation in OntoNotes covers both entities and events, but with a very restricted definition of events. Noun phrases, pronouns, and head of verb phrases are considered as potential mentions. Singleton clusters3are not annotated in OntoNotes.
Given the English data, we use open access pretrained NMT models released by Facebook and the Helsinki NLP group to generate synthetic parallel data (Wu et al., 2019; Ng et al., 2019; Tiedemann and Thottingal, 2020).
The input to monolingual models are the English data and the inputs to cross-lingual models are these parallel data. They have the same amount of data entries. These parallel data have the same coreference annotations as the data fed into monolingual models, the only difference is that the English data is paired with its target translations, and there are no annotations in the translations at all.
2We assume that there should be at least one antecedent on the other side for each mention, either the translation of the mention or a translation of its antecedent. In practice, the quality of synthetic parallel data is not guaranteed which introduces noise. On the other hand, synthetic data may actually be more parallel than natural translations.
3An entity cluster that only contains a single mention.
Data F1mentionMUC B3 CEAFe F1avg ∆ F1 R P F1 R P F1 R P F1
English 85.42 80.31 81.40 80.85 71.31 70.92 71.10 65.81 70.97 68.30 73.42 0
English–Arabic 86.13 81.73 81.80 81.77 72.91 71.77 72.34 67.85 71.53 69.64 74.58 1.16 English–Catalan 86.17 81.38 82.36 81.87 72.55 72.75 72.65 67.77 72.19 69.91 74.81 1.39
English–Chinese 86.02 81.16 82.43 81.78 71.91 72.74 72.32 66.96 72.17 69.47 74.53 1.11 English–Dutch **86.29** 81.53 **82.84 82.18** 72.67 **73.31 72.99 68.36 72.41 70.33 75.16 1.74**
English–French 85.93 81.12 82.15 81.63 72.06 72.36 72.20 67.36 71.31 69.28 74.37 0.95
English–German 86.02 81.86 81.28 81.56 73.06 70.82 71.92 67.42 70.93 69.14 74.20 0.78 English–Italian 86.13 81.71 82.09 81.90 72.82 72.09 72.45 67.73 71.60 69.61 74.65 1.23
English–Russian 86.17 **82.38** 81.31 81.84 **73.75** 70.62 72.15 67.94 71.12 69.49 74.50 1.08
English–Spanish 86.21 81.72 81.88 81.80 72.62 71.88 72.25 67.88 71.11 69.45 74.50 1.08
## 4.2 Experimental Results
Table 1 shows the detailed scores of each model on the OntoNotes 5.0 English test set. Compared to the baseline model, which is trained only on English data, our cross-lingual model trained on different synthetic parallel datasets achieves consistent and statistically significant (t-test, p < 0.05)
improvements, varying from 0.78 to 1.74 percentage points. The model trained on English–Dutch achieves the best F1 performance on coreference resolution. The model trained on English–Russian achieves the best recall score on MUC and B3.
It is interesting to see that the model trained on English–German achieves the least improvement, although German together with Dutch are closer to English compared to other languages. Meanwhile, the models trained on English–Arabic, English–
Chinese, English–Russian obtain moderate improvements, even though Arabic, Chinese, and Russian are more different from English. Given the reported BLEU scores of the pre-trained NMT models, we find that the improvements do not correlate with the quality of generated translations.
In addition to the results on coreference resolution, we also report the mention detection results, which are based on mention scores, i.e., the outputs of mention scorers. Models trained on parallel data are consistently superior to the monolingual model, and the model trained on English–Dutch gets the best F1 score of 86.29. We can tell that models with a higher mention detection F1 score do not always achieve higher coreference F1 score. There is no consistency across different language pairs, so the improvements are not merely from better mention detection performance, namely, memorizing mentions.
As Table 1 shows, our cross-lingual model, which exploits parallel data, is superior to the model trained only on monolingual data. This confirms that parallel data can provide additional coreference knowledge to coreference models, which is beneficial to coreference modelling, even if the parallel data is synthetic and noisy.4
## 5 Analysis 5.1 Unsupervised Cross-Lingual Coreference
To explore whether the unsupervised module can capture cross-lingual coreference information, we check the cross-lingual mention pairs predicted by the cross-lingual coreference scorer.
ParCorFull (Lapshinova-Koltunski et al., 2018)
is an English–German parallel corpus annotated with coreference chains. We first feed the data to the model and let the model predict English– German mention pairs. We go through the these pairs quickly and find that some of these pairs are coreferential, some of these pairs are translation pairs, but most of them are irrelevant. As the coreference chains in English and German are not aligned, we cannot conduct quantitative evaluation.
Alternatively, we evaluate the ability of the model to capture cross-lingual coreference knowledge using a synthetic mention pair set: an English–
English mention pair set. Now we have "aligned" coreference chains, and we can evaluate the mention pairs automatically. Specifically, we first train 4We also conduct preliminary experiments with parallel data from multiple language pairs, concatenating the parallel data of EN-DE, EN-ES, EN-IT, EN-NL, and EN-RU five language pairs. Our proposed cross-lingual model achieves better performance compared to using data from one single language pair, showing the capability of our model to work with multiple parallel data.
a cross-lingual model with English–English synthetic data, and we then feed the OntoNotes English validation set to the model, both the source and target sides, to predict English–English mention pairs.
The model predicts 18,154 pairs in total, including 131 mention pairs that are the same mention, 1,257 mention pairs that are coreferential, and 758 mention pairs with the same surface. This indicates that the model is able to resolve some cross-lingual coreference. However, since the cross-lingual module is trained without any supervision, most of predicted mention pairs are not coreferential.
Table 2 shows some correctly predicted coreferential mention pairs, in English–English and English–German settings. We can tell that our cross-lingual models are not simply generating a pair of two identical mentions, but coreferential mentions as well, which is different from word alignment. These mention pairs support our hypothesis that the cross-lingual model can capture cross-lingual coreference knowledge.
| Source Mentions(English) | Target Mentions(English/German) |
|----------------------------------|-------------------------------------|
| Hong Kong | the city 's |
| It | the Supreme Court |
| he | 28-jähriger Koch (28-Year-Old Chef) |
| The 19-year-old American gymnast | Simone Biles |
## 5.2 Separate Monolingual Encoders
Multilingual pretrained models suffer from the curse of multilinguality which makes them less competitive as monolingual models. Thus, we test the robustness of our model with separate encoders, i.e., we replace the unified cross-lingual encoder
(XLM-R) with two separate monolingual encoders.
The baseline is a monolingual model trained with SpanBERT, and the cross-lingual model is trained with SpanBERT and BERT on source- and targetside text, on the English–German synthetic dataset.
Our experimental results show that models employing SpanBERT perform much better, which is consistent with previous findings by Joshi et al.
(2020). The monolingual model achieves 77.26 F1 score on the OntoNotes 5.0 English test set.
Our cross-lingual model obtains an even higher F1 score, 77.79, which is statistically significant (t-test, p=0.044). Thus, our proposed model is applicable to settings with separate monolingual encoders.
The improvement on SpanBERT is smaller than that on XLM-R. One explanation is that SpanBERT
is already very powerful and parallel data provides less additional knowledge. Another explanation is that the target-side encoder, a BERT model, is much weaker than SpanBERT, which makes it harder to learn the cross-lingual coreference.
## 6 Conclusions And Future Work
In this paper, we introduce a simple yet effective cross-lingual coreference resolution model to learn coreference from synthetic parallel data. Compared to models trained on monolingual data, our crosslingual model achieves consistent improvements, varying from 0.78 to 1.74 percentage points, on the OntoNotes 5.0 English dataset, which confirms that parallel data benefits neural coreference resolution.
We have shown that the unsupervised crosslingual coreference module can learn limited coreference knowledge. In future work, it would be interesting if we can provide the model some aligned cross-lingual coreference knowledge for supervision, to leverage parallel data better.
## Limitations
We expect that our cross-lingual models have learnt some coreference knowledge on the target languages and we conduct experiments on some languages in zero-shot settings. However, we do not get consistent and significant improvements compared to monolingual models. This should be further investigated which potentially helps languages with few or no coreference annotations. Compared to monolingual models, our cross-lingual model improves the source-side coreference resolution but it requires almost two times GPU memory during training. Thus, this model architecture imposes restrictions on using larger pretrained models given limited resources.
## Acknowledgments
We thank all reviewers for their valuable and insightful comments. This project is mainly funded by the Swedish Research Council (grant 2017-930),
under the project Neural Pronoun Models for Machine Translation. GT is also supported by Science Foundation of Beijing Language and Culture University (supported by "the Fundamental Research Funds for the Central Universities") (22YBB36).
We also acknowledge the CSC - IT Center for Science Ltd., for computational resources, with the help of Jörg Tiedemann.
## References
Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In The first international conference on language resources and evaluation workshop on linguistics coreference, volume 1, pages 563–566.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, USA.
José Guilherme Camargo de Souza and Constantin Orasan. 2011. ˘ Can projected chains in parallel corpora help coreference resolution? In *Anaphora Processing and Applications*, pages 59–69, Berlin, Heidelberg. Springer.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Vladimir Dobrovolskii. 2021. Word-level coreference resolution. In *Proceedings of the 2021 Conference* on Empirical Methods in Natural Language Processing, pages 7670–7675, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Yulia Grishina and Manfred Stede. 2015. Knowledgelean projection of coreference chains across languages. In *Proceedings of the Eighth Workshop on* Building and Using Comparable Corpora, pages 14–
22, Beijing, China. Association for Computational Linguistics.
Yulia Grishina and Manfred Stede. 2017. Multi-source annotation projection of coreference chains: assessing strategies and testing opportunities. In Proceedings of the 2nd Workshop on Coreference Resolution Beyond OntoNotes (CORBON 2017), pages 41–50, Valencia, Spain. Association for Computational Linguistics.
Christian Hardmeier, Jörg Tiedemann, and Joakim Nivre. 2013. Latent anaphora resolution for crosslingual pronoun prediction. In *Proceedings of the* 2013 Conference on Empirical Methods in Natural Language Processing, pages 380–391, Seattle, Washington, USA. Association for Computational Linguistics.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019.
Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings
of Machine Learning Research, pages 2790–2799.
PMLR.
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64–77.
Mandar Joshi, Omer Levy, Luke Zettlemoyer, and Daniel Weld. 2019. BERT for coreference resolution: Baselines and analysis. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 5803–5808, Hong Kong, China. Association for Computational Linguistics.
Ben Kantor and Amir Globerson. 2019. Coreference resolution with entity equalization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 673–677, Florence, Italy. Association for Computational Linguistics.
Yuval Kirstain, Ori Ram, and Omer Levy. 2021. Coreference resolution without span representations. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 14–19, Online. Association for Computational Linguistics.
Ekaterina Lapshinova-Koltunski, Christian Hardmeier, and Pauline Krielke. 2018. ParCorFull: a parallel corpus annotated with full coreference. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018),
Miyazaki, Japan. European Language Resources Association (ELRA).
Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188–197, Copenhagen, Denmark. Association for Computational Linguistics.
Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018.
Higher-order coreference resolution with coarse-tofine inference. In *Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)*, pages 687–692, New Orleans, Louisiana. Association for Computational Linguistics.
Xiaoqiang Luo. 2005. On coreference resolution performance metrics. In *Proceedings of Human Language* Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 25–32, Vancouver, British Columbia, Canada. Association for Computational Linguistics.
André F. T. Martins. 2015. Transferring coreference resolvers with posterior regularization. In *Proceedings* of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 1427–1437, Beijing, China. Association for Computational Linguistics.
Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. 2019. Facebook FAIR's WMT19 news translation task submission.
In *Proceedings of the Fourth Conference on Machine* Translation (Volume 2: Shared Task Papers, Day 1), pages 314–319, Florence, Italy. Association for Computational Linguistics.
Michal Novák, Anna Nedoluzhko, and Zdenek Žabokrt- ˇ
ský. 2017. Projection-based coreference resolution using deep syntax. In Proceedings of the 2nd Workshop on Coreference Resolution Beyond OntoNotes
(CORBON 2017), pages 56–64, Valencia, Spain. Association for Computational Linguistics.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics.
Sameer Pradhan, Xiaoqiang Luo, Marta Recasens, Eduard Hovy, Vincent Ng, and Michael Strube. 2014.
Scoring coreference partitions of predicted mentions:
A reference implementation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 30–35, Baltimore, Maryland. Association for Computational Linguistics.
Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In *Joint Conference on* EMNLP and CoNLL - Shared Task, pages 1–40, Jeju Island, Korea. Association for Computational Linguistics.
Altaf Rahman and Vincent Ng. 2012. Translation-based projection for multilingual coreference resolution. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 720–730, Montréal, Canada. Association for Computational Linguistics.
Jörg Tiedemann and Santhosh Thottingal. 2020. OPUSMT - building open translation services for the world.
In *Proceedings of the 22nd Annual Conference of* the European Association for Machine Translation, pages 479–480, Lisboa, Portugal. European Association for Machine Translation.
Shubham Toshniwal, Sam Wiseman, Allyson Ettinger, Karen Livescu, and Kevin Gimpel. 2020. Learning
to ignore: Long document coreference with bounded memory neural networks. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8519–8526, Online. Association for Computational Linguistics.
Marc Vilain, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A modeltheoretic coreference scoring scheme. In Sixth Message Understanding Conference (MUC-6): Proceedings of a Conference Held in Columbia, Maryland, November 6-8, 1995.
Felix Wu, Angela Fan, Alexei Baevski, Yann Dauphin, and Michael Auli. 2019. Pay less attention with lightweight and dynamic convolutions. In *International Conference on Learning Representations*, New Orleans, USA.
Patrick Xia, João Sedoc, and Benjamin Van Durme.
2020. Incremental neural coreference resolution in constant memory. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 8617–8624, Online. Association for Computational Linguistics.
Liyan Xu and Jinho D. Choi. 2020. Revealing the myth of higher-order inference in coreference resolution.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 8527–8533, Online. Association for Computational Linguistics.
## A Experimental Settings
Our experiments are based on the code released by Xu and Choi (2020).5 We keep the original settings and do not do hyper-parameter tuning. As Xu and Choi (2020) have shown that higher-order, cluster-level inference does not further boost the performance on coreference resolution given the powerful text encoders, we do not consider higherorder inference in our experiments. Even though the mention boundaries are provided in the data, we still let the model learn to detect mentions by itself. For evaluation, we follow previous studies and employ the CONLL-2012 official scorer (Pradhan et al., 2014, v8.01)6to compute the F1 scores of three metrics (MUC(Vilain et al., 1995), B3
(Bagga and Baldwin, 1998), *CEAF*e(Luo, 2005))
and report the average F1 score.
Regarding the pretrained NMT models, the English-German/French/Russian models are transformer.wmt19* and *transformer.wmt14.en-fr* from https://github.com/pytorch/
fairseq/blob/main/examples/
5https://github.com/lxucs/coref-hoi 6https://github.com/conll/
reference-coreference-scorers translation/README.md, the NMT
models for other translation directions are opus-mt-en-* or *opus-mt-*en* from https:
//huggingface.co/Helsinki-NLP.
The baseline model is trained on monolingual data while the cross-lingual models are trained on synthetic parallel data. Note that we use the trained monolingual model to initialize the source-side modules of the cross-lingual model. We randomly initialize the parameters of adapters. As we train a unified cross-lingual model, we mainly employ cross-lingual pretrained models, the XLM-R base model, as our encoders, but we also explore using two separate monolingual encoders in Section 5.2.
All the models are trained for 24 epochs with 2 different seeds, and the checkpoint that performs best on the development set is chosen for evaluation.
We only report the average scores. Each model is trained on a single NVIDIA V100 GPU with 32GB
memory.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
in the Limitation section A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Section 3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix A
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix A
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
No response.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
No response.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
wen-etal-2023-towards | Towards Open-Domain {T}witter User Profile Inference | https://aclanthology.org/2023.findings-acl.198 | Twitter user profile inference utilizes information from Twitter to predict user attributes (e.g., occupation, location), which is controversial because of its usefulness for downstream applications and its potential to reveal users{'} privacy. Therefore, it is important for researchers to determine the extent of profiling in a safe environment to facilitate proper use and make the public aware of the potential risks. Contrary to existing approaches on limited attributes, we explore open-domain Twitter user profile inference. We conduct a case study where we collect publicly available WikiData public figure profiles and use diverse WikiData predicates for profile inference. After removing sensitive attributes, our data contains over 150K public figure profiles from WikiData, over 50 different attribute predicates, and over 700K attribute values. We further propose a prompt-based generation method, which can infer values that are implicitly mentioned in the Twitter information. Experimental results show that the generation-based approach can infer more comprehensive user profiles than baseline extraction-based methods, but limitations still remain to be applied for real-world use. We also enclose a detailed ethical statement for our data, potential benefits and risks from this work, and our efforts to mitigate the risks. | # Towards Open-Domain Twitter User Profile Inference
Haoyang Wen∗†, Zhenxin Xiao∗†, Eduard H. Hovy†‡**, Alexander G. Hauptmann**†
†Language Technologies Institute, Carnegie Mellon University
‡School of Computing and Information Systems, The University of Melbourne
{hwen3, zhenxinx, hovy, alex}@cs.cmu.edu
## Abstract
Twitter user profile inference utilizes information from Twitter to predict user attributes (*e.g.*,
occupation, location), which is controversial because of its usefulness for downstream applications and its potential to reveal users' privacy. Therefore, it is important for researchers to determine the extent of profiling in a safe environment to facilitate proper use and make the public aware of the potential risks. Contrary to existing approaches on limited attributes, we explore open-domain Twitter user profile inference. We conduct a case study where we collect publicly available WikiData public figure profiles and use diverse WikiData predicates for profile inference. After removing sensitive attributes, our data contains over 150K
public figure profiles from WikiData, over 50 different attribute predicates, and over 700K
attribute values. We further propose a promptbased generation method, which can infer values that are implicitly mentioned in the Twitter information. Experimental results show that the generation-based approach can infer more comprehensive user profiles than baseline extraction-based methods, but limitations still remain to be applied for real-world use. We also enclose a detailed ethical statement for our data, potential benefits and risks from this work, and our efforts to mitigate the risks. 1
## 1 Introduction
Users' profile information provides invaluable user features. Accurate automatic user profile inference is helpful for downstream applications such as personalized search (Shen et al., 2005; Teevan et al.,
2009; Zhu et al., 2008; Yao et al., 2020) and recommendations (Lu et al., 2015; Balog et al., 2019; Guy, 2015), and computational social media analysis (Arunachalam and Sarkar, 2013; Bamman et al.,
2014; Tang et al., 2015; Amplayo, 2019). However, there are increasing privacy concerns that conducting profiling without appropriate regulations may reveal people's private information. Therefore, it is essential to investigate the extent of profiling to promote proper use and make the potential risks clear to public and policy makers.
Previous work on user profile inference has focused on a very limited set of attributes, and models for different attributes employ different strategies.
One line of research has formulated it as a classification problem for attributes such as gender (Rao et al., 2011; Liu et al., 2012; Liu and Ruths, 2013; Sakaki et al., 2014), age (Rosenthal and McKeown, 2011; Sap et al., 2014; Chen et al., 2015; Fang et al.,
2015; Kim et al., 2017), and political polarity (Rao et al., 2010; Al Zamal et al., 2012; Demszky et al.,
2019). In such classification settings, each attribute has it own ontology or label set, which is difficult to generalize to other attributes, especially for attributes that have many possible candidate values
(*e.g.* geo-location, occupation). In addition, some work involves human annotation, which is expensive to be acquired and may raise fairness questions for labeled individuals (Larson, 2017).
Another line of research uses an extraction-based method, such as graph-based (Qian et al., 2017)
and unsupervised inference (Huang et al., 2016)
for geolocation, distant supervision-based extraction (Li et al., 2014; Qian et al., 2019). However, they still only cover limited attributes that cannot produce comprehensive profiles. Besides, many attribute values are only implicitly mentioned in Twitter context, which cannot be directly extracted.
In this paper, instead of limited attributes, we explore whether open-domain profiles can be effectively inferred. Taking WikiData (Vrandeciˇ c´
and Krötzsch, 2014) as the source of profile information, which provides a much more diverse
![1_image_0.png](1_image_0.png)
Entity ID Q76
Name Barack Obama
Country of citizenshipUnited States of
America
Occupation Politician
Position held President of the
United States
Work location Washington, D.C.
Spouse Michelle Obama ... ...
(a) WikiData information.
Barack Obama
![1_image_1.png](1_image_1.png)
![1_image_2.png](1_image_2.png)
Dad, husband, President, citizen. Washington, DC
Across the country, Americans are standing up for abortion rights—and I'm proud of everyone making their voices heard. Join a march near you:
...
Happy Mother's Day! I hope you all let the moms and mother-figures in your life know how much they mean to you. @MichelleObama, thank you for being a wonderful mother and role model to our daughters and to so many others around the world.
...
(b) Twitter information.
predicate set, we find WikiData profiles that have Twitter accounts. We further collect Twitter information for each account, including their recent tweets and Twitter metadata, and build models to infer profiles from collected Twitter information, which is solely based on publicly available information and does not involve any additional human annotation efforts.
We first follow Li et al. (2014) to use profile information to generate distant supervised instances and build a sequence labeling-based profile extraction model, similar to Qian et al. (2019). In order to allow open-domain inference, we propose to use attribute names as prompts (Lester et al., 2021)
for input sequences to capture the semantics for attribute predicates instead of involving attribute names into the tag set. However, the extraction approach requires that answers must appear in the Twitter context, which ignores some implicit text clues. Therefore, we further propose a promptbased generation method (Raffel et al., 2020) to infer user profiles, which can additionally produce values that are not straightforwardly mentioned in the Twitter information.
Our statistics show that only a limited number of WikiData attribute values can be directly extracted from Twitter information. Our experiments demonstrate a significant improvement when using the generation-based approach compared to the extraction-based approach, indicating that performing inference instead of pure extraction will be able to obtain more information from tweets. Further analysis shows that the improvement comes mainly from the power of combining extraction and inference on information not explicitly mentioned. However, we still find several challenges and limitations for the model to be applied for realworld use, including performances of low-resource attributes, distributional variances between celebrities and normal people, and spurious generation.
Our contributions are summarized as follows:
- To the best of our knowledge, this is the first work to explore open-domain Twitter user profiles.
- We create a new dataset for user profile inference from WikiData, providing with rich and accurate off-the-shelf profile information that can facilitate future social analysis research.
- We propose a prompt-based generation-based method for user profile inference that provides a unified view to infer different attributes.
## 2 Problem Definition And Dataset
In this section, we first define the open-domain user profile inference and then describe the dataset collection in detail.
## 2.1 Problem Formulation
The ultimate goal of user profile inference is to infer certain attribute value given the Twitter information of a user. In Twitter, as shown in Figure 1b, we mainly use the collection of recent Twitter tweets from a user u to represent Twitter information, which we denote as
$\mathbf{X}_{\text{twect}},u=\left[\mathbf{2}_{\text{twect}},u,1,\ldots,\mathbf{2}_{\text{twect}},u,n_{\text{twect}},u\right]$
where each xtweet*,u,i* represents a sequence from a single tweet. In addition, we also concatenate the user's publicly available Twitter metadata (username, display name, bio and location) into a single sequence as complementary user information
| Category | # |
|----------------------------------|------------|
| # predicates | 58 |
| # average examples / predicate | 12,238 |
| # average candidates / predicate | 1,179 |
| # average tokens / answer | 1.99 |
| # tweets | 13,570,664 |
| # average words per tweet | 15.3 |
| # users (train) | 106,699 |
| # users (dev) | 15,243 |
| # users (test) | 30,486 |
Table 1: Statistics of our collected data from WikiData and Twitter.
xuser,u. The final input from Twitter is the combination of user metadata and recent tweets
$${\mathrm{:}},u;\left[x_{\mathrm{user},u}\right]\right].$$
## Xu = [Xtweet,U; [Xuser,U]] .
We then assume that user profiles follow the keyvalue representation
$R_{u}=\{(p_{u,1},v_{u,1}),\ldots,(p_{u,n_{r},u},v_{u,n_{r},u})\}$
where each pair (pu,i, vu,i) represents the predicate and value of an attribute. Figure 1a shows an example key-value profile obtained from WikiData.
The model for open-domain user profile inference is to infer the value v of an attribute p from an user u given their Twitter information and a specific attribute predicate with parameter θ
## F(Xu, P; Θ) = V. 2.2 Dataset Creation
Our dataset consists of WikiData public figure profiles and corresponding Twitter information. An example of paired WikiData profile and Twitter information is shown in Figure 1. We first discuss the collection of WikiData profiles and then discuss the collection of Twitter information.
WikiData processing. WikiData is a structural knowledge base, which can be easily queried with database such as MongoDB2 using its dump3. It contains rich encyclopedia information, including information for public figures. Each WikiData entity consists of multiple properties and corresponding claims, which can be considered as the predicate value pairs as shown in Figure 1a4.
First, we use WikiData to filter entities that are persons with Twitter accounts. This can be done by checking whether each entity contains the propertyclaim pair "instance of" (P31) "human" (Q5) and then checking whether the entity includes the property "Twitter username" (P2002). Then we extract the account of those filtered persons using the claim
(value) of property "Twitter username" (P2002). If there are multiple claims, we use the first only.
Next, for each entity we check all its properties to build the person's profile. In Figure 1a, as an example, we can see that the property "occupation" is "politician". For each property and claim, we only consider their text information, and we use English information only. If there are multiple claims for a property, we use the first one. We drop all properties that do not have an English name for either predicate or value, or properties that do not contain any claims.
Since WikiData profiles usually contain many noisy properties that are not suitable (*e.g.*, blood type) for Twitter user profile inference, we clean the data by 1) filtering extremely low-frequency properties; 2) manually selecting some meaningful and discriminative properties and 3) removing sensitive personal information listed in the Twitter Developer Agreement and Policy, such as political affiliation, ethnic group, religion, and sex or gender5. Please refer to Appendix B for the complete list of properties that we use.
Twitter processing. We collect publicly available Twitter information for users that we gather from WikiData, as shown in Figure 1b. The Twitter information consists of the user's at most 100 recent publicly available tweets, as well as their metadata that includes username, display name, bio (a short description that a user can edit in their profile) and location. We remove all web links and hashtags from those tweets.
Statistics. We collect more than 168k public figures from Wikidata and filter out users whose Twitter accounts are no longer accessible. We obtain about 152K users with 13 million tweets in total.
We randomly split the users into train, development 4Please refer to https://www.mediawiki.org/
wiki/Wikibase/DataModel for further details of Wikibase DataModel.
5https://developer.twitter.com/en/dev eloper-terms/agreement-and-policy
| Category | Our | Li et al. | Fang et al. |
|--------------|--------|-------------|---------------|
| Data | (2014) | (2015) | |
| # predicates | 58 | 3 | 6 |
| # users | 152K | 10.6K | 2.5K |
| # values | 709K | 10.6K | 15K |
| # tweets | 13M | 39M | 846K |
![3_image_0.png](3_image_0.png)
and test sets by 7:1:2. The detailed statistics are shown in Table 1. We compare it with previous work such as Li et al. (2014) and Fang et al. (2015),
demonstrated in Table 2. We find that our dataset contains much more diverse predicates compared to Li et al. (2014) and Fang et al. (2015). We also have a much larger number of users and attribute values compared to the previous work. Although Li et al. (2014) contains more tweets than ours, they only consider the extraction setting, and most of the tweets in their datasets are negative samples.
Long tail distribution of predicates. As shown in Figure 2, the number of examples per predicate follows a long tail distribution. Only a few predicates have many training examples, while most appear only partially in the user's entity list. This raises a huge challenge for us to develop a good model to utilize and transfer the knowledge from rich-resource predicates to low-resource predicates.
We discuss the details in the following section.
| . . . | On | behalf | of | the | United |
|---------|------|-----------|------|---------|----------|
| O | O | O | O | O | B |
| Nations | , | Secretary | - | General | . . . |
| I | O | O | O | O | O |
## 3 Methods
In this section, we discuss our methods for opendomain Twitter user profile inference. First, we introduce an extraction-based method that largely follows the principle from Li et al. (2014) and Qian et al. (2019). Then we discuss our proposed prompt-based generation approach that provides a unified view to infer different attribute values, and can further infer values that do not appear in the Twitter context.
## 3.1 Extraction-Based Method
We follow Li et al. (2014) and Qian et al. (2019) to generate distantly supervised training instances for user profile extraction. Since our problem is open domain, we propose using attribute predicates as prompts in input sequences and perform sequence labeling over them. This method can be divided into three steps: label generation, modeling, and result aggregation.
Label generation. Distant supervised labeling assumes that if a user u's profile contains attribute value v, we can find mentions in their Twitter information expressing the value.
Specifically, we consider each sequence xiin Xu independently. For each attribute predicatevalue pair (pj , vj ) in u's profile, we construct a tag sequence ti,pj for xi and the predicate pj . For a span [xb*, . . . , x*e] that matches vj , we make
$$t_{i,p_{j},b}=\mathbb{B},$$ $$t_{i,p_{j},b+1}=\ldots=t_{i,p_{j},e}=\mathbb{I}.$$
If a position k does not match the value, then ti,pj ,k = O. For simplicity, we use exact string matching between vj and spans in the sequence.
An example tag sequence is shown in Figure 3.
Modeling. Sequence labeling tasks usually include the label name in the tag set (*e.g.* B-PER for the beginning of a mention representing a person; Lample et al.,2016). In the open-domain profile inference setting, we have numerous attributes and
![4_image_0.png](4_image_0.png)
many of them have only a few instances as shown in Figure 2, which are not sufficient to be considered as separate tag labels.
Therefore, we propose to use prompt-guided sequence labeling, where we append the attribute predicate p to the front of the sequence as the prompt as follows:
[CLS] p [SEP] xi Then we perform sequence labeling on the second part of the input xi using the generated labels. We use RoBERTa (Liu et al., 2019) as the backbone encoder, and we denote the last hidden states of xi by H = [h1*, . . . ,* hn] where n represents the length of xi. The probability of predicted labels is
$$\therefore p,k\ \parallel$$
P(ti,p,k | xi, p) = softmax (Whhk + bh) ∈ R
3, where k represent the position in xi.
During training, we randomly drop negative instances that do not contain any B labels to keep the positive-negative sample ratio steady.
Result aggregation. During inference, for each user, we first perform sequence labeling on every sequence predicate pair exhaustively. Then we aggregate sequence-level labeling results into userlevel results. For each attribute predicate, we select the span that has the largest averaged logit as the final answer.
## 3.2 Generation-Based Method
Extraction-based methods suffer from the fact that attribute values must appear in the Twitter context.
Instead with user profile inference, it is very likely that we cannot directly find those values in the context and therefore need to infer them using implicit evidence. To address this issue, we propose to use the conditional generation method, which has been shown to be effective in both extracting input information (Raffel et al., 2020; Li et al., 2021)
and performing inference and summarization (See et al., 2017; Alshomary et al., 2020). The overall framework is illustrated in Figure 4.
Modeling. We use T5 (Raffel et al., 2020), a generative transformer based model, to directly generate the answer given the predicate. Similar to the extraction-based method, to address the longtail distribution problem we use the attribute predicate as prompt at the beginning of the input sequence, which can capture rich semantics of those open-domain attribute predicates, especially when the attribute predicate lacks examples in the data.
Specifically, the input is the concatenation of prefix predicate (*e.g.* predicate:occupation),
user's Twitter metadata, and the sequence of tweets that the user has recently published. We train the model to generate the attribute value (y1*, . . . , y*n)
by minimizing the cross-entropy loss:
$${\mathcal{L}}_{C E}=-\frac{1}{n}\sum_{i=1}^{n}\log p(\mathbf{y}_{i}|\mathbf{y}_{<i},\mathbf{x}),$$
$\mathbf{v}=\mathbf{v}$.
where x is the input to the model and n represents the length of the output sequence.
Since we have at most 100 recent tweets of each user whose total length normally exceeds the limit of the model, we use sliding window and divide recent tweets organized in chronological order into different windows where each window can represent information within a time range. Then we train the model on these divided examples separately.
Each example contains the same prefix predicate and Twitter metadata but uses different parts of the tweets to infer the attribute value.
Result aggregation. During inference, we use the same sliding window strategy and divide the input into different examples to make predictions independently. Then, similar to the extraction-based method, we aggregate those window-level predictions into a user-level prediction. We count the occurrences of each predicted text for a predicate and then use majority vote to find the aggregated result of that predicate.
| Development Set | Test Set | | | | | |
|-------------------|------------|--------|-------|-----------|--------|-------|
| Model | Precision | Recall | F1 | Precision | Recall | F1 |
| Random | 0.22 | 0.22 | 0.22 | 0.23 | 0.23 | 0.23 |
| Majority | 14.56 | 14.56 | 14.56 | 14.19 | 14.19 | 14.19 |
| Extraction | 18.36 | 9.69 | 12.69 | 18.39 | 9.80 | 12.79 |
| Generation | 59.05 | 43.71 | 50.23 | 58.73 | 43.40 | 49.92 |
Result filtering. The generation-based method aggressively generates output without estimating whether the generated output is spurious. Therefore, it is important to filter those incorrect predictions during inference.
After result aggregation, we first take the product of probability for each generated token as the score for each aggregated prediction, and then use the averaged score over all aggregated predictions as the confidence score for the aggregated result.
A low confidence score indicates that the model cannot determine whether the prediction is valid.
For each predicate, we search the best threshold and set predictions with confidence scores lower than threshold as "no prediction". We consider all predicted confidence scores from the development set as candidate thresholds and choose the threshold that yields the best performance on the development set. The best searched threshold is then directly applied to filter results on the test set.
## 4 Experiments
In this section, we conduct experiments on our constructed dataset and user profile extraction dataset (Li et al., 2014). Then we provide a qualitative analysis and discuss the remaining challenges.
## 4.1 Experimental Setup
We use roberta-base6as the base model for the extraction-based model, as it demonstrates its effectiveness on multiple sequence labeling tasks.
We use t5-small7for the generation-based model, which has much fewer parameters than roberta-base. Please refer to Appendix A
for a detailed hyperparameter setup and estimated training and inference time.
Evaluation metric. We choose user-level F1 as our evaluation metric. Specifically, we suppose
## 6https://huggingface.co/roperta-base-7https://huggingface.co/t5-small
Random 0.26 0.26 0.26
Majority 4.77 4.77 4.77
Extraction 72.14 **71.47** 71.80
Generation **77.64** 68.60 **72.84**
| Model | Precision | Recall | F1 |
|------------|-------------|----------|-------|
| Random | 0.26 | 0.26 | 0.26 |
| Majority | 4.77 | 4.77 | 4.77 |
| Extraction | 72.14 | 71.47 | 71.80 |
a user profile consists n different attributes. We use C(·) to represent the count of different types of output. C(no prediction) refers to the count of
"no predictions" and C(correct prediction) refers to the count of predictions that match the WikiData profile. Then we obtain the user-level F1 as follow:
$$\begin{array}{c}{{\mathrm{precision}=\frac{C(\mathrm{correct~prediction})}{n-C(\mathrm{no~prediction})}}}\\ {{\mathrm{recall}=\frac{C(\mathrm{correct~prediction})}{n}}}\\ {{F_{1}=2\frac{\mathrm{precision}\cdot\mathrm{recall}}{\mathrm{precision}+\mathrm{recall}}}}\end{array}$$
We consider the prediction go be valid when it identically matches the ground truth. We do not use entity-level or tag-level F1 as Qian et al.
(2019) because it is not applicable to the generation model. We do not use generation-based metrics (*e.g.*, BLEU) because we observe that most predictions are very short. In addition, compared to no prediction, we want to penalize wrong predictions more. In F1, the basis of precision does not include "no prediction" results from models while it still has a penalty for wrong predictions.
## 4.2 Results 4.2.1 Results On User Profile Inference
The main results are shown in Table 3. The random result means that predictions are uniformly
| Category | EDUCATION | JOB | | | | |
|------------|-------------|-------|-----------|--------|-------|-------|
| Precision | Recall | F1 | Precision | Recall | F1 | |
| GraphIE | 92.87 | 79.74 | 85.77 | 76.03 | 61.01 | 67.66 |
| Generation | 94.28 | 91.40 | 92.82 | 78.97 | 65.78 | 71.76 |
randomly selected and the majority means that predictions are selected with the values that occur most frequently in the training set. We find that both simple methods perform poorly. Overall, we find that our generation-based method significantly outperforms other methods by a large margin. We also find that the extraction-based method cannot even outperform the majority baseline. The reason is that the majority vote can achieve relatively high accuracy on attributes that have a relatively small number of candidates, or one specific candidate takes a large portion of the data, while we cannot find corresponding occurrences of some of those attributes in the Twitter context.
To verify the above claim, we perform another test on a subset of the test set data, for which we can find corresponding occurrences of attribute values in the Twitter context. We find that only 13.56% of the test data can find those value occurrences, which indicates that the majority of the data cannot be directly extracted from Twitter context. The results are shown in Table 4. By comparing the results with overall results, we can find that both extraction and generation systems can get better performance on the subset that we can find occurrences of attribute values. We find that the extraction method performs quite closely to the generation-based method in this setting, though the generation-based method performs better on precision and F1 and the extraction-based method better on recall. This result indicates that when attribute values occur in Twitter context, the extraction model can effectively extract them, while the generation-based method can additionally infer values that are not included in the Twitter content.
## 4.2.2 Results On User Profile Extraction
We conduct additional experiments on the profile extraction dataset from Li et al. (2014), where we can provide a direct comparison between our generation-based model and previous work. We follow the same preprocessing as Qian et al. (2019)
| Model | Precision | Recall | F1 |
|--------------|-------------|----------|-------|
| Our model | 59.05 | 43.71 | 50.23 |
| -threshold | 45.95 | 45.95 | 45.95 |
| -aggregation | 57.39 | 43.35 | 49.39 |
| -metadata | 53.59 | 40.45 | 46.10 |
on EDUCATION and JOB. We make two changes to our generation-based model for this dataset. 1)
This dataset does not contain a timestamp for each tweet, so we use each tweet as an independent sample instead of the sliding window strategy. 2) This dataset is designed for extraction, so for tweets from which the answers cannot be extracted we train the generation model to output "no prediction".
The experiment results are shown in Table 5. We compare with GraphIE (Qian et al., 2019), one of the state-of-the-art model on this dataset. We reproduce the results from their script8and re-evaluate on user-level with majority vote. We use the averaged results over 5-fold cross validation as Qian et al. (2019). The results show that our model can significantly outperform GraphIE on both EDUCA-TION and JOB attributes, which indicates that even if the attributes are limited, the generation-based method can still achieve promising performance.
## 4.3 Ablation Study
We conduct an ablation study on two of our components, result filtering and result aggregation, on our profile inference data, as shown in Table 6. We find that result filtering can successfully filter spurious results by improving over 13% on precision, while only dropping about 2% on recall. We also find that result aggregation improves both precision and recall, indicating that we can obtain better 8https://github.com/thomas0809/GraphIE
![7_image_0.png](7_image_0.png)
inference by using a larger Twitter context. Twitter metadata also provides rich information about the user's background. We train and evaluate another model without Twitter metadata, and find that we see a significant performance drop. But we still find that many attributes inferred by the model are not dependent on those metadata.
## 4.4 Qualitative Analysis
Figure 5 demonstrates four window-level predictions from generation-based model with relevant input context. The first case shows that the model can directly copy relevant information from context.
The second and third cases show that the model can infer the information based on the context. The last case shows an error that the model does not fully utilize the information provided by "wrestle" and generates incorrect information, possibly affected by the other word "show". This case indicates the importantce of background information for a specific attribute value.
## 4.5 Remaining Challenges
Although achieving improvement on open-domain attribute inference, we still find that the model's performance on attributes with low training samples is generally much lower than on attributes with rich samples. It is still under investigation for better generalization on these low-resource attributes.
WikiData provides rich profiles for many Twitter users. However, the distribution of these Twitter users with WikiData profiles may not align with the need for downstream tasks. For example, most people with WikiData profiles are celebrities, such as politicians and athletes, which lacks information for general occupations such as farm worker.
The granularity of prediction results is also another important directions to investigate. We observe some cases that the prediction and the grountruth are in different levels of granularity. For example, the groundtruth can be "Tokyo" while the prediction may be "Japan". Therefore, it is also important to address this issue with both better modeling as well as evaluation.
We consider that the model can predict all collected attribute values because we have manually selected meaningful and discriminative properties from WikiData during dataset construction. However, it is still possible that a specific property value cannot be detected well based on Twitter content, leading to spurious generation output. For example, if a user is a medical doctor but did not discuss any medical information on Twitter, the occupation is very hard to predict. It is still important to further investigate this "cannot predict" cases in both dataset construction and model design.
## 5 Related Work
User Profile Inference. One line of user modeling research focuses on profile inference or extraction. Previous work on user profile inference focuses on some specific attributes such as gender (Rao et al., 2011; Liu et al., 2012; Liu and Ruths, 2013; Sakaki et al., 2014), age (Rosenthal and McKeown, 2011; Sap et al., 2014; Chen et al.,
2015; Fang et al., 2015; Kim et al., 2017), and political polarity (Rao et al., 2010; Al Zamal et al.,
2012; Demszky et al., 2019). They often consider them as multi-class classification problems. Most of these methods use the context of those social media posts. Alternatively, user name and profile in social media (Liu et al., 2012; Liu and Ruths, 2013), part-of-speech and dependency features (Rosenthal and McKeown, 2011), users' social circles (Chen et al., 2015) and photos (Fang et al., 2015) have been explored as additional important features for different attribute inference. But those classification settings have a pre-defined ontology or label set, which is difficult to extend to other attributes.
In addition to classification-based methods, there are also graph-based (Qian et al., 2017), distant supervision-based and unsupervised extraction (Huang et al., 2016). Compared to the classification method, extraction-based methods are capable of identifying attributes with a large ontology. But they rely on entities from the context as candidates, which limits the scope of the attributes that occur frequently in the social media context.
Our open-domain Twitter user profile inference uses a larger predicate set and data than previous work. We further propose the generation-based approach, which addresses the limited scope.
Another line of user modeling research focuses on leveraging behavior signals (Kobsa, 2001; Abel et al., 2013) or building implicit user representations (Islam and Goldwasser, 2021, 2022), which is more distantly related to our problem.
Sociolinguistic variation. The intuition of inferring user attributes from their posts aligns with sociolinguistic variation in which people investigate whether a linguistic variation can be attributed to different social variables (Labov, 1963). Computational efforts to discover these relationships include demographic dialectal variation (Blodgett et al.,
2016), geographical variation (Eisenstein et al.,
2010; Nguyen and Eisenstein, 2017), syntactic or stylistic variation over age and gender (Johannsen et al., 2015), socio-economic status (Flekova et al., 2016; Basile et al., 2019).
## 6 Conclusion
In this paper, we first explore open-domain Twitter user profile inference. We use the combination of WikiData and Twitter information to create a largescale dataset. We propose to use a generation-based method with attributes as prompts and compare it with the extraction-based method. The result shows that the generation-based method can significantly outperform the extraction-based method on opendomain profile inference, with the ability to perform both direct extraction and indirect inference.
Our further analysis still finds some of the errors and remaining challenges of the generation-based method, such as degraded performances for lowresource attributes and spurious generation, which reveals the limits of our current generation-based user profile inference model.
## Limitations
Besides the technical challenges discussed in Section 4.4-4.5, limitations of this work also include the issue of data imbalances that some attributes may have imbalance distributions. For example, we may find significantly more profiles with the country of citizenship as United States than any other countries, which may have a negative impact on generalization, especially when the distributions of training and inference diverge. Similarly, the distributional variances discussed in Section 4.5 indicate that the prediction results for non-celebrity distributions should be carefully adjudicated. The degraded performances on low-resource attributes also indicate that the prediction results may be unreliable when performing inference on attributes without enough training data.
In this paper, we assume that the attributes are already given. However, many WikiData attributes are not applicable to everyone. For example, attributes such as "position played on team" may be specific to athletes. Therefore, it is also important to investigate how to automatically detect applicable attributes for certain users.
In this work, we use at most 100 recent tweets and aggressively create training and inference examples between each attribute and those tweets.
Since we use sliding window on the collected tweets, involving more tweets in training or inference may significantly increase the time cost.
## Ethics Statement
The goal of this paper is to extend Twitter user profile inference from limited attributes to the open domain. We hope that this work will help to illustrate how people express their attributes both explicitly but especially also implicitly through their social media posts. We also believe that the NLP community has to produce detailed information about the potential, pitfalls, and basic limitations of profile inference methods so that we can establish standards to facilitate proper use of these technologies, as well as be vigilant and effective at combating nefarious applications.
Data and model biases. To mitigate potential distributional biases, we exhaustively collect entities from WikiData without selecting certain groups of users. However, we acknowledge that the collective information may still contain unintentional social biases. As an example, one of the potential issues is that people who have WikiData profiles are public figures, which may not reflect the actual distribution over general populations (*e.g.*, occupation). Besides, as in Abid et al. (2021), large language models themselves may contain biases.
WikiData is constantly edited by a large number of WikiData contributors and maintainers. Although we try to make our study as representative as possible, it is possible that a statement from WikiData may not reflect the preception from certain groups or individual (Shenoy et al., 2022). We would like stakeholders to be aware of these issues and we urge stakeholders to first investigate the effect of potential issues before drawing any conclusions for any individual or social group using this work.
Proper use v.s. improper use. The major difference between proper use and improper use is whether the use case follows necessary legal and ethical regulations or framework. For example, Williams et al. (2017) propose an ethical framework based on users' consent to conduct Twitter social research. If the information is not publicly available, one must obtain consent. Opt-out consent can be used when the information is not sensitive, otherwise opt-in consent is required. With proper regulations, this work can be used to enhance personalized user experience, investigate what stakeholders to know to effectively protect personal information.
Sensitivity of personal information. In this work we follow Twitter Developer Agreement and Policy and remove sensitive personal information. But it is still possible to infer sensitive information indirectly. For example, "candidacy in election" may be possibly used to infer political affiliation although the affiliations are generally public for those people. Similarly, personal pronouns, widely present in tweets, may also be used to infer gender. Furthermore, combinations of various sources might allow personal identification (Sweeney, 2000a,b). Even though we do not use private information in our work, based on our results, we speculate that there are unobserved risks of privacy loss for using Twitter. Therefore, We ask that future work should fully comply with regulations and any non-public or private results should be properly protected (Kreuter et al., 2022).
We have set up the following protocol to ensure the proper use and to prevent adverse impact:
- We believe that increasing the transparency of the pipeline can help prevent potential social harm.
We plan to release all necessary resources for research reproduction purposes so that others can audit and verify it and prevent overestimation of the model. We also provide a complete list of attributes in Table 7 to increase the transparency.
We are open to all further explorations that can prevent unintended impacts.
- Our constructed dataset for profile inference research is drawn solely from publicly available WikiData and Twitter, where the ethical consideration should be similar to other work using encyclopedia resources such as (Sun and Peng, 2021).
Furthermore, according to WikiData: Oversight, non-public personal information are monitored and removed by Wikidata. According to WikiData Term of Use, we can freely reuse and build upon on WikiData. According to the Twitter Developer Agreement and Policy, we will only release IDs instead of actual content for noncommercial research purposes from academic institutions.
- To ensure the proper use of this work, we will not release the data via a publicly available access point. Instead, we will release the data based on individual request and we will ask for consent that 1) requesters are from research institutions 2) they will follow all the regulations when using our work 3) they will not use the model to infer non-public users unless obtained proper consent from those users.
## References
Fabian Abel, Qi Gao, Geert-Jan Houben, and Ke Tao.
2013. Twitter-based user modeling for news recommendations. In *Twenty-Third International Joint* Conference on Artificial Intelligence. Citeseer.
Abubakar Abid, Maheen Farooqi, and James Zou. 2021.
Large language models associate muslims with violence. *Nature Machine Intelligence*, 3(6):461–463.
Faiyaz Al Zamal, Wendy Liu, and Derek Ruths. 2012.
Homophily and latent attribute inference: Inferring latent attributes of twitter users from neighbors. In Proceedings of the International AAAI Conference on Web and Social Media, volume 6, pages 387–390.
Milad Alshomary, Shahbaz Syed, Martin Potthast, and Henning Wachsmuth. 2020. Target inference in argument conclusion generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4334–4345, Online. Association for Computational Linguistics.
Reinald Kim Amplayo. 2019. Rethinking attribute representation and injection for sentiment classification.
In *Proceedings of the 2019 Conference on Empirical* Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5602–
5613, Hong Kong, China. Association for Computational Linguistics.
Ravi Arunachalam and Sandipan Sarkar. 2013. The new eye of government: Citizen sentiment analysis in social media. In Proceedings of the IJCNLP 2013 Workshop on Natural Language Processing for Social Media (SocialNLP), pages 23–28, Nagoya, Japan.
Asian Federation of Natural Language Processing.
Krisztian Balog, Filip Radlinski, and Shushan Arakelyan. 2019. Transparent, scrutable and explainable user models for personalized recommendation.
In Proceedings of the 42nd International ACM SIGIR
Conference on Research and Development in Information Retrieval, SIGIR'19, page 265–274, New York, NY, USA. Association for Computing Machinery.
David Bamman, Jacob Eisenstein, and Tyler Schnoebelen. 2014. Gender identity and lexical variation in social media. *Journal of Sociolinguistics*, 18(2):135–
160.
Angelo Basile, Albert Gatt, and Malvina Nissim. 2019.
You write like you eat: Stylistic variation as a predictor of social stratification. In *Proceedings of the* 57th Annual Meeting of the Association for Computational Linguistics, pages 2583–2593, Florence, Italy.
Association for Computational Linguistics.
Su Lin Blodgett, Lisa Green, and Brendan O'Connor.
2016. Demographic dialectal variation in social media: A case study of African-American English.
In *Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing*, pages 1119–1130, Austin, Texas. Association for Computational Linguistics.
Xin Chen, Yu Wang, Eugene Agichtein, and Fusheng Wang. 2015. A comparative study of demographic attribute inference in twitter. In *Proceedings of the* International AAAI Conference on Web and Social Media, volume 9, pages 590–593.
Dorottya Demszky, Nikhil Garg, Rob Voigt, James Zou, Jesse Shapiro, Matthew Gentzkow, and Dan Jurafsky. 2019. Analyzing polarization in social media:
Method and application to tweets on 21 mass shootings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2970–
3005, Minneapolis, Minnesota. Association for Computational Linguistics.
Jacob Eisenstein, Brendan O'Connor, Noah A. Smith, and Eric P. Xing. 2010. A latent variable model for geographic lexical variation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1277–1287, Cambridge, MA. Association for Computational Linguistics.
Quan Fang, Jitao Sang, Changsheng Xu, and M. Shamim Hossain. 2015. Relational user attribute inference in social media. *IEEE Transactions on* Multimedia, 17(7):1031–1044.
Lucie Flekova, Daniel Preo¸tiuc-Pietro, and Lyle Ungar.
2016. Exploring stylistic variation with age and income on Twitter. In *Proceedings of the 54th Annual* Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 313–319, Berlin, Germany. Association for Computational Linguistics.
Ido Guy. 2015. The role of user location in personalized search and recommendation. In Proceedings of the 9th ACM Conference on Recommender Systems, RecSys '15, page 236, New York, NY, USA. Association for Computing Machinery.
Chao Huang, Dong Wang, Shenglong Zhu, and Daniel Yue Zhang. 2016. Towards unsupervised home location inference from online social media.
In *2016 IEEE International Conference on Big Data*
(Big Data), pages 676–685.
Tunazzina Islam and Dan Goldwasser. 2021. Analysis of twitter users' lifestyle choices using joint embedding model. In *Proceedings of the Fifteenth International AAAI Conference on Web and Social Media,*
ICWSM 2021, held virtually, June 7-10, 2021, pages 242–253. AAAI Press.
Tunazzina Islam and Dan Goldwasser. 2022. Twitter user representation using weakly supervised graph embedding. In *Proceedings of the Sixteenth International AAAI Conference on Web and Social Media,*
ICWSM 2022, Atlanta, Georgia, USA, June 6-9, 2022, pages 358–369. AAAI Press.
Anders Johannsen, Dirk Hovy, and Anders Søgaard.
2015. Cross-lingual syntactic variation over age and gender. In *Proceedings of the Nineteenth Conference on Computational Natural Language Learning*,
pages 103–112, Beijing, China. Association for Computational Linguistics.
Sunghwan Mac Kim, Qiongkai Xu, Lizhen Qu, Stephen Wan, and Cécile Paris. 2017. Demographic inference on Twitter using recursive neural networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2:
Short Papers), pages 471–477, Vancouver, Canada.
Association for Computational Linguistics.
Alfred Kobsa. 2001. Generic user modeling systems. *User modeling and user-adapted interaction*,
11(1):49–63.
Anne Kreuter, Kai Sassenberg, and Roman Klinger.
2022. Items from psychometric tests as training data for personality profiling models of Twitter users. In Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis, pages 315–323, Dublin, Ireland. Association for Computational Linguistics.
William Labov. 1963. The social motivation of a sound change. *Word*, 19(3):273–309.
Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016.
Neural architectures for named entity recognition.
In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270, San Diego, California. Association for Computational Linguistics.
Brian Larson. 2017. Gender as a variable in naturallanguage processing: Ethical considerations. In *Proceedings of the First ACL Workshop on Ethics in* Natural Language Processing, pages 1–11, Valencia, Spain. Association for Computational Linguistics.
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021.
The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045–3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Jiwei Li, Alan Ritter, and Eduard Hovy. 2014. Weakly supervised user profile extraction from Twitter. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 165–174, Baltimore, Maryland.
Association for Computational Linguistics.
Sha Li, Heng Ji, and Jiawei Han. 2021. Document-level event argument extraction by conditional generation. In *Proceedings of the 2021 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 894–908, Online. Association for Computational Linguistics.
Wendy Liu and Derek Ruths. 2013. What's in a name?
using first names as features for gender inference in twitter. In *2013 AAAI Spring Symposium Series*.
Wendy Liu, Faiyaz Zamal, and Derek Ruths. 2012. Using social media to infer gender composition of commuter populations. In *Proceedings of the International AAAI Conference on Web and Social Media*,
volume 6, pages 26–29.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Kai Lu, Yi Zhang, Lanbo Zhang, and Shuxin Wang.
2015. Exploiting user and business attributes for personalized business recommendation. In *Proceedings* of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '15, page 891–894, New York, NY, USA. Association for Computing Machinery.
Dong Nguyen and Jacob Eisenstein. 2017. A kernel independence test for geographical language variation.
Computational Linguistics, 43(3):567–592.
Yujie Qian, Enrico Santus, Zhijing Jin, Jiang Guo, and Regina Barzilay. 2019. GraphIE: A graph-based framework for information extraction. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 751–761, Minneapolis, Minnesota. Association for Computational Linguistics.
Yujie Qian, Jie Tang, Zhilin Yang, Binxuan Huang, Wei Wei, and Kathleen M Carley. 2017. A probabilistic framework for location inference from social media. arXiv preprint arXiv:1702.07281.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67.
Delip Rao, Michael Paul, Clay Fink, David Yarowsky, Timothy Oates, and Glen Coppersmith. 2011. Hierarchical bayesian models for latent attribute detection in social media. In *Proceedings of the International AAAI Conference on Web and Social Media*,
volume 5, pages 598–601.
Delip Rao, David Yarowsky, Abhishek Shreevats, and Manaswi Gupta. 2010. Classifying latent user attributes in twitter. In *Proceedings of the 2nd International Workshop on Search and Mining UserGenerated Contents*, SMUC '10, page 37–44, New York, NY, USA. Association for Computing Machinery.
Sara Rosenthal and Kathleen McKeown. 2011. Age prediction in blogs: A study of style, content, and online behavior in pre- and post-social media generations.
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 763–772, Portland, Oregon, USA. Association for Computational Linguistics.
Shigeyuki Sakaki, Yasuhide Miura, Xiaojun Ma, Keigo Hattori, and Tomoko Ohkuma. 2014. Twitter user gender inference using combined analysis of text and image processing. In *Proceedings of the Third Workshop on Vision and Language*, pages 54–61, Dublin, Ireland. Dublin City University and the Association for Computational Linguistics.
Maarten Sap, Gregory Park, Johannes Eichstaedt, Margaret Kern, David Stillwell, Michal Kosinski, Lyle Ungar, and Hansen Andrew Schwartz. 2014. Developing age and gender predictive lexica over social media. In *Proceedings of the 2014 Conference on* Empirical Methods in Natural Language Processing
(EMNLP), pages 1146–1151, Doha, Qatar. Association for Computational Linguistics.
Abigail See, Peter J. Liu, and Christopher D. Manning.
2017. Get to the point: Summarization with pointergenerator networks. In *Proceedings of the 55th Annual Meeting of the Association for Computational* Linguistics (Volume 1: Long Papers), pages 1073–
1083, Vancouver, Canada. Association for Computational Linguistics.
Xuehua Shen, Bin Tan, and ChengXiang Zhai. 2005.
Implicit user modeling for personalized search. In Proceedings of the 14th ACM International Conference on Information and Knowledge Management, CIKM '05, page 824–831, New York, NY, USA. Association for Computing Machinery.
Kartik Shenoy, Filip Ilievski, Daniel Garijo, Daniel Schwabe, and Pedro Szekely. 2022. A study of the quality of wikidata. *Journal of Web Semantics*,
72:100679.
Jiao Sun and Nanyun Peng. 2021. Men are elected, women are married: Events gender bias on Wikipedia.
In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th* International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 350–360, Online. Association for Computational Linguistics.
Latanya Sweeney. 2000a. Simple demographics often identify people uniquely. *LIDAP-WP4, 2000*.
Latanya Sweeney. 2000b. Uniqueness of simple demographics in the u.s. population. *LIDAP-WP4, 2000*.
Duyu Tang, Bing Qin, and Ting Liu. 2015. Learning semantic representations of users and products for document level sentiment classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 1014–1023, Beijing, China. Association for Computational Linguistics.
Jaime Teevan, Meredith Ringel Morris, and Steve Bush.
2009. Discovering and using groups to improve personalized search. In Proceedings of the Second ACM
International Conference on Web Search and Data Mining, WSDM '09, page 15–24, New York, NY,
USA. Association for Computing Machinery.
Denny Vrandeciˇ c and Markus Krötzsch. 2014. ´ Wikidata: A free collaborative knowledgebase. Commun.
ACM, 57(10):78–85.
Matthew L Williams, Pete Burnap, and Luke Sloan.
2017. Towards an ethical framework for publishing twitter data in social research: Taking into account users' views, online context and algorithmic estimation. *Sociology*, 51(6):1149–1168. PMID:
29276313.
Jing Yao, Zhicheng Dou, and Ji-Rong Wen. 2020. Employing Personal Word Embeddings for Personalized Search, page 1359–1368. Association for Computing Machinery, New York, NY, USA.
Yangbo Zhu, Jamie Callan, and Jaime Carbonell. 2008.
The impact of history length on personalized search.
In Proceedings of the 31st Annual International ACM
SIGIR Conference on Research and Development in Information Retrieval, SIGIR '08, page 715–716, New York, NY, USA. Association for Computing Machinery.
## A Detailed Experiment Setup
We use two Nvidia GeForce RTX 3090 GPUs as our computing infrastructure.
Extraction-based method setup. We finetune the model with 10 epochs using AdamW. The learning rate is 5e-5 using the linear scheduler without warmup. The batch size is 128. The hidden size for classification is 768. The positive-negative sample ratio is 1:5. We use tag-level F1 as Qian et al.
(2019) to select the best results on the development set efficiently for a single run. The training time is about 16 hours and inference on test set is about 5 hours.
Generation-based method setup. We fine-tune the model on all sliding window examples for 5 epochs using AdamW. The learning rate is 1e-4 using linear scheduler with no warmup. The batch size is 96. We use gradient clipping with max norm 3 to increase stability during training. We use sliding windows with size 512 and stride 128. We use greedy search during inference. We use exact match to select the best results on development set efficiently for a single run. The training time is about 40 hours and inference on test set is about 3 hours.
## B Attribute Descriptions
We provide the descriptions of each attribute from Wikidata in Table 7 to facilitate the understanding of attributes and mitigate the potential impact from dataset biases.
| ID | Attribute | Description | |
|-------------------|--------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------|
| P106 | occupation | occupation of a person; see also "field of work" (Property:P101), "position held" (Property:P39) | |
| P27 | country of citizenship | the object is a country that recognizes the subject as its citizen | |
| P19 | place of birth | most specific known (e.g. city instead of country, or hospital instead of city) birth location of a person, animal or fictional character | |
| P69 | educated at | educational institution attended by subject | |
| P1412 | languages spoken, | language(s) that a person or a people speaks, writes or signs, including the native language(s) | |
| written or signed | | | |
| P641 | sport | sport that the subject participates or participated in or is associated with | |
| P108 | employer | person or organization for which the subject works or worked | |
| P39 | position held | subject currently or formerly holds the object position or public office | |
| P1303 | instrument | musical instrument that a person plays or teaches or used in a music occupation | |
| P54 | member of sports | sports teams or clubs that the subject represents or represented | |
| team | | | |
| P166 | award received | award or recognition received by a person, organisation or creative work | |
| P413 | position played on | position or specialism of a player on a team | |
| team / speciality | | | |
| P551 | residence | the place where the person is or has been, resident | |
| P1344 | participant in | event in which a person or organization was/is a participant; inverse of P710 or P1923 | |
| P103 | native language | language or languages a person has learned from early childhood | |
| P937 | work location | location where persons or organisations were actively participating in employment, business or other work | |
| P3602 | candidacy in election | election where the subject is a candidate | |
| P463 | member of | organization, club or musical group to which the subject belongs. Do not use for membership in ethnic or social groups, nor for holding a political position, such as a member of parliament (use P39 for that). | |
| P101 | field of work | specialization of a person, organization, or of the work created by such a specialist; see P106 for the occupation | |
| P118 | league | league in which team or player plays or has played in | |
| P2094 | competition class | official classification by a regulating body under which the subject (events, teams, participants, or equipment) qualifies for inclusion | |
| P512 | academic degree | academic degree that the person holds | |
| P2416 | sports | discipline | discipline an athlete competed in within a sport |
| competed in | | | |
| P1411 | nominated for | award nomination received by a person, organisation or creative work (inspired from "award received" (Property:P166)) | |
| P361 | part of | object of which the subject is a part (if this subject is already part of object A which is a part of object B, then please only make the subject part of object A). Inverse property of "has part" (P527, see also "has parts of the class" (P2670)). | |
| P6886 | writing language | language in which the writer has written their work | |
| P6553 | personal pronoun | personal pronoun(s) this person goes by | |
| P241 | military branch | branch to which this military unit, award, office, or person belongs, e.g. Royal Navy | |
| P410 | military rank | military rank achieved by a person (should usually have a "start time" qualifier), or military rank associated with a position Continue on the next page Table 7: Attribute Description | |
| ID | Attribute | Description | |
|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------|
| P2348 | time period | time period (historic period or era, sports season, theatre season, legislative period etc.) in which the subject occurred | |
| P710 | participant | person, group of people or organization (object) that actively takes/took part in an event or process (subject). Preferably qualify with "object has role" (P3831). Use P1923 for participants that are teams. | |
| P1576 | lifestyle | typical way of life of an individual, group, or culture | |
| P2650 | interested in | item of special or vested interest to this person or organisation | |
| P740 | location of formation | location where a group or organization was formed | |
| P859 | sponsor | organization or individual that sponsors this item | |
| P812 | academic major | major someone studied at college/university | |
| P8413 | academic appointment | this person has been appointed to a role within the given higher education institution or department; distinct from employment or affiliation person who has been a member of a crew associated with the vessel or spacecraft. For | |
| crew of | spacecraft, inverse of crew member (P1029), backup or reserve team or crew (P3015) | | |
| P803 | professorship | professorship position held by this academic person | |
| P66 | ancestral home | place of origin for ancestors of subject | |
| P112 | founded by | founder or co-founder of this organization, religion or place | |
| P3828 | wears | clothing or accessory worn on subject's body lieu d'origine/Heimatort/luogo d'origine of a Swiss national. Not be confused with place | |
| (Switzerland) | of birth or place of residence | | |
| P495 | country of origin | country of origin of this item (creative work, food, phrase, product, etc.) | |
| P276 | location | location of the object, structure or event. | In the case of an administrative entity as |
| containing item use P131. For statistical entities use P8138. In the case of a geographic entity use P706. Use P7153 for locations associated with the object. country or region where a person has the legal status of permanent resident | | | |
| of | | | |
| P1429 | has pet | pet that a person owns | |
| P263 | official residence | the residence at which heads of government and other senior figures officially reside | |
| P1268 | represents | organization, individual, or concept that an entity represents | |
| P3716 | social | classifica | |
| tion | social class as recognized in traditional or state law | | |
| P17 | country | sovereign state of this item (not to be used for human beings) | |
| P488 | chairperson | presiding member of an organization, group or body | |
| P7779 | military unit | smallest military unit that a person is/was in | |
| P1716 | brand | commercial brand associated with the item | |
| P6 | head | of | govern |
| ment | head of the executive power of this town, city, municipality, state, country, or other governmental body | | |
| P159 | headquarters location | city, where an organization's headquarters is or has been situated. Use P276 qualifier for specific building | |
| P8047 | country of registry | country where a ship is or has been registered Table 7: Attribute Description | |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
4.5 Limitations and Challenges Ethics Statement
✓ A2. Did you discuss any potential risks of your work?
Ethics Statement
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract 1 Introduction
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 1 Introduction 2 Problem Definition And Dataset
✓ B1. Did you cite the creators of artifacts you used?
1 Introduction
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
2.2 Dataset Creation Ethics Statement
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Ethics Statement
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Ethics Statement
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
2.2 Dataset Creation Ethics Statement
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
2.2 Dataset Creation
## C ✓ **Did You Run Computational Experiments?** 4 Experiments
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
4.1 Experiment Setup Appendix A Detailed Experiment Setup The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
3.2 Generation-based method 4.1 Experiment Setup Appendix A Detailed Experiment Setup
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Appendix A Detailed Experiment Setup
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
3.2 Generation-based method 4.1 Experiment Setup D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
zhuang-riloff-2023-eliciting | Eliciting Affective Events from Language Models by Multiple View Co-prompting | https://aclanthology.org/2023.findings-acl.199 | Prior research on affective event classification showed that exploiting weakly labeled data for training can improve model performance. In this work, we propose a simpler and more effective approach for generating training data by automatically acquiring and labeling affective events with Multiple View Co-prompting, which leverages two language model prompts that provide independent views of an event. The approach starts with a modest amount of gold data and prompts pre-trained language models to generate new events. Next, information about the probable affective polarity of each event is collected from two complementary language model prompts and jointly used to assign polarity labels. Experimental results on two datasets show that the newly acquired events improve a state-of-the-art affective event classifier. We also present analyses which show that using multiple views produces polarity labels of higher quality than either view on its own. | # Eliciting Affective Events From Language Models By Multiple View Co-Prompting
Yuan Zhuang and **Ellen Riloff**
Kahlert School of Computing University of Utah Salt Lake City, UT 84112
{yyzhuang, riloff}@cs.utah.edu
## Abstract
Prior research on affective event classification showed that exploiting weakly labeled data for training can improve model performance. In this work, we propose a simpler and more effective approach for generating training data by automatically acquiring and labeling affective events with *Multiple View Co-prompting*,
which leverages two language model prompts that provide independent views of an event.
The approach starts with a modest amount of gold data and prompts pre-trained language models to generate new events. Next, information about the probable affective polarity of each event is collected from two complementary language model prompts and jointly used to assign polarity labels. Experimental results on two datasets show that the newly acquired events improve a state-of-the-art affective event classifier. We also present analyses which show that using multiple views produces polarity labels of higher quality than either view on its own.
## 1 Introduction
People's emotional states are influenced by the events that they experience. For example, people typically feel happy when they graduate with a degree or get a new job, but become upset when they get fired or lose personal property. Prior work (Ding and Riloff, 2016, 2018) has referred to events that positively or negatively impact people as *affective events*. In this work, we study the task of affective event classification, which determines whether the polarity of a given event is positive, negative or neutral. For example, *"I graduated* from college" would be Positive, but "I broke my leg" would be Negative.
Previous research has shown that the performance of affective event classification models is limited by the amount of gold training data (Zhuang et al., 2020), which is costly to annotate and not readily available in large quantities. Recently, researchers have been developing methods to generate more training data by extracting events from text corpora and assigning polarity labels with weakly supervised methods (Saito et al., 2019; Zhuang et al., 2020). However, these methods pose practical challenges, including in some cases the need to acquire data from Twitter (Zhuang et al.,
2020), a computational bottleneck of applying a pipeline of NLP tools to a large text collection, and the limitations of lexical pattern matching.
In this work, we propose a simpler but more effective approach for automatically acquiring affective events by prompting pre-trained language models. We use one language model prompt to elicit affective event candidates, and we introduce a Co-prompting method to automatically label these event candidates with affective polarity. The key idea behind *Co-prompting* is to design two complementary prompts that capture independent views of an event, reminiscent of co-training (Blum and Mitchell, 1998). Combining information from two different views of an event produces labels that are more accurate than the labels assigned by either one alone.
Specifically, we acquire affective events in a twostep process: (1) Event Generation and (2) Polarity Labeling. The first step generates events that are associated with a set of gold "seed" affective events. For each seed event, we prompt a language model to generate sentences where the seed event co-occurs with some new events. Our hypothesis is that affective events are often preceded or followed by other affective events that are causally or temporally related. For example, if someone breaks his/her leg, a prior event might describe how it happened (e.g., "fell off a ladder" or *"hit by a car"*)
and a subsequent event might describe the consequences (e.g., "could not walk" or "rushed to the hospital").
The second step collects independent views of the polarity for each new event using two comple3189 mentary language model prompts. One prompt provides an *Associated Event View*, which considers the polarities of the known (labeled) events that co-occur with the new event during Event Generation. The second prompt provides an Emotion View, which considers the polarity of the most probable emotion words generated by a language model when prompted with the new event. Finally, we combine information from the two co-prompts to assign an affective polarity label to each new event.
Our experiments show that using these automatically acquired affective events as additional training data for an affective event classifier produces state-of-the-art performance over two benchmark datasets for this task. The analysis also confirms that our co-prompting method utilizing multiple views yields more accurate polarity labels than using either view alone.
In summary, the contributions of our work are:
1. We propose a method to generate weakly labeled data by prompting language models for the task of affective event classification. The method is effective but also simple, as it does not require fine-tuning any language model nor mining data from a text corpus.
2. We show that prompting for multiple views produces more accurate labels than prompting for a single view, as multiple views capture independent and complementary information.
## 2 Related Work
Several lines of research have recognized the importance of identifying events that carry affective polarity, including early work on plot units (Lehnert, 1981) and later work that learned patient polarity verbs (Goyal et al., 2010, 2013), emotionprovoking events (Vu et al., 2014), patterns associated with first-person affect (Reed et al., 2017),
major life events (Li et al., 2014) and +/- effects for opinion analysis (Choi and Wiebe, 2014; Deng and Wiebe, 2014, 2015).
Recent work has focused specifically on classifying affective event phrases (Ding and Riloff, 2016, 2018; Saito et al., 2019; Zhuang et al., 2020).
Ding and Riloff (2016) created a weakly supervised method for labeling events with affective polarity using label propagation. Ding and Riloff
(2018) subsequently developed a method that assigns affective polarity to events by optimizing for semantic consistency over a graph structure, and created an Affective Event Knowledge Base
(AEKB) of more than half a million event phrases labeled with affective polarity. Zhuang et al. (2020)
later created an Aff-BERT classifier that substantially outperformed AEKB for affective event classification by training BERT with a relatively small amount of gold data. They additionally developed a Discourse-Enhanced Self-Training (DEST) method that further improved Aff-BERT's performance.
Their approach used Twitter to collect events that corefer with sentiment expressions in specific lexical patterns. Our work also aims to improve a classification model with automatically generated affective events. The key difference is that our work produces weakly labeled affective events by prompting pre-trained language models, which alleviates the computational and practical problems of conventional pattern matching over a text corpus.
Pretrained language models such as GPT-2 (Radford et al., 2019) and BERT (Devlin et al., 2019)
have been shown to learn diverse world knowledge (Petroni et al., 2019; Davison et al., 2019; Jiang et al., 2020; Talmor et al., 2020). Researchers have studied how to prompt language models to transfer their knowledge to downstream tasks, including prompt-based fine-tuning, automatic prompt search, and discrete/continuous prompt optimization (Shin et al., 2020; Qin and Eisner, 2021; Schick and Schütze, 2021a,b). Our work differs in several aspects. Essentially, our approach utilizes co-prompts to elicit multiple types of information (views) that are independent and complementary to each other. This is significantly different from prior work that used a single prompt or an ensemble of prompts that seek the *same* type of information.
Our work is also related to recent work on data augmentation using language models. For example, Anaby-Tavor et al. (2020) proposed a method, LAMBDA, that first fine-tunes GPT-2 over labeled data and then synthesizes weakly-labeled data. Kumar et al. (2020) proposed a similar but unified approach for pretrained transformer-based language models. (Yang et al., 2020) fine-tuned two different generative language models to generate questions and answers separately for reading comprehension.
Most of these works require fine-tuning the language models while our approach does not.
![2_image_0.png](2_image_0.png)
## 3 Acquiring Affective Events With Multiple View Co-Prompting
Our research aims to automatically generate labeled affective events to improve classifiers because gold data for affective event classification is only available in limited quantities. Automated methods for data generation offer a cost-effective and practical solution for improving the performance of affective event classifiers, and also could be used to rapidly acquire training data for new domains or text genres.
Figure 1 shows the flowchart for our approach.
The process begins with a modest amount of "seed" data consisting of gold labeled affective events.
The first step (**Event Generation**) uses a language model prompt to elicit events that are associated with each seed event. The second step (**Polarity Labeling**) assigns a polarity label to each new event using *Co-prompting* to assess polarity from two independent views of the event. Given an event e, the *Associated Event View* considers the affective polarities of labeled events that co-occur with e during Event Generation. The *Emotion View* considers the affective polarities of emotion words that are generated by an Emotion Prompt given the event e. Polarity scores produced from these views are then combined to assign an affective polarity label to the event e.
This process repeats in an iterative fashion, where the newly labeled events are used to discover more affective events in the next cycle. The process ends when no new events are generated or a maximum number of iterations is reached.
## 3.1 Event Generation
The Event Generation process begins with a set of gold affective events and produces a set of new events, many of which we expect to be affective. For each seed event, we create an **Associated**
Event Prompt of the following form:
## Here Are The {Polarity} Things That Happened To Me Today: **{Event}**,
where {EVENT} is a placeholder filled by the seed event phrase, and {POLARITY} is a placeholder filled by the affective polarity of the seed event.
The prompt is designed to ask a generative language model to complete the sentence by enumerating other events that are likely to co-occur with the given event on the same day. The enumeration behavior is encouraged by the colon ':' and comma.
The temporal relation is encouraged by the word
'today'. The polarity placeholder, {POLARITY},
encourages the language model to generate events with the same affective polarity.
For the polarity terms, we used the word *'good'*
for events with positive polarity and the word *'bad'*
for events with negative polarity. For events with neutral polarity, we simply used an empty string
(i.e., *'Here are the things...'*).1 We expected that this prompt would generate some neutral events, but that it would produce positive and negative events too because people tend to recount events that are interesting or impactful, not boring and mundane. In fact, we do not expect any of these prompts to be perfect. Our goal at this stage is to generate a healthy mix of new events across all three affective polarities (positive, negative, and neutral). The affective polarity for each new event will ultimately be determined later in the Polarity Labeling step.
To be consistent with prior work on this topic, we represent each event expression as a 4-tuple of the form: <Agent, Predicate, Theme, Preposition Phrase (PP)>. To create an event phrase for the language model prompt, we concatenate the words in the tuple. For example, given the negative event
<*my house, burn down, -, -* >, the filled prompt would be *'Here are the bad things that happened* to me today: my house burn down,'.
2
| Polarity | Seed Event | Events Generated by Associated Event Prompt |
|------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------|
| I cut my leg | I fall off my bicycle, I hurt my knee, I wake up at hospital, I break my rib, I faint, kick me in head, they take my dog, my eye start to water, I break my ankle, I get in car accident | |
| NEG | I not get refund | they take my money, kick me out of game, this happen, freeze my account for hour, I lose money, I get refund, I get angry, ban me, make decision, I get email |
| I lose my job | I break up with my girlfriend, I not apply, kick me out of house, arrest me, I go to find out, they try to kill me, I find job, eat my lunch, dump me, I break down | |
| I walk in class | I start to talk to people, I take seat, I reply, professor tell me, my friend ask me, I take moment, I shake hand, I learn, I sit in front row, I have to explain, I want to tell story | |
| NEU | I close account | I call customer service, message say, I click on link, ban me for day, email tell me, I go, this show me, receive phone call, delete me, I call bank |
| I meet someone | I get call from them, I get my drink, I lose weight, I chat for minute, I say something stupid, person tell me, I start to talk, I talk for long time, they invite me, they respect me | |
| I get in college | convince myself, I graduate, I go, I read them, drink coffee, I meet cool people, watch tv, I learn lot about myself, I move, I find good job | |
| POS | I play match | my team win game, I lose, I go to hotel, I work, I go, I go on stage, I get score, I go to bed, play video game, I get point |
| I get house | I pay my tax, I move out of my apartment, I eat my favorite food, I get new job, I start to live, I learn, I pay bill, I care, I afford to eat, I start to look | |
We used open-source GPT-2LARGE (Radford et al., 2019) as the generative language model.3.
To obtain diverse outputs, we let GPT-2 generate 200 sentences for each labeled event.4 For the sampling method, we used nucleus sampling (Holtzman et al., 2020) with 0.9 as the top-p threshold, beam search with a beam size of 5, and a temperature of 2.0. We extracted new events from the sampled sentences to create event tuples, following the same conventions as earlier work (Ding and Riloff, 2018; Zhuang et al., 2020). For the sake of robustness, we selected the events that occur with at least 3 distinct seed events as new events for polarity labeling.
To illustrate, one example sentence generated from the event <*my house, burn down, -, -* > is
" *..., my mom passed away and my family lost everything*.", and the events extracted are <my mom, pass away, -, -> and <my family, lose, everything,
-> . We show more examples of extracted events in Table 1. Overall, the generated events are usually related to the seed event in some way and typically have the same affective polarity (e.g., "*I cut my leg*"
→ {*"I fall off my bicycle", "I hurt my knee", ...*}),
despite some exceptions (e.g., "*they take my dog*").
For our purposes, it is perfectly fine that some generated events are loosely associated with the seed events, because our goal is simply to harvest new affective events, and their precise relationship to the seed events is irrelevant.
## 3.2 Polarity Labeling With Multiple Views
The next step is to assign affective polarity labels to each new event. We collect affective information from two prompts that provide independent views of an event: (1) we collect affective polarity information from the events generated by the Associated Event Prompt, and (2) we use the *Emotion Prompt* to generate emotion terms associated with an event.
Finally, we combine the information gathered from these two prompts to assign a polarity label.
## 3.2.1 Emotion Prompting
To acquire another source of information about the affective polarity of an event, we prompt a language model to produce emotion terms with associated probabilities for each event. We design a cloze expression to generate emotion terms following an event expression by prompting a masked language model. Specifically, we use the following **Emotion**
Prompt: [*EVENT***]. I feel _ .**
The word "feel" leads the language model to return words that refer to emotions or other sentiments. We expect that positive events will typically be followed by positive emotions, and negative events by negative emotions. For neutral events, we expect to see a mix of both positive and negative emotions because these events can occur in a wide variety of contexts. We used BERTLARGE (Devlin et al., 2019) as the masked language model.5 We store all generated terms and their probabilities produced by BERT for later use.
Figure 2 illustrates this process for two example
![4_image_0.png](4_image_0.png)
events. The top shows the four most probable terms generated from the event tuple <*I, graduate, -, -*
>, all of which have positive polarity. The bottom shows the four most probable terms generated from the event tuple <*I, break, my leg, -*>. Three of these terms have negative polarity, but the fourth term has neutral polarity. This example shows that the prompt can produce inconsistent results, but the probability distribution across all of the generated terms typically captures a fairly reliable signal.
## 3.2.2 Multiple View Polarity Scoring
We first define scoring functions to determine the most likely affective polarity for an event from each view independently. Then we present a joint scoring function that combines the scores from the two views to produce a final affective polarity label.
Associated Event View This view captures the degree to which an event co-occurs with labeled events of each polarity. Intuitively, we expect that events tend to co-occur with other events of the same polarity. According to this view, we define the *Associated Event Score* (SA) of an unlabeled event e with respect to a polarity label l as:
$$S_{A}(l\mid e)={\frac{e^{\prime}\epsilon A E P(e)}{|A E P(e)|}}\qquad\qquad(1)$$
where AEP(e) is the set of labeled events that cooccur with e in the results produced by the Associated Event Prompt, I(e′, l) is an indicator function with a value of 1 if the polarity label of e′is l or zero otherwise, and *| · |* is the cardinality.
Emotion View This view captures the polarity of the emotion words generated by the Emotion Prompt. Based on this view, we define the Emotion Score (SE) for an unlabeled event e with respect to a polarity label l as:
$$S_{E}(l\mid e)={\frac{\sum_{w\in D_{l}}P_{\mathrm{{BERT}}}(w\mid E P(e))}{\sum_{l^{\prime}\in L}\sum_{w\in D_{l^{\prime}}}P_{\mathrm{{BERT}}}(w\mid E P(e))}}\quad{\mathrm{(2)}}$$
where D is a gold dictionary of emotion terms, Dlis the subset of words in D that have polarity label l, and PBERT(w | EP(e)) is the probability associated with word w produced by the Emotion Prompt (EP) given event e. For the gold dictionary D, we collect all of the adjectives and nouns in the MPQA subjectivity lexicon (Wilson et al., 2005)
along with their polarity labels.
Polarity Assignment We conservatively assign positive and negative polarities to an event only when both SA and SE predict the same polarity.
Formally, we label an event e with polarity l when both scores for l exceed a confidence threshold θ.
6
- if SA(pos | e) ≥ θ and SE(pos | e) ≥ θ, then e is positive
- if SA(neg | e) ≥ θ and SE(neg |e ) ≥ θ, then e is negative
For the neutral polarity, we found that the emotion scores SE(neu | e) are low in most cases because the Emotion Prompt tends to generate emotional words even for neutral events. However, we observed that the Emotion Prompt is more likely to generate a mixed set of both positive and negative emotion words for neutral events, presumably because neutral events can occur in both types of contexts. Therefore we assign neutral polarity by looking for a small difference between the positive and negative emotion scores. Specifically, we consider an event e to be **neutral** based on both its neutral Associated Event Score SA(neu | e) and the absolute difference between its positive and negative Emotion Scores, SE(pos | e) and SE(neg | e):
$$\begin{array}{l}{{\textrm{•if}S_{A}(n e u\mid e)\geq\theta}}\\ {{\textrm{and}1-|S_{E}(n e g\mid e)-S_{E}(p o s\mid e)|\geq\theta,}}\\ {{\textrm{then}e\mathrm{is~neutral}}}\end{array}$$
As an example, consider an event with SE(neg | e) = .50 and SE(pos | e) = .40, then 1 − |SE(neg | e) − SE(pos | e)| = .90, which indicates that the event is very likely to be neutral. In our experiments, we set all θ values to be .90 based on the performance over the development set.
6Note that θ must be greater than 0.5 to avoid multiple label assignments to an event.
## 4 Evaluation 4.1 Datasets
We conducted experiments over two previously used datasets for affective event classification: (1)
the **BLOG** dataset constructed by Ding and Riloff
(2018), which contains 1,490 manually annotated events (20% Positive, 18% Negative and 62% Neutral) extracted from blog posts, and (2) the **TWITTER** dataset developed by Zhuang et al. (2020),
which contains 1,500 manually annotated events
(29% Positive, 23% Negative and 48% Neutral) extracted from Twitter. We performed 10-fold crossvalidation on each dataset (8 folds for training, 1 fold for development, and 1 fold for testing).
## 4.2 Generating Newly Labeled Events
To generate newly labeled events for each domain
(TWITTER and BLOG), we used the training data as the seed events and ran the process for 15 and 10 iterations, respectively. We chose these stopping points because they produced around 10,000 new events for each domain, and we wanted to keep the number of new events manageable. Between iterations, we added the maximum number of newly labeled events that would maintain the original data distribution of affective polarities.
Figure 3 shows the number of new events acquired for each iteration. Both curves start at around 1,200 because that is the size of the gold training sets used for seeding. This process ultimately produced (on average, across the folds in our cross-validation experiments): 10,636 new events for the TWITTER domain and 10,800 new events for the BLOG domain.
![5_image_0.png](5_image_0.png)
## 4.3 Affective Event Classification Model
We use Aff-BERT (Zhuang et al., 2020) as our classification model, which is an uncased BERTbase model (Devlin et al., 2019) that takes an event tuple as input (we concatenate all of the words into a phrase) and classifies the phrase with respect to three affective polarities (positive, negative, or neutral). We train Aff-BERT with a weighted crossentropy function, which weights the gold and the new (weakly) labeled data differently: L = LG +
λLW , where LG is the loss over the gold data, LW
is the loss over the weakly labeled data, and λ is a weight factor. During training, we performed a grid search over all combinations of learning rates
(1e-5, 2e-5, 3e-5), epochs (5, 8, 10), batch sizes
(32, 64), and λ (0.1, 0.3, 0.5). We used the values that performed best over the development set.
## 4.4 Comparisons With Prior Work
We compared our method with several other approaches. Three methods were previously proposed by Zhuang et al. (2020) for affective event classification : 1) the Aff-BERT model; 2) Aff-BERT
with self-training; 3) Aff-BERT with DiscourseEnhanced Self-Training (DEST). The latter two methods improve Aff-BERT by providing additional weakly labeled data. Since the DEST method is specific to Twitter, we only evaluated that approach on the TWITTER dataset.
We also evaluated two general-purpose methods for data augmentation: 4) Back-translation (Sennrich et al., 2016), which generates paraphrases of an input phrase via machine translation, and 5) pattern-exploiting-training (PET) (Schick and Schütze, 2021a), which trains an ensemble of language models with multiple prompts and weaklylabeled data. For Back-translation, we translated each event phrase from English to German and then from German back to English using the wmt19-ende and wmt19-de-en machine translation models released by Facebook (Ng et al., 2019). We then paired the output phrase with the original event's polarity label.
To train PET7, we used BERT*BaseUncased* as the language model and used 3 prompts: "[EVENT].
I feel _.", "[EVENT]. I felt _." and *"[EVENT]. It* was _.". For hyperparameters, we used 1e-5 as the learning rate, 4 as the batch size, and 5 as the number of training epochs8. Since PET requires unlabeled data, we used 20K events randomly collected from the Affective Event Knowledge Base produced by Ding and Riloff (2018) for experiments with the BLOG data, and we used the 8,532 unlabeled events released by Zhuang et al. (2020)
for experiments with the TWITTER data.9
## 4.5 Experimental Results
Tables 2 and 3 show our experimental results, including the precision (Pre) and recall (Rec) for each polarity as well as macro-averaged F1 scores. The Aff-BERT row shows the results when trained over only gold labeled data. The other models exploit weakly labeled data for additional training.
Method Macro POS NEG NEU
F1 Pre Rec Pre Rec Pre Rec
Aff-BERT 75.7 74.4 71.5 79.0 74.0 76.1 80.1 Back-translation 76.4 80.4 69.2 79.2 75.1 75.3 83.4
Self-training 77.0 78.6 69.5 76.8 **82.3** 77.4 79.8 PET 78.3 78.1 75.6 78.2 81.6 79.2 79.1 DEST 79.0 81.8 74.8 78.4 80.0 79.4 82.4 Co-prompting 81.3 82.3 76.2 **85.9** 79.7 **79.7 86.1**
Table 2: Experimental results for TWITTER data.
On the TWITTER data, Co-prompting outperforms all other methods. We see a 5.6% absolute F1 score gain compared to Aff-BERT and a 2.3% gain compared to DEST, which is the strongest competitor. Most notably, we see a 3.7% recall gain over DEST for neutral polarity and a 7.5%
precision gain for negative polarity.
Method Macro POS NEG NEU
F1 Pre Rec Pre Rec Pre Rec
Aff-BERT 77.4 71.7 66.2 78.2 77.2 85.0 87.4
Back-translation 77.9 79.6 66.1 75.5 74.3 85.3 90.0 PET 78.0 78.5 60.2 81.4 **76.5** 83.8 91.1 Self-training 78.6 76.3 68.3 78.6 76.2 **85.5** 89.0
Co-prompting 80.7 81.4 70.1 **84.0** 75.3 85.4 **91.8**
Table 3: Experimental results for BLOG data.
On the BLOG data, Co-prompting also consistently outperforms the other methods. It surpasses Aff-BERT by 3.3 absolute points in F1 score, and self-training (the closest competitor) by 2.1 absolute points. In addition, it achieves the highest precision for both positive and negative polarity.
## 4.6 Impact Of Multiple Views
We also conducted experiments on the TWITTER
data to understand the contribution of each view for polarity labeling.
9The AEKB data could be found at https://github.
com/yyzhuang1991/AEKB and the unlabeled data for
| Method | Pre | Rec | F1 |
|-----------------------|-------|-------|------|
| Emotion View | 78.9 | 78.3 | 78.2 |
| Associated Event View | 79.4 | 78.9 | 78.8 |
| Both (Co-prompting) | 82.6 | 80.7 | 81.3 |
Table 4 shows the performance of models trained with events labeled by each view alone and by both of them together. Each view performs well on its own and produces classification models that outperform Aff-BERT. But Co-prompting yields a substantially higher F1 score than either view on its own.
Next, we investigated how and why the polarity labels change when incorporating both views. Figure 4 shows the number of labels that are changed correctly or incorrectly when adding the second view. The left table shows labels produced by the Associated Event View (AEV) that are changed by Co-prompting. For example, there are 19 good changes (wrong before, correct now) from neutral to negative (Neu → Neg) but 8 bad changes (correct before, wrong now). The ∆ column shows the overall net gain in correct labels. Overall, Co-prompting has the greatest impact by correctly changing neutral labels to be positive or negative.
This makes sense because the Associated Event View sometimes had trouble recognizing affective polarity, but the Emotion View specifically tries to identify emotions for each event.
EV → Co ✔ ✘ ∆
Neu → Neg 23 7 16 Neg → Neu 29 14 15
Pos → Neu 38 24 14
Neg → Pos 4 3 1 Pos → Neg 0 1 -1
Neu → Pos 22 23 -1
Figure 4: Counts of labels changed by Co-prompting
(Co). ✔: correct. ✘: incorrect. ∆: correct - incorrect The table on the right side of Figure 4 shows labels produced by the Emotion View (EV) that are changed by Co-prompting. Adding AEV has the greatest impact in the opposite direction: changing mislabeled negative or positive events to be neutral.
Intuitively, this is because EV can be too aggressive about assigning positive and negative polarity and have difficulty recognizing neutral events. These results nicely illustrate the power of Co-prompting: complementary views have different strengths and TWITTER could be found at https://github.com/
yyzhuang1991/DEST
| AEV → Co | ✔ | ✘ | ∆ |
|------------|-----|-----|-----|
| Neu → Neg | 19 | 8 | 11 |
| Neu → Pos | 24 | 17 | 7 |
| Pos → Neu | 33 | 28 | 5 |
| Neg → Neu | 18 | 13 | 5 |
| Pos → Neg | 3 | 2 | 1 |
| Neg → Pos | 2 | 5 | -3 |
weaknesses, and the strengths of one view can compensate for weaknesses in the other. And more generally, Figure 4 shows that most of the label changes produced by Co-Prompting were more accurate than the labels produced by one view alone, demonstrating that Co-Prompting with complementary views adds robustness.
## 4.7 Manual Analysis
To directly assess the accuracy of the polarity labels assigned by Co-prompting for the newly generated events, we asked two people to annotate 200 randomly sampled events from TWITTER.10 The pairwise inter-annotator agreement was 89.5% using Cohen's kappa. The annotators then adjudicated their disagreements.
![7_image_1.png](7_image_1.png)
Table 5 shows the accuracy of the labels produced by each view alone and by Co-prompting
(Both). The overall accuracy is only 83%-84% for the labels produced by each view but 91% for the labels produced by both views. The Associated Event View is most accurate for neutral labels, whereas the Emotion View is most accurate for positive and negative labels. These results again confirm the value of complementary sources of information for labeling data.
## 4.8 Learning Curves
We produced learning curves to understand the behavior of training with different amounts of data on the TWITTER domain. Figure 5 plots the F1 scores of Co-prompting when re-training the classification model with the data generated after every 3 iterations. The dashed line shows the F1 score of Aff-BERT (using only gold data) for comparison. The F1 score of Co-prompting rises steeply after the first 3 iterations, and continues to improve across later iterations. This graph suggests that running the iterative process even longer could yield further benefits.
We also investigated the effectiveness of our approach with smaller amounts of gold seed data.
![7_image_0.png](7_image_0.png)
Figure 6 shows the performance of Co-prompting on the TWITTER data when trained with subsets of the gold data ranging from 50% to 90%. For comparison, we also show the results for the two strongest competitors, DEST and PET, as well as the Aff-BERT baseline11. Co-prompting consistently outperforms the other approaches over all training set sizes. Surprisingly, Co-prompting trained with only 50% of the gold data achieves the same level of performance as Aff-BERT using 100% of the gold data. This result demonstrates that generating labeled events with our coprompting method can produce a high-quality classification model even with smaller amounts of gold seed data.
![7_image_2.png](7_image_2.png)
## 5 Conclusions
We presented a novel approach for eliciting and labeling affective events by co-prompting with 11The results of Aff-BERT and DEST are reported in
(Zhuang et al., 2020)
large language models. Our approach does not require fine-tuning and is more practical than patternmatching over large text collections, as has been done in prior work. The key idea is to design complementary prompts that collect independent types of information, which can then be used jointly as weak supervision to robustly label new data. Our experimental results show that labeling with multiple views is highly effective and that the elicited events substantially improve an affective event classifier.
Finally, we believe that co-prompting is a general idea that should be applicable for other data harvesting tasks as well. Co-prompting is more robust than relying on just one type of information, and we hope that other researchers will explore this idea for different types of NLP problems.
## 6 Limitations
We presented a method to automatically generate and label affective events by co-prompting with large language models. The data generation process does not involve creating or training new language models. There are some limitations to our approach. One limitation is that language models are not guaranteed to generate truthful or sensible information, which could introduce noisy information to our model. For example, we observed that the Emotion Prompt sometimes generates highly unlikely polarity labels for some events. Language models can also produce biased results, which could introduce biased information to our model. Another limitation is that it may be non-trivial for researchers who want to apply our method to other NLP problems to design prompts that are effective for their task. We believe that this method should be fairly general, but it has not yet been evaluated for other tasks. Lastly, our method requires a moderate amount of computational resources, including GPU cards with substantial memory and access to large language models. As a result, groups with limited resources might find our method too computationally intensive.
## Acknowledgement
We thank Tianyu Jiang for his helpful comments on our work. We also thank the anonymous reviewers for their insightful feedback.
## References
Ateret Anaby-Tavor, Boaz Carmeli, Esther Goldbraich, Amir Kantor, George Kour, Segev Shlomov, N. Tepper, and Naama Zwerdling. 2020. Do not have enough data? deep learning to the rescue! In *Proceedings of the Twentieth AAAI Conference on Artificial Intelligence (AAAI 2020)*.
A. Blum and T. Mitchell. 1998. Combining Labeled and Unlabeled Data with Co-Training. In Proceedings of the 11th Annual Conference on Computational Learning Theory (COLT-98).
Yoonjung Choi and Janyce Wiebe. 2014. +/-
EffectWordNet: Sense-level Lexicon Acquisition for Opinion Inference. In Proceedings of the 2014 Conference on Empirical Methods on Natural Language Processing (EMNLP 2014).
Joe Davison, Joshua Feldman, and Alexander Rush.
2019. Commonsense knowledge mining from pretrained models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),
pages 1173–1178, Hong Kong, China. Association for Computational Linguistics.
Lingjia Deng and Janyce Wiebe. 2014. Sentiment Propagation via Implicature Constraints. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL
2014).
Lingjia Deng and Janyce Wiebe. 2015. Joint Prediction for Entity/Event-Level Sentiment Analysis using Probabilistic Soft Logic Models. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015).
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (HLT/NAACL 2019).
Haibo Ding and Ellen Riloff. 2016. Acquiring Knowledge of Affective Events from Blogs using Label Propagation. In Proceedings of the Thirtieth AAAI
Conference on Artificial Intelligence (AAAI 2016).
Haibo Ding and Ellen Riloff. 2018. Weakly Supervised Induction of Affective Events by Optimizing Semantic Consistency. In *Proceedings of the ThirtySecond AAAI Conference on Artificial Intelligence*
(AAAI 2018).
A. Goyal, E. Riloff, and H. Daumé III. 2010. Automatically producing plot unit representations for narrative text. In *Proceedings of the 2010 Conference on* Empirical Methods in Natural Language Processing
(EMNLP 2010).
Amit Goyal, Ellen Riloff, and Hal Daumé III. 2013. A
Computational Model for Plot Units. *Computational* Intelligence, 29(3):466–488.
Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know? Transactions of the Association for Computational Linguistics, 8:423–438.
Varun Kumar, Ashutosh Choudhary, and Eunah Cho.
2020. Data augmentation using pre-trained transformer models. In Proceedings of the 2nd Workshop on Life-long Learning for Spoken Language Systems, pages 18–26, Suzhou, China. Association for Computational Linguistics.
Wendy G Lehnert. 1981. Plot Units and Narrative Summarization. *Cognitive Science*, 5(4):293–331.
Jiwei Li, Alan Ritter, Claire Cardie, and Eduard Hovy.
2014. Major life event extraction from twitter based on congratulations/condolences speech acts. In *Proceedings of Empirical Methods in Natural Language* Processing (EMNLP 2014).
Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. 2019. Facebook FAIR's WMT19 news translation task submission.
In *Proceedings of the Fourth Conference on Machine* Translation (Volume 2: Shared Task Papers, Day 1), pages 314–319, Florence, Italy. Association for Computational Linguistics.
Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference* on Natural Language Processing (EMNLP-IJCNLP),
pages 2463–2473, Hong Kong, China. Association for Computational Linguistics.
Guanghui Qin and Jason Eisner. 2021. Learning how to ask: Querying LMs with mixtures of soft prompts.
In *Proceedings of the 2021 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5203–5212. Association for Computational Linguistics.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Lena Reed, JiaQi Wu, Shereen Oraby, Pranav Anand, and Marilyn A. Walker. 2017. Learning lexicofunctional patterns for first-person affect. In *Proceedings of the 55th Annual Meeting of the Association* for Computational Linguistics (ACL 2017).
Jun Saito, Yugo Murawaki, and Sadao Kurohashi. 2019.
Minimally Supervised Learning of Affective Events Using Discourse Relations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP/IJCNLP 2019).
Timo Schick and Hinrich Schütze. 2021a. Exploiting cloze-questions for few-shot text classification and natural language inference. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 255–269. Association for Computational Linguistics.
Timo Schick and Hinrich Schütze. 2021b. It's not just size that matters: Small language models are also few-shot learners. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 2339–2352. Association for Computational Linguistics.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016. Improving neural machine translation models with monolingual data. In *Proceedings of the 54th* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96, Berlin, Germany. Association for Computational Linguistics.
Taylor Shin, Yasaman Razeghi, Robert L. Logan IV,
Eric Wallace, and Sameer Singh. 2020. AutoPrompt:
Eliciting Knowledge from Language Models with Automatically Generated Prompts. In *Proceedings* of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4222–
4235. Association for Computational Linguistics.
Alon Talmor, Yanai Elazar, Yoav Goldberg, and Jonathan Berant. 2020. oLMpics-on what language model pre-training captures. *Transactions of the Association for Computational Linguistics*, 8:743–758.
Hoa Trong Vu, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2014. Acquiring a Dictionary of Emotion-Provoking Events. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2014).
Theresa Wilson, Janyce Wiebe, and Paul Hoffmann.
2005. Recognizing Contextual Polarity in PhraseLevel Sentiment Analysis. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing
(HLT/EMNLP 2005).
Yiben Yang, Chaitanya Malaviya, Jared Fernandez, Swabha Swayamdipta, Ronan Le Bras, Ji-Ping Wang, Chandra Bhagavatula, Yejin Choi, and Doug Downey.
2020. Generative data augmentation for commonsense reasoning. In *Findings of the Association for* Computational Linguistics: EMNLP 2020, pages 1008–1025. Association for Computational Linguistics.
Yuan Zhuang, Tianyu Jiang, and Ellen Riloff. 2020. Affective event classification with discourse-enhanced self-training. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5608–5617. Association for Computational Linguistics.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Sec 6.
✗ A2. Did you discuss any potential risks of your work?
There is no potential risk, to our best knowledge, in this research topic.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
the abstract section and Sec 1.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
we use some pretrained language models which are mentioned in Sec 3 and Sec 4. The dataset we used is mentioned in Sec 4.
✓ B1. Did you cite the creators of artifacts you used?
We cite the artifacts in Sec 3 and Sec 4.
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
The artifacts I used are open to the research community, so we don't think we need to discuss it.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
The artifacts I used have no intended use specified. We used them for fine-tuning (BERT), inference
(BERT, GPT2) and evaluation (the datasets) only.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
The data do not contain anything that uniquely identifies people or offensive content.
✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
The artifacts I used are pretty well known in the community (BERT, GPT2). And I don't think there is any explicit things like linguistic phenomena or demographic groups related to these models. I do mention some characteristics of the dataset (e.g., domain) I used in Sec 4.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Sec 4.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
## C ✓ **Did You Run Computational Experiments?** Sec 4.
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
The models we used are pretty well known (BERT, GPT2). So we don't think we need to report these.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Sec 4.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Sec 4.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Sec 3 and Sec 4.
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Sec 4.
✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
We don't think it is necessary to provide the instruction, as the instruction given to the participants is simply to judge if the model prediction is correct using their best knowledge.
✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
The annotators are not recruited but our labmates in the lab.
✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
The data I collected is to evaluate how good our model is. There is no other use.
✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
There is no ethic concern in my task.
✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Our annotation task involves only commonsense judgement, in which basic demographic and geographic characteristics barely play any role. |
guo-etal-2023-zeroae | {Z}ero{AE}: Pre-trained Language Model based Autoencoder for Transductive Zero-shot Text Classification | https://aclanthology.org/2023.findings-acl.200 | Many text classification tasks require handling unseen domains with plenty of unlabeled data, thus giving rise to the self-adaption or the so-called transductive zero-shot learning (TZSL) problem. However, current methods based solely on encoders or decoders overlook the possibility that these two modules may promote each other. As a first effort to bridge this gap, we propose an autoencoder named ZeroAE. Specifically, the text is encoded with two separate BERT-based encoders into two disentangled spaces, i.e., label-relevant (for classification) and label-irrelevant respectively. The two latent spaces are then decoded by prompting GPT-2 to recover the text as well as to further generate text with labels in the unseen domains to train the encoder in turn. To better exploit the unlabeled data, a novel indirect uncertainty-aware sampling (IUAS) approach is proposed to train ZeroAE. Extensive experiments show that ZeroAE largely surpasses the SOTA methods by 15.93{\%} and 8.70{\%} on average respectively in the label-partially-unseen and label-fully-unseen scenario. Notably, the label-fully-unseen ZeroAE even possesses superior performance to the label-partially-unseen SOTA methods. | # Zeroae: Pre-Trained Language Model Based Autoencoder For Transductive Zero-Shot Text Classification
Kaihao Guo1,2∗, Hang Yu1∗, Cong Liao1, Jianguo Li1†**, Haipeng Zhang**2†
1Ant Group, China 2School of Information Science and Technology, ShanghaiTech University, China
{guokh,zhanghp}@shanghaitech.edu.cn
{hyu.hugo, liaocong.lc, lijg.zero}@antgroup.com
## Abstract
Many text classification tasks require handling unseen domains with plenty of unlabeled data, thus giving rise to the self-adaption or the socalled transductive zero-shot learning (TZSL)
problem. However, current methods based solely on encoders or decoders overlook the possibility that these two modules may promote each other. As a first effort to bridge this gap, we propose an autoencoder named ZeroAE.
Specifically, the text is encoded with two separate BERT-based encoders into two disentangled spaces, i.e., label-relevant (for classification) and label-irrelevant respectively. The two latent spaces are then decoded by prompting GPT-2 to recover the text as well as to further generate text with labels in the unseen domains to train the encoder in turn. To better exploit the unlabeled data, a novel indirect uncertaintyaware sampling (IUAS) approach is proposed to train ZeroAE. Extensive experiments show that ZeroAE largely surpasses the SOTA methods by 15.93% and 8.70% on average respectively in the label-partially-unseen and labelfully-unseen scenario. Notably, the label-fullyunseen ZeroAE even possesses superior performance to the label-partially-unseen SOTA
methods.1
## 1 Introduction
Collecting human-labeled data often comes at a high cost for many NLP tasks, since it typically requires domain expertise and massive labeling efforts (Beltagy et al., 2022). It is therefore desirable and beneficial to consider the challenging
(generalized) zero-shot learning (ZSL) that aims to adapt a learner to unseen domains or even unseen tasks without any annotated data (Zhang et al.,
2020). In particular for the text classification problem, Yin et al. defines ZSL under two scenarios:
label-partially-unseen and label-fully-unseen. The former demands domain adaptation that generalizes the model to classify text of unseen classes whose labeled data are unavailable during training. As a further step, the latter requires generalpurpose ZSL models for new task adaption without requiring labeled data at all. The key to ZSL lies in improving the generalization performance by utilizing external knowledge (Chen et al., 2021).
Since pretrained language models (PLMs) memorize rich sources of external knowledge when being pretrained on a large text corpus, they can serve as an extremely powerful hammer for ZSL.
Existing PLM-based ZSL approaches concentrate on either encoder-based (i.e., discriminative) or decoder-based (i.e., generative) models.2 Specifically, encoder-based methods (Yin et al., 2019; Ye et al., 2020; Liu et al., 2021a; Alcoforado et al.,
2022) typically treat text classification as a text entailment (TE) or a QA task, and fine-tune BERT
or RoBERTa (Liu et al., 2019) as the embedding function to match the texts and labels in the seen classes. These methods then generalize to the unseen classes using the same embedding function and select labels that can best match the text semantically. As pointed out in (Ma et al., 2021), the BERT-based models could suffer from the issue of large uncertainty for unseen class generalization. Hence, labeled data are still required to stabilize the performance. On the other hand, decoder-based methods (Ye et al., 2022; Gao et al., 2022) attack the ZSL problem from the aspect of data augmentation. They employ GPT-2 to generate training data for the unseen classes given the labels, and next train a classifier based on the augmented data. Unfortunately, the data yielded by GPT-2 (i.e., the decoder-based methods) may contain a large portion of low-quality samples that are detrimental 2A more comprehensive review is provided in Appendix A.
to the training of the classifier. Worse still, GPT2 in these approaches cannot be fine-tuned in an end-to-end manner to soften this issue. Although GPT-3 can alleviate the data quality problem to some extent, the dauntingly huge size precludes its widespread use (Brown et al., 2020). One promising solution to the above issues is to combine the encoder and the decoder-based models: the former may help to filter out the data of low quality given by the latter, while the latter can generate data with labels to boost the performance of the former.
Apart from PLMs, another source of external knowledge is unlabeled data. In practice, we often have access to abundant unlabeled data, and such data can assist in familiarizing the domain
(or task) agnostic PLMs with the target domain (or task) (Rahman et al., 2019; Gera et al., 2022). The resulting zero-shot learning with unlabeled data is called transductive (generalized) zero-shot learning
(TZSL). One appealing approach for TZSL is to involve the unlabeled data in a self-training loop (Ye et al., 2020; Wang et al., 2021a, 2022; Gera et al.,
2022) by iterating between 1) estimating pseudo labels for all unlabeled data given an encoder-based model (e.g., a BERT-based TE model) and 2) refining the encoder-based model using the pseudo labels with high confidence. However, these selftraining methods may lead to the problem of error accumulation (Wang and Breckon, 2020), that is, the mistakenly pseudo-labeled data in one iteration can severely affect subsequent iterations and the final predictions.
In this paper, we propose an autoencoder framework for TZSL. It harnesses the strength of both PLMs and unlabeled data, while at the same time bringing the best from the encoder-based and the decoder-based methods together. We name the resulting model ZeroAE and it is to our knowledge the first approach that aims to solve the TZSL problem in NLP from the perspective of autoencoders.
Particularly, we specify the encoder and the decoder to be fine-tuned BERT and GPT-2 respectively. To enable the two PLMs to promote each other and to further self-adapt to the task at hand, we design two main types of data flows in ZeroAE:
text reconstruction flow and label reconstruction flow. The first one aims to recover the text data after inputting it to the encoder and subsequently the decoder, while the second tries to recover the label after first generating text given the label via the decoder and then predicting the label given the generated text via the encoder. Furthermore, we assume that the latent space can be split into two parts: label-relevant and label-irrelevant. These two parts are discrete (vector-quantized), disentangled, and are given by two different encoders (i.e.,
fine-tuned BERT). Only the label-relevant part is used for classification to remove the interference from the label-irrelevant part, while both parts are required for text reconstruction and generation. Additionally, to better handle the unlabeled data, we also adopt contrastive learning, and further propose a simple yet effective method named indirect uncertainty-aware sampling (IUAS) to train ZeroAE, allowing the model to pay more attention to those unlabeled data with high uncertainty as the training process proceeds and lowering down the uncertainty with the assistance of GPT-2.
In summary, our key contributions are:
- To our best knowledge, we are among the first to propose an end-to-end autoencoder, ZeroAE,
for TZSL in NLP, which seamlessly integrates the encoder and decoder-based models. By designing the text and label reconstruction flows, we allow BERT and GPT-2 to promote each other and equip them with the capability of auto-calibration to unseen domains and tasks.
- We propose a novel method named IUAS to train ZeroAE, gradually focusing on those unlabeled data with high uncertainty and reducing the uncertainty with the help of GPT-2.
- We further incorporate several advanced techniques into ZeroAE to boost its performance, including contrastive learning, latent space discretization, disentanglement, and prompting.
- We demonstrate the usefulness of ZeroAE
through extensive experiments on four realworld datasets for TZSL under the two settings:
label-partially-unseen and label-fully-unseen.
ZeroAE greatly outperforms the existing PLMbased methods by 15.93% and 8.70% on average respectively in the two settings. Remarkably enough, ZeroAE without labels is even superior to the existing methods with labels.
## 2 Zeroae
We begin this section by defining the transductive
(generalized) zero-shot learning (TZSL) problem.
Let D
S =
(x S i
, y S
i
)
and D
U =
(x U i
, y U
i
)
denote the data for seen and unseen classes respectively, where xiis the text, yiis the label corre-
![2_image_0.png](2_image_0.png)
sponding to xi, and i is the sample index. Note that yi can only take values from {y1, *· · ·* , yC},
where yj denotes the class name for class j and C is the number of classes. For *label-partiallyunseen* TZSL, labeled data from D
Sand unlabeled data from D
U are utilized to train the model. We then test the model under the assumption that the labels are from both seen and unseen classes. The objective of label-partially-unseen TZSL is to conduct domain adaption automatically from seen to unseen classes. On the other hand, for *label-fullyunseen* TZSL (a.k.a. extremely weakly supervised learning (Wang et al., 2022)), we take one step further, and only use unlabeled data from both D
S
and D
U to train the model. This challenging task requires the model to self-adapt to the new task at hand given no labeling information for the task.
In order to solve the TZSL problem, we propose ZeroAE in this paper, whose overall architecture is shown in Figure 1. As mentioned in the introduction, there are two major types of data flows in ZeroAE, namely, the text reconstruction flow
(denoted by the blue arrows) and the label reconstruction flow (denoted by the orange arrows). To provide an illuminating overview of ZeroAE, we mainly follow the text reconstruction flow and describe an example of how ZeroAE processes one sentence xi for the purpose of topic categorization. Suppose that the text xiis "Could stress or an unhealthy diet trigger lung cancer?". We first encode the text into two disentangled latent spaces z R and z I with two separate encoders (see 1 and 2 ). The first latent space z R characterizes the label-relevant information, such as "stress", "unhealthy diet", and "lung cancer", while the second z Irepresents the label-irrelevant information, such as the syntax "Could ... ?". The label-relevant information is further inputted into a classifier (see 3 ) in order to find the correct topic, i.e., "health".
A discriminator (see 4 ) is introduced to guarantee that the two latent spaces are disentangled so the label-irrelevant information cannot adversely affect the classifier. Finally, the two latent embeddings are fed into the decoder (see 5 ) so as to recover the original text xi. Next, we elaborate on each module of ZeroAE. For ease of exposition, all notations in this paper are summarized in Table 10.
## 2.1 Label-Relevant Encoder And Classifier
The label-relevant encoder EncR is a fine-tuned BERT (L=12, H=768, total parameters=110M). We fix the first 10 layers and fine-tune the remaining two layers. Concretely, we follow the framework of text entailment (Yin et al., 2019), and pack the input text xi with all C candidate labels y1:C like
"[CLS] xi [SEP] hypothesis of yj [SEP]", where j = 1, · · · , C. As EncR aims to extract features for classification, we regard the [CLS] token in BERT as the output of the encoder, namely,
$$\mathbf{z}_{i j}^{R}=\mathrm{Enc}^{R}(\mathbf{x}_{i},\mathbf{y}_{j}).$$
ij = EncR(xi, yj ). (1)
For labeled and generated data, we further input the embeddings from all candidate classes
[z R
i1
, ..., z R
iC] to a linear classifier Cls, and obtain the C-dimensional vector s R
iof the classification probability as:
$${\mathbf{s}}_{i}^{R}=\mathrm{Cls}([{\mathbf{z}}_{i1}^{R},...,{\mathbf{z}}_{i C}^{R}]).$$
iC]). (2)
The corresponding *classification loss* is:
$${\mathcal{L}}_{c l s}={\mathcal{H}}(f(\mathbf{y}_{i}),\mathbf{s}_{i}^{R}),$$
$$({\boldsymbol{3}})$$
i), (3)
where H represents the cross entropy, and f(·) is a function that converts yito a one-hot vector whose j-th element equals 1 if yi = yj . Meanwhile, the predicted class yˆi for xiis given by yˆi = yc, where the index c = arg maxjs R
ij and s R
ij denotes element j in the vector s R i
.
Additionally, contrastive learning is also applied to train EncR in order to enhance the performance of ZeroAE on the unlabeled data. Given the text data xi, the easy data augmentation (EDA)
method (Wei and Zou, 2019) is exploited to generate the positive (i.e., similar) samples x′i
. Specifically, we perform three random operations on xi with probability 0.1, including synonym replacement, random insertion, and random deletion of words. We do not use the swap-two-words operation in (Wei and Zou, 2019) though, so as to retain the semantic structure of the sentences. On the other hand, the negative (i.e., disimilar) samples of xi are chosen as the remaining texts xj in the same batch. The resulting *contrastive loss* can be expressed as:
$${\mathcal{L}}_{c o n}={\frac{\cos\left(\mathbf{z}_{i c}^{R},\mathbf{z}_{i c}^{R^{\prime}}\right)}{\sum_{j\neq i}\cos\left(\mathbf{z}_{i c}^{R},\mathbf{z}_{j c}^{R}\right)}},\qquad\qquad(4)$$
where z R
ic = EncR(xi, yc), yˆi = yc is the predicted class for xi, and cos denotes cosine similarity. Minimizing the above loss yields an embedding space where the semantically similar text pairs are nearby whereas the dissimilar ones are distant from each other, and offers the opportunity for discovering the decision boundaries between different classes.
## 2.2 Label-Irrelevant Encoder
The label-irrelevant encoder EncIis also a finetuned BERT. As the task here is to extract labelirrelevant features, which differs substantially from
$$(1)$$
$$(2)$$
the pretraining tasks of BERT, we fine-tune the total 12 layers. Since the label-irrelevant features such as the syntax are related to the entire sentence, we use the mean pooling of the last layer as the embedding of the text. In addition, similar to the output z R of EncR, we also discretize this latent space z I, by means of vector quantization (VQ) (Van Den Oord et al., 2017). Discrete latent space is proven to be advantageous to its continuous counterpart for text generation due to the discrete nature of NLP (Ji and Huang, 2021). VQ helps the latent space model to circumvent the issue of posterior collapse (i.e.,
the latent variables are ignored in the decoder),
which often plagues pretrained VAEs (Li et al.,
2020; Xu et al., 2020). Concretely, we introduce a codebook e = [e1, *· · ·* , eK] with size K = 32 to represent the discrete latent space as shown in Figure 1. The output of the encoder EncI(xi) is compared to the codebook, and the codeword ek closest to EncI(xi) in terms of Euclidean distance is chosen as the latent representation of xi. Framed mathematically,
$$z_{i}^{I}=\operatorname*{arg\,min}_{\mathbf{e}_{k}\in\mathbf{e}}\|\operatorname{Enc}^{I}(\mathbf{x}_{i})-\mathbf{e}_{k}\|_{2}^{2}.\qquad(5)$$
The codebook is updated along with the parameters of EncI, in analogy to k-means clustering, to minimize the within-cluster distance. The corresponding *VQ loss* can be written as:
$$\begin{array}{c}{{{\mathcal{L}}_{v q}=\parallel\mathrm{sg}[\mathrm{Enc}^{I}(\mathbf{x}_{i})]-\mathbf{e}\parallel_{2}^{2}}}\\ {{\qquad\quad+\beta\parallel\mathrm{Enc}^{I}(\mathbf{x}_{i})-\mathrm{sg}[\mathbf{e}]\parallel_{2}^{2},}}\end{array}\qquad{\mathrm{(6)}}$$
where sg stands for the operation "stop gradient" that prevents the gradient from flowing through that part of the equation. The first term fixes the encoder and aligns the codebook e such that the K
codewords inside are as close to the encoder output sg[EncI(xi)] as possible. The second term in turn fixes the codebook and updates the parameters of the encoder such that the encoder output commits as much as possible to its closest codeword. β here is a tuning parameter dictating the importance of the second term. We follow (Van Den Oord et al., 2017) to set β = 0.25.
## 2.3 Discriminator For Latent Space Disentanglement
To mitigate the possible negative impact of z I
on the classifier, we borrow the idea from factor VAE (Kim and Mnih, 2018) and encourage the two latent spaces z R and z Ito be disentangled, that is, q(z R, z I) = q(z R)q(z I).
3 Let a positive sample z
+ denote the concatenated label-relevant and label-irrelevant embeddings [f(yˆi), z I
i
] resulting from the same text xi and a negative sample z− denote the concatenated embeddings [f(yˆi), z I
j
] that are never from the same text in one epoch.4 The independence between z R and z Ican be achieved by firstly training a discriminator to distinguish the positive samples from the negative ones and secondly training the remaining parts of ZeroAE to fool the discriminator. These two steps are iterated in every epoch. To this end, the *discriminator* loss function for training the discriminator can be expressed as:
$${\mathcal{L}}_{d i s c}=-\log\Big(\operatorname{Disc}(\mathbf{z}^{+})\big(1-\operatorname{Disc}(\mathbf{z}^{-})\big)\Big),\tag{7}$$
where Disc stands for the linear discriminator. The disentanglement loss for training the remaining parts of ZeroAE can be written as:
$$\mathcal{L}_{d i s e}=-\log\big(\operatorname{Disc}(z^{-})\big).$$
. (8)
Note that we randomly sample M pairs of (z
+, z−)
in each epoch and average the above two losses over them.
## 2.4 Decoder
After obtaining the label-relevant and labelirrelevant embeddings (z R
i
, z I
i
) of text xi, we aim to reconstruct the text given the two latent embeddings by means of the decoder, that is,
$${\hat{\mathbf{x}}}_{i}=\operatorname{Dec}(g({\hat{\mathbf{y}}}_{i}),{\hat{\mathbf{z}}}_{i}^{I}),$$
i), (9)
where Dec is a GPT-2 (L=12, H=768, total parameters=124M). We fix the first six layers and fine-tune the other six in GPT-2 when training ZeroAE.
In addition, the first input to the decoder is g(yˆi),
where the predicted label yˆiis derived from z R
i(cf.
Section 2.1), and g(·) represents the prompting function that projects the label name to a sentence via a template. In the example of topic categorization, suppose the topic yˆiis "health" and the template is "The news with topic yˆiis: ", then the prompting function g will return the sentence
"The news with topic health is: ". Note that we use the true and the predicted label name (i.e., yi and yˆi) respectively for the labeled and unlabeled data as the input to g. The sentence given by g then guides GPT-2 to generate xˆi by acting as the initial condition in the autoregressive model. The merit of using g(yˆi) is that it converts the "black box" latent variable z R
ito a sentence that can be directly interpreted by GPT-2, even without any fine-tuning, greatly facilitating the reconstruction and generation of text related to the label.
On the other hand, the label-irrelevant latent variable z I
i is fed into GPT-2 via cross attention and serves as the key and value in the attention mechanism. Hence, GPT-2 can generate sentences in light of the label-irrelevant features. Once we obtain the reconstructed text xˆi from Eq. (9), we can compute the *reconstruction loss* as the cross entropy between the original and the reconstructed text:
$${\mathcal{L}}_{r e c}=\sum_{j}{\mathcal{H}}(f(x_{i j}),{\hat{x}}_{i j}),\qquad(10)$$
$\left(8\right)$.
where we abuse the notations xˆij for convenience to denote the estimated probability of word j in the reconstructed text xˆitaking the values in a predefined vocabulary, and f(xij ) represents the one-hot vector corresponding to the true word.
In a nutshell, the overall objective function can be written as:
$$\operatorname*{min}_{\Theta}{\mathcal{L}}_{d i s c}(\Theta)+\operatorname*{min}_{\Psi}{\mathcal{L}}_{r e m}(\Theta,\Psi),\qquad(11)$$
where
$\eqref{eq:walpha}$.
$${\cal L}_{rem}={\cal L}_{rec}+{\cal L}_{vq}+{\cal L}_{con}+{\cal L}_{disc}+{\cal L}_{cls},\tag{12}$$
and Θ and Ψ respectively denote the parameters of the discriminator and the remaining parts of ZeroAE. Note that in the above expression (11)
we recursively update the discriminator (parameterized by Θ) and the remaining parts of ZeroAE
(parameterized by Ψ), in a similar manner to GAN.
## 2.5 Indirect Uncertainty-Aware Sampling
It follows from the above discussion that the labeled data are invoked in all loss functions in Eq. (12), and the unlabeled data are concerned with all but the classification loss, while the generated data are only used in the classification loss.
Note that both the labeled and generated data are associated with labels. Unlabeled data, nevertheless, may present high classification bias and uncertainty during training. Pseudo labeling (Wang et al., 2021a) may be helpful to reduce the uncertainty, but it typically leads to the problem of error accumulation (i.e., bias) (Wang and Breckon, 2020). Different from pseudo labeling, we borrow the ideas from curriculum learning (Soviany et al., 2022) and uncertainty sampling in active learning (Aguilar et al., 2021) and propose an indirect uncertainty-aware sampling (IUAS) procedure to train ZeroAE. Viewed one way from curriculum learning, we intend to concentrate more on the
"hard" samples with high uncertainty as the training process proceeds. Curriculum learning is known to achieve higher convergence speed and better accuracy without extra computational cost (Soviany et al., 2022). Viewed another way from uncertainty sampling, in order to reduce the uncertainty, we would like to simulate similar samples from the decoder GPT-2. These generated data have labels, which can help the encoder to better distinguish the unlabeled data with high uncertainty.
To move forward to these goals, we propose to conduct data selection by removing the unlabeled data whose probability of belonging to a class is larger than a threshold τ = 0.8 at the beginning of every training epoch. In other words, we remove the data with small uncertainty but retain those with large uncertainty. Owing to the text reconstruction flow, GPT-2 can be gradually fine-tuned to generate samples similar to the retained unlabeled data. Meanwhile, in the label reconstruction flow, we further store the generated data from all previous epochs (denoted as D
G), randomly pick CK samples in each epoch from D
G, and use them to train the label-relevant encoder and the classifier, where C is the number of classes and K is the size of the VQ codebook. It is noteworthy that GPT-2 with the prompt acts as regularization here: the data generated by GPT-2 are usually associated with proper labels since half of the layers in GPT-2 are fixed, and using the generated data to train the classifier mitigates the problem of error accumulation in pseudo-labeling. Indeed, as shown in our experiments, IUAS outperforms pseudo labeling when coupled with ZeroAE. The overall training procedure is summarized in Algorithm 1 in the appendix.
## 3 Experiments 3.1 Datasets And Experiment Setup
We investigate the effectiveness of ZeroAE on four real-world datsets, including "Topic", "Sit-
| Dataset | Version | Seen classes | Unseen classes | |
|-----------|-----------|----------------|------------------|-------|
| Train | Valid | Test | | |
| Topic | v0 | 650000 | 5000 | 50000 |
| v1 | 650000 | 5000 | 50000 | |
| Situation | v0 | 2428 | 240 | 689 |
| v1 | 1747 | 173 | 1102 | |
| Emotion | v0 | 20465 | 2405 | 5101 |
| v1 | 14204 | 1419 | 8901 | |
| Complaint | - | 218 | 174 | 94 |
uation", "Emotion", and "Complaint", under both label-partially-unseen and label-fully-unseen scenarios. The first three datasets are often used for benchmarking different zero-shot text classification approaches (Yin et al., 2019), and the last one aims to assign the customer complaints regarding Alipay reported by users to the corresponding response teams. Note that in the label-partially-unseen case, two different versions (v0 and v1) of the first three datasets are provided in (Yin et al., 2019) with nonoverlapping labels, in order to prevent the models from overfitting to some classes. The detailed statistics regarding how the datasets are split into training, validation, and testing sets are summarized in Table 1. Please refer to Appendix B for more details on the datasets.
Experimental Setup: For the first three datasets, we use the pretrained BERT5and GPT-26as the encoder and decoder in ZeroAE. For the fourth dataset of customer complaints, we use the pretrained ERNIE7and Chinese GPT-28, since texts in this dataset are in Chinese. The templates of the prompting function for the four datasets are also different, which is listed in Table 9 in the appendix.
For optimization, we use Adam with learning rate 5 × 10−5and the linear warm-up scheduler. To avoid overfitting, we resort to early stopping with a maximum number of 40 epochs. All results for ZeroAE shown below are averaged over five trials.
## 3.2 Results And Analysis 3.2.1 Label-Partially-Unseen Tzsl
We first conduct experiments under the scenario of label-partially-unseen TZSL (Ye et al., 2020),
as introduced at the beginning of Section 2. We juxtapose ZeroAE with four SOTA methods. The first three methods are encoder-based methods, and the third one further uses self-training to cope with 5https://huggingface.co/BERT-base-uncased 6https://huggingface.co/gpt2 7https://huggingface.co/nghuyong/ernie-1.0-base-zh 8https://github.com/Hansen06/GPT2-Chinese
| Methods | Topic | Situation | Emotion | Complaint | | | |
|------------|---------|-------------|-----------|-------------|-------|-------|-------|
| v0 | v1 | v0 | v1 | v0 | v1 | | |
| BERT | 57.07 | 45.50 | 60.23 | 34.15 | 16.86 | 10.21 | 7.14 |
| BERT-MNLI | 54.37 | 45.80 | 63.74 | 50.13 | 30.21 | 21.40 | - |
| BERT+RL | 73.41 | 65.53 | 73.14 | 52.44 | 36.98 | 19.38 | 31.45 |
| ZeroGen | 64.71 | 54.34 | 67.97 | 52.67 | 26.38 | 22.44 | 26.07 |
| ZeroAE-LPU | 75.32 | 71.75 | 78.58 | 71.54 | 42.71 | 30.75 | 37.19 |
| ZeroAE-LFU | 69.68 | 66.11 | 72.87 | 63.36 | 31.25 | 23.31 | 20.49 |
unlabeled data, while the last one is a decoderbased method. More details are provided below:
1. **BERT** (Devlin et al., 2018): This approach directly uses BERT as a matching model without any fine-tuning.
2. **BERT-MNLI** (Yin et al., 2019): BERT is first pretrained on the MNLI dataset (Williams et al.,
2018) and then fine-tuned on the training data with labels. Note that we cannot apply this method to the dataset of complaints as the customer complaints are in Chinese but BERTMNLI is pretrained on texts in English.
3. **BERT+RL** (Ye et al., 2020): BERT plays the role of a pseudo labeler and reinforcement learning is utilized to select pseudo-labeled data automatically and further use them to refine BERT.
Here the labeled data are used to fine-tune BERT and learn the data selection policy.
4. **ZeroGen** (Ye et al., 2022): GPT-2 is employed to generate samples for each unseen class. The generated data are then combined with the originally observed labeled data to train a classifier.
In addition, we consider two different settings of ZeroAE: the first one uses labeled data in the seen classes D
Sin the same manner as in the above four methods, and the second only uses texts xiin both D
Sand D
U without any labels. We refer to the two settings as ZeroAE-LPU (label-partially-unseen)
and ZeroAE-LFU (label-fully-unseen) respectively.
We follow the SOTA methods to use macro F1score as the criterion to evaluate the performance9, and the results are summarized in Table 2.
Remarkably, the proposed ZeroAE-LPU significantly outperforms the SOTA methods, achieving at least 1.91%, 5.44%, 5.73%, and 5.74% of gains in terms of the macro F1-score respectively for the four datasets. The largest macro F1-score increase can be as high as 19.10%. The second best ap-9The code of BERT+RL is not publicly accessible. To make a fair comparison, here we follow the experiment configuration and evaluation criterion in BERT+RL.
proach, ZeroAE-LFU, also manifests a supremacy over the SOTA methods, even without using any information of labels at all. This bolsters our belief that it is quite beneficial to combine the encoder-based and decoder-based methods in a unified framework like ZeroAE.
As opposed to ZeroAE, the raw BERT model yields the worst results for all datasets, since the pretrained BERT without fine tuning cannot selfadapt to different tasks in practice. After finetuning with the labeled data for each dataset, BERTMNLI greatly increases the macro F1-score, but still compares unfavorably with ZeroAE, probably because it cannot well generalize to unseen classes (Ma et al., 2021). On the other hand, ZeroGen is on par with BERT-MNLI, showing the advantages of decoder-based models. The key caveat with ZeroGen though is that it may generate data with low quality, since GPT-2 is not fine-tuned to cope with the task. Hence, its macro F1-score is worse than that of ZeroAE. Finally, it can be observed that BERT+RL typically performs better than BERT-MNLI and ZeroGen, after reaping benefits from the unlabeled data. However, this approach suffers from the problem of error accumulation as pointed out in the introduction. As a consequence, its performance deteriorates when it becomes difficult to clear-cut the decision boundaries between different classes semantically and the amount of labeled data are too small to provide sufficient supervision. This explains its deficiency in comparison with BERT-MNLI and ZeroGen for Emotion-v1 (see Appendix B for more details on this dataset). Note that differentiating between emotions semantically is a difficult task and the number of samples for the seen classes is relatively small in this dataset.
## 3.2.2 Label-Fully-Unseen Tzsl
Next, we investigate the performance of ZeroAE when compared with other label-fully-unseen TZSL methods based on PLMs, including two
| Methods | Topic | Situation | Emotion | Complaint | Average | | | |
|-----------------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|--------|
| v0 | v1 | v0 | v1 | v0 | v1 | difference | | |
| ZeroAE (ours) | 75.32 (1.69) | 71.75 (1.00) | 78.58 (1.43) | 71.54 (1.26) | 42.71 (2.35) | 30.75 (1.94) | 37.19 (1.14) | - |
| −IUAS | 55.80 (6.63) | 59.98 (1.67) | 77.10 (1.46) | 62.75 (2.54) | 28.72 (4.28) | 23.84 (2.45) | 25.73 (1.48) | -10.56 |
| −IUAS+Pseudo labeling | 71.29 (0.91) | 63.71 (1.79) | 66.90 (3.71) | 62.47 (3.16) | 38.39 (2.01) | 23.04 (0.86) | 29.93 (3.41) | -7.44 |
| −Classification Loss | 69.68 (1.58) | 66.11 (1.15) | 72.87 (2.14) | 63.36 (2.75) | 31.25 (3.61) | 23.31 (1.37) | 20.49 (1.60) | -8.68 |
| −Contrastive Loss | 72.86 (1.06) | 66.87 (1.95) | 72.32 (3.19) | 63.29 (4.52) | 33.46 (2.02) | 24.65 (0.72) | 30.18 (1.91) | -6.31 |
| −Disentanglement Loss | 74.59 (1.47) | 69.43 (1.38) | 74.41 (1.56) | 73.12 (1.89) | 36.48 (4.48) | 29.78 (2.89) | 36.27 (0.92) | -1.97 |
Table 4: Weighted F1-score resulting from all benchmark methods for label-fully-unseen TZSL. ZeroAE achieves an improvement of 8.70% averaged over datasets and methods.
Methods Topic Situation Emotion Complaint
BERT-TE 45.70 45.23 25.20 5.79
P-ZSC 50.68 58.84 30.22 8.14
ZeroGen 60.15 **62.11** 24.25 3.14
ZeroAE **62.96** 61.68 32.41 **12.53**
encoder-based methods and the decoder-based method ZeroGen. In this setting, we merge the training and testing data in both v0 and v1 for the datasets Topic, Situation, and Emotion. A summary of the benchmark methods is given below:
1. **BERT-TE** (Yin et al., 2019): This approach exploits pretrained BERT and formulates the TZSL problem as a text entailment task.
2. **P-ZSC** (Wang et al., 2022): The label names for all classes are first expanded by finding the most semantically similar words or phrases to them from a text corpus. Self-training is then applied by pseudo-labeling the data using a BERT-based matching algorithm that evaluates the similarity between the texts and the expanded label names.
3. **ZeroGen** (Ye et al., 2022): Different from the settings in the previous subsection where both labeled and generated data are used to train the classifier, here only generated data resulting from GPT-2 are used.
We follow the second and the third method to use weighted F1-score as the evaluation criterion10.
The results are shown in Table 4. Once again, ZeroAE markedly improves the weighted F1-score by 8.70% on average. It achieves the best weighted F1-score among all methods for three datasets and only slightly worse performance than ZeroGen for one dataset, suggesting that ZeroAE can well play the role of a general-purpose zero-shot learner that 10The corpus for label expansion in P-ZSC is not publicly accessible. For a fair comparison, we use the same experiment configuration and evaluation criterion as in P-ZSC.
![7_image_0.png](7_image_0.png)
allows for auto-calibration to different tasks with the assistance of unlabeled data.
## 3.2.3 Ablation Study
Impact of different modules in ZeroAE: We conduct an ablation study to verify the effectiveness of different modules in ZeroAE, and display the results in Table 3. More details regarding the experiment settings and results can be found in Appendix C. There are three major findings that can be gleaned from Table 3:
1. The training procedure IUAS contributes the most to the superior performance of ZeroAE
on TZSL. Ablating IUAS from ZeroAE leads to a dramatic drop of 10.56% in terms of the macro F1-score averaged over all datasets. Furthermore, by replacing IUAS with pseudo labeling, the resulting macro F1-score is reduced by 7.44%. This observation implies that pseudo labeling can help ZeroAE to handle the unlabeled data, but IUAS is a better option, since it adopts GPT-2 as regularization and alleviates the issue of error accumulation in pseudo labeling.
| Backbones | Topic | Situation | Emotion | | | |
|-------------|---------|-------------|-----------|-------|-------|-------|
| v0 | v1 | v0 | v1 | v0 | v1 | |
| BERT+GPT2 | 75.32 | 71.75 | 78.58 | 71.54 | 42.71 | 30.75 |
| BART | 76.49 | 74.11 | 77.14 | 69.51 | 45.37 | 28.47 |
Table 6: Ablation study on the number of fixed layers in the PLMs, including 1 the label-relevant encoder, 2 the labelirrelevant encoder, and 5 the decoder.
#Fixed Topic Situation Emotion
v0 v1 v0 v1 v0 v1
Module 1
0 71.23 67.54 78.47 70.84 41.20 24.39
2 72.41 68.62 **79.21** 69.02 42.11 27.93
6 73.95 70.11 78.38 **71.79** 42.52 29.19
10 **75.32 71.75** 78.58 71.54 **42.71 30.75**
Module 2
0 75.32 71.75 **78.58** 71.54 **42.71 30.75**
2 76.53 68.42 77.92 **72.54** 41.47 28.14
6 74.39 70.64 75.24 70.15 35.78 18.55
12 **75.41 72.98** 73.11 70.92 30.07 15.49
Module 5
0 74.23 67.81 77.61 68.86 40.77 **32.56**
2 **75.55** 68.72 78.55 69.22 39.16 30.14
6 75.32 71.75 78.58 71.54 **42.71** 30.75
10 73.91 70.89 77.19 67.11 38.21 30.53
2. Both the classification (3) and the contrastive loss (4) provide appreciable improvements to ZeroAE by helping to clear-cut the decision boundaries. They increase the averaged macro F1-score by 8.68% and 6.31% respectively.
3. The disentanglement loss also helps to increase the averaged macro F1-score by about 2%, since it separates the label-relevant features from the irrelevant ones and so the classifier can better distinguish between different labels. Indeed, as shown in Figure 2, after enforcing disentanglement, the label-relevant features become less correlated with the label-irrelevant features.
Impact of the backbone PLMs: Now let us check whether the proposed ZeroAE framework is agnostic to the backbone PLMs. Indeed, we replace BERT and GPT-2 with ERNIE and Chinese-GPT
when tackling the customer complaints data in the previous subsection, and the results are still the best among the existing methods. Here we further replace the two BERTs and the GPT with the encoders and decoders in BART (Lewis et al.,
2019), and the results are presented in Table 5.
As expected, it can be observed that changing the backbone PLMs does not affect the superior performance of ZeroAE.
Impact of the number of fixed layers in the PLMs: We further investigate the influence of the number fixed layers in the PLMs on the performance of ZeroAE. The results are summarized in Table 6. We can find that setting the number of
Threshold Topic Situation Emotion
v0 v1 v0 v1 v0 v1
0.75 **71.56** 63.44 **70.41** 60.50 36.77 17.88
0.8 71.29 **63.71** 66.90 **62.47** 38.39 **23.04**
0.85 68.79 63.89 67.89 61.34 **38.85** 13.41
0.9 69.01 61.18 68.12 60.82 34.53 20.59
Table 8: Impact of the codebook size.
Codebook
size
Topic Situation Emotion
v0 v1 v0 v1 v0 v1
16 72.14 65.90 69.51 57.86 21.19 20.64
32 75.32 71.75 78.58 71.54 **42.71 30.75**
64 73.71 71.25 73.21 64.51 37.08 28.73
fixed layers to be 10, 0, and 6 respectively in the label-relevant encoder, the label-irrelevant encoder, and the decoder yields the highest averaged macro F1-score. Therefore, we follow this setting in our experiments.
Impact of the IUAS threshold τ : We also provide experimental results for different choices of τ , namely, τ ∈ {0.75, 0.8, 0.85, 0.9}. τ = 0.8, which is the default value used in our paper, produces the highest avarged macro F1-score.
Impact of the codebook size: Lastly, we conduct an empirical investigation to examine the impact of codebook size on the performance of our model.
We perform several experiments with varying codebook sizes and present the results in Table 8. Based on the table, we observe that the parameter value of 32, as suggested in this paper, outperforms the other two sizes. This finding suggests that a codebook size that is too small may not provide adequate diversity to the decoder in ZeroAE, while a size that is too large may be superfluous for the given datasets.
## 4 Conclusion
In this paper, we propose a PLM-based autoencoder named ZeroAE for zero-shot text classification. The autoencoder framework enables the pretrained encoder and decoder to further complement and promote each other. Furthermore, the proposed IUAS training algorithm helps ZeroAE
to deal with unlabeled data. Experiments on realworld datasets demonstrates that ZeroAE provides a much better solution to the domain adaptation
(i.e., label-partially-unseen) and task adaptation (i.e., label-fully-unseen) problems in comparison with the SOTA methods.
## 5 Limitations
There are two limitations of our work: 1) As the overall loss function (12) comprises five components, we propose to directly add these components together. Although this simple summation already yields better results than the SOTA methods, we believe that it is better to tune the weights of these components based on expert knowledge, empirical experiments, or other machine-learning techniques.
2) In ZeroAE, we use three PLMs, including two BERTs and a GPT-2. Moreover, contrastive learning typically requires a relatively large batch size in order to collect a sufficient number of negative samples and achieve satisfying performance (Chen et al., 2020). The batch size in our experiments is typically 32. As a result, training ZeroAE incurs relatively large resource cost. In practice, we find that using four NVIDIA TESLA V100 GPUs with 32G memory works well, and further reducing the resources hurts the performance.
## 6 Ethical Considerations
We consider four datasets in our experiments, including Topic, Situation, Emotion, and Complaint.
The first three are publicly accessible. The last one will be released upon publication. In particular for this dataset, 1) it does not contain any Personal Identifiable Information (PII); 2) This dataset is desensitized and encrypted; 3) Adequate data protection was carried out during the experiment to prevent the risk of data copy leakage, and the dataset was destroyed after the experiment; 4) This dataset is only used for academic research, and it does not represent any real business situation.
## References
Eduardo Aguilar, Bhalaji Nagarajan, Rupali Khantun, Marc Bolaños, and Petia Radeva. 2021. Uncertaintyaware data augmentation for food recognition. In 2020 25th International Conference on Pattern Recognition (ICPR), pages 4017–4024. IEEE.
Alexandre Alcoforado, Thomas Palmeira Ferraz, Rodrigo Gerber, Enzo Bustos, André Seidel Oliveira, Bruno Miguel Veloso, Fabio Levy Siqueira, and Anna Helena Reali Costa. 2022. Zeroberto: Leveraging zero-shot text classification by topic modeling. In *International Conference on Computational Processing* of the Portuguese Language.
Yashas Annadani and Soma Biswas. 2018. Preserving semantic relations for zero-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7603–7612.
Iz Beltagy, Arman Cohan, Robert Logan IV, Sewon Min, and Sameer Singh. 2022. Zero-and few-shot nlp with pretrained language models. In *Proceedings* of the 60th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts, pages 32–37.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901.
Qi Chen, Wei Wang, Kaizhu Huang, and Frans Coenen.
2021. Zero-shot text classification via knowledge graph embedding for social media data. *IEEE Internet of Things Journal*.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In *International conference on machine learning*, pages 1597–1607. PMLR.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.
Jiahui Gao, Renjie Pi, Yong Lin, Hang Xu, Jiacheng Ye, Zhiyong Wu, Xiaodan Liang, Zhenguo Li, and Lingpeng Kong. 2022. ZerogenΘ+: Self-guided highquality data generation in efficient zero-shot learning.
arXiv preprint arXiv:2205.12679.
Rui Gao, Xingsong Hou, Jie Qin, Jiaxin Chen, Li Liu, Fan Zhu, Zhao Zhang, and Ling Shao. 2020. Zerovae-gan: Generating unseen features for generalized and transductive zero-shot learning. *IEEE Transactions on Image Processing*, 29:3665–3680.
Ariel Gera, Alon Halfon, Eyal Shnarch, Yotam Perlitz, Liat Ein-Dor, and Noam Slonim. 2022. Zero-shot text classification with self-training. In *Conference* on Empirical Methods in Natural Language Processing.
He Huang, Changhu Wang, Philip S Yu, and ChangDong Wang. 2019. Generative dual adversarial network for generalized zero-shot learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pages 801–810.
Haozhe Ji and Minlie Huang. 2021. Discodvt: Generating long text with discourse-aware discrete variational transformer. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4208–4224.
Hyunjik Kim and Andriy Mnih. 2018. Disentangling by factorising. In International Conference on Machine Learning, pages 2649–2658. PMLR.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.
Chunyuan Li, Xiang Gao, Yuan Li, Baolin Peng, Xiujun Li, Yizhe Zhang, and Jianfeng Gao. 2020. Optimus:
Organizing sentences via pre-trained modeling of a latent space. In *Proceedings of the 2020 Conference* on Empirical Methods in Natural Language Processing (EMNLP), pages 4678–4699.
Jin Li, Xuguang Lan, Yang Liu, Le Wang, and Nanning Zheng. 2019. Compressing unknown images with product quantizer for efficient zero-shot classification. In *Proceedings of the IEEE/CVF Conference* on Computer Vision and Pattern Recognition, pages 5463–5472.
Hui Liu, Danqing Zhang, Bing Yin, and Xiaodan Zhu.
2021a. Improving pretrained models for zero-shot multi-label text classification through reinforced label hierarchy reasoning. In *Proceedings of the 2021* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1051–1062.
Tengfei Liu, Yongli Hu, Junbin Gao, Yanfeng Sun, and Baocai Yin. 2021b. Zero-shot text classification with semantically extended graph convolutional network.
In *2020 25th International Conference on Pattern* Recognition (ICPR), pages 8352–8359. IEEE.
Yang Liu, Quanxue Gao, Jin Li, Jungong Han, Ling Shao, et al. 2018. Zero shot learning via low-rank embedded semantic autoencoder. In *IJCAI*, pages 2490–2496.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Tingting Ma, Jin-Ge Yao, Chin-Yew Lin, and Tiejun Zhao. 2021. Issues with entailment-based zero-shot text classification. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 786–796.
Ashish Mishra, Shiva Krishna Reddy, Anurag Mittal, and Hema A Murthy. 2018. A generative model for zero shot learning using conditional variational autoencoders. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pages 2188–2196.
Farhad Pourpanah, Moloud Abdar, Yuxuan Luo, Xinlei Zhou, Ran Wang, Chee Peng Lim, Xi-Zhao Wang, and QM Jonathan Wu. 2022. A review of generalized zero-shot learning methods. IEEE transactions on pattern analysis and machine intelligence.
Raul Puri and Bryan Catanzaro. 2019. Zero-shot text classification with generative language models. arXiv preprint arXiv:1912.10165.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI*
blog, 1(8):9.
Shafin Rahman, Salman Khan, and Nick Barnes. 2019.
Transductive learning for zero-shot object detection.
In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pages 6082–6091.
Oscar Sainz and German Rigau. 2021.
Ask2transformers: Zero-shot domain labelling with pretrained language models. In *Proceedings of* the 11th Global Wordnet Conference, pages 44–52.
Timo Schick and Hinrich Schütze. 2021. Generating datasets with pretrained language models. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 6943–
6951.
Edgar Schonfeld, Sayna Ebrahimi, Samarth Sinha, Trevor Darrell, and Zeynep Akata. 2019. Generalized zero-and few-shot learning via aligned variational autoencoders. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*,
pages 8247–8255.
Jiaming Shen, Wenda Qiu, Yu Meng, Jingbo Shang, Xiang Ren, and Jiawei Han. 2021. Taxoclass: Hierarchical multi-label text classification using only class names. In *Proceedings of the 2021 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4239–4249.
Petru Soviany, Radu Tudor Ionescu, Paolo Rota, and Nicu Sebe. 2022. Curriculum learning: A survey.
International Journal of Computer Vision, pages 1–
40.
Aaron Van Den Oord, Oriol Vinyals, et al. 2017. Neural discrete representation learning. Advances in neural information processing systems, 30.
Congcong Wang, Paul Nulty, and David Lillis. 2022.
Using pseudo-labelled data for zero-shot text classification. In International Conference on Applications of Natural Language to Information Systems, pages 35–46. Springer.
Qian Wang and Toby Breckon. 2020. Unsupervised domain adaptation via structured prediction based selective pseudo-labeling. In *Proceedings of the AAAI conference on artificial intelligence*, volume 34, pages 6243–6250.
Xuesong Wang, Chen Chen, Yuhu Cheng, and Z Jane Wang. 2016. Zero-shot image classification based on deep feature extraction. *IEEE Transactions on* Cognitive and Developmental Systems, 10(2):432–
444.
Zihan Wang, Dheeraj Mekala, and Jingbo Shang. 2021a.
X-class: Text classification with extremely weak supervision. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
Zirui Wang, Adams Wei Yu, Orhan Firat, and Yuan Cao.
2021b. Towards zero-label language learning. *arXiv* preprint arXiv:2109.09193.
Jason Wei and Kai Zou. 2019. Eda: Easy data augmentation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6382–6388.
Adina Williams, Nikita Nangia, and Samuel R Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In *2018* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2018, pages 1112–1122. Association for Computational Linguistics (ACL).
Yongqin Xian, Saurabh Sharma, Bernt Schiele, and Zeynep Akata. 2019. f-vaegan-d2: A feature generating framework for any-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10275–10284.
Peng Xu, Jackie Chi Kit Cheung, and Yanshuai Cao.
2020. On variational learning of controllable representations for text without supervision. In *International Conference on Machine Learning*, pages 10534–10543. PMLR.
Jiacheng Ye, Jiahui Gao, Qintong Li, Hang Xu, Jiangtao Feng, Zhiyong Wu, Tao Yu, and Lingpeng Kong.
2022. Zerogen: Efficient zero-shot learning via dataset generation. *arXiv preprint arXiv:2202.07922*.
Zhiquan Ye, Yuxia Geng, Jiaoyan Chen, Jingmin Chen, Xiaoxiao Xu, Suhang Zheng, Feng Wang, Jun Zhang, and Huajun Chen. 2020. Zero-shot text classification via reinforced self-training. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3014–3024.
Wenpeng Yin, Jamaal Hay, and Dan Roth. 2019. Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 3914–3923.
Jingqing Zhang, Piyawat Lertvittayakumjorn, and Yike Guo. 2019. Integrating semantic knowledge to tackle zero-shot text classification. In Proceedings of NAACL-HLT, pages 1031–1040.
Ling Zhang, Xiaosong Wang, Dong Yang, Thomas Sanford, Stephanie Harmon, Baris Turkbey, Bradford J
Wood, Holger Roth, Andriy Myronenko, Daguang Xu, et al. 2020. Generalizing deep learning for medical image segmentation to unseen domains via deep stacked transformation. *IEEE transactions on medical imaging*, 39(7):2531–2540.
Ruiqi Zhong, Kristy Lee, Zheng Zhang, and Dan Klein.
2021. Adapting language models for zero-shot learning by meta-tuning on dataset and prompt collections.
In *Findings of the Association for Computational* Linguistics: EMNLP 2021, pages 2856–2878.
Pengkai Zhu, Hanxiao Wang, and Venkatesh Saligrama.
2020. Don't even look once: Synthesizing features for zero-shot detection. In *Proceedings of the* IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11693–11702.
## A Related Works
In this section, we first provided a more detailed review of the literature on zero-shot text classification. Next, since autoencoders have already been used for zero-shot image classification, we further review autoencoder-based approaches in this field.
## A.1 Zero-Shot Text Classification
We hereby review the aforementioned two approaches for PLM-based zero-shot text classification: encoder-based and decoder-based models.
Encoder-based Methods (a.k.a discriminative or embedding-based methods) typically learn a projection to associate the texts and labels via BERT
or RoBERTa (Liu et al., 2019). Some attempts have been made to formulate the text-label pair as a text entailment (TE) representation (Yin et al., 2019; Sainz and Rigau, 2021; Alcoforado et al., 2022), and the [CLS] token is then used to evaluate their similarity. Alternatively, the relation between the text-label pair can also be formulated as questionanswering (QA) tasks (Puri and Catanzaro, 2019; Zhong et al., 2021). The corresponding classification results depend on the answer "yes/no" to the question of whether the text belongs to a certain category. These methods take advantage of the semantic correlation between texts and labels implied by BERT (Yin et al., 2019), but they may fail to adapt to different domains and tasks without labeling information (Ma et al., 2021). To alleviate this difficulty, one may introduce additional information such as knowledge graphs (Liu et al., 2021a,b) and label semantic information (Zhang et al., 2019).
Another problem with these methods is that they ignore the unlabeled data which may help to transfer knowledge from seen domains or targets to unseen ones. To take unlabeled data into account, self-training is typically employed (Ye et al., 2020; Chen et al., 2021; Gera et al., 2022). These methods iteratively use the BERT-based classifier to pseudo-label the unlabeled data and further use the pseudo-labels with high confidence to train the classifier. These methods can even be extended to tackle the label-fully-unseen scenario (Wang et al.,
2021a; Shen et al., 2021; Wang et al., 2022), resulting in a general-purpose zero-shot learner for novel tasks. Unfortunately, the issue that impedes the use of self-training is error accumulation. The mistakenly pseudo-labeled data with high confidence could continuously bias the classifier and lead to inaccurate estimates. In this work, we instead propose an indirect uncertainty-aware sampling (IUAS) method to counteract this problem.
Decoder-based Methods (a.k.a generative-based methods) address the zero-shot text classification problem from another perspective. By utilizing the data synthesis power of GPT-2 (Radford et al.,
2019), they either simulate texts for labels corresponding to unseen classes and tasks (Ye et al.,
2022) or generate labels for unlabeled data (Schick and Schütze, 2021), and then train a classifier based on the generated data. However, the potentially low quality of the generated data may harm the classifier. This problem can be alleviated by using a larger PLM such as GPT-3 (Brown et al.,
2020; Wang et al., 2021b) or conducting generated data selection with a noise-robust framework (Gao et al., 2022). Nonetheless, in all these approaches, the PLMs cannot be fine-tuned to be domain or task-specific. In this paper, we propose an autoencoder framework to complement encoder-based and decoder-based models, thus rendering BERT
and GPT-2 to self-adapt to the domain or task at hand.
## A.2 Autoencoder-Based Zero-Shot Image Classification
Zero-shot learning based on autoencoder has seen success in the field of image classification, since it provides a framework to train the encoder and the decoder simultaneously for automatic domain adaption, while being able to tackle unlabeled data (Pourpanah et al., 2022). Thus, we provide a brief review here.
There are broadly three strategies for autoencoder-driven zero-shot image classification. The first one seeks to learn a better encoder for the sake of classification with the support of the decoder (Wang et al., 2016; Annadani and Biswas, 2018; Liu et al., 2018; Li et al.,
2019). As pointed out in (Annadani and Biswas, 2018), the introduction of the decoder and the corresponding reconstruction loss improves the modeling capability of the encoder and the zero-shot recognition performance. On the other hand, the second strategy concentrates more on exploiting the decoder to generate samples for the unseen classes given the class attributes (Mishra et al., 2018; Xian et al., 2019; Huang et al., 2019; Gao et al., 2020; Zhu et al., 2020). Conditional variational autoencoders (CVAE) are typically used: the role of the encoder is to adapt the latent space in CVAE to the domain at hand, and therefore, facilitates the decoder to synthesize data for this domain. Different from the above two strategies, the third one (Schonfeld et al., 2019)
first constructs two VAEs respectively for the image features and the class attributes and then aligns the two latent spaces via a cross-alignment loss. As such, the image and semantic features are projected into the same latent space that can be further utilized for classifying samples in the unseen classes.
Unfortunately, the key bottleneck with the abovementioned methods is that the learned latent space, which is used for classification, is often corrupted by the label-irrelevant information that adversely affects the classification performance. One remedy to this problem is to separate the label-relevant from the label-irrelevant information by promoting disentanglement between them in the latent space (Schonfeld et al., 2019; Xian et al., 2019; Gao et al., 2020). In our work, we disentangle the two types of information via a discriminator in a similar fashion to (Schonfeld et al., 2019), due to its advantageous performance as shown in (Kim and Mnih, 2018).
## B Datasets
We demonstrate the advantages of ZeroAE on four real-world datasets, including "Topic", "Situation",
"Emotion" and "Complaint". The first three datasets are often used for benchmarking different zero-shot text classification approaches (Yin et al., 2019),
and the last one aims to find the response team that can deal with the customer complaints reported to Ant Group based on the content of the complaints.
Note that in the label-partially-unseen case, the two different versions of the first three datasets are provided in (Yin et al., 2019) with non-overlapping labels, in order to prevent the models from overfitting to some labels. The detailed statistics regarding how the datasets are split into training, validation, and testing sets are summarized in Table 1. More information on the four datasets is provided below:
1. **Topic Categorization**: The dataset contains Yahoo news articles with 10 topics, including "Society & Cultur","Health", "Computers & Internet", "Business & Finance", "Family & Relationships", "Science & Mathematics", "Education & Reference", "Sports", "Entertainment &
Music", and "Politics & Government". The objective is to predict the topic given the news.
The version v0 selects the first five classes as the seen classes, while the version v1 selects the other five classes.
2. **Situation Detection**: The dataset aims to find the type of an event, including the need situations (e.g., the need for water or medical aid) and the issue situations (e.g., crime violence), given the corresponding news. There are 12 classes in total, that is, "Regime change",
"Crime violence", "Medical assistance", "Water supply", "Search/rescue", "Infrastructure",
"Shelter", "Utilities, energy, or sanitation",
"Evacuation", "Food supply", "Terrisms", and
"None". The version v0 chooses the first six classes as the seen classes, while the version v1 chooses the other five classes excluding the class
"None". We further follow the settings in (Ye et al., 2020) to remove the texts with multiple labels in our experiments.
3. **Emotion Detection**: The task here is to detect the emotion of the posters from the texts such as tweets, fairy tales, and emotional events. This dataset involves nine types of emotions, that is,
"Sadness", "Anger", "Fear", "Shame", "Love",
"Joy", "Disgust", "Surprise", and "Guilt". The versions v0 and v1 respectively treat the first five and the last four classes as the seen classes. Note that this task is more difficult than the above two tasks, since different emotions are often correlated with each other (Gera et al., 2022). For example, "Guilt" and "Shame" are synonyms, but represent two distinct classes here.
4. **Customer Complaint Triage**: The objective
| Dataset | Prompt |
|-----------|-------------------------------------|
| Topic | The news with _ topic is: |
| Situation | The news with _ situation is: |
| Emotion | The news with _ emotion is: |
| Complaint | The customer complaints about _ is: |
here is to find the response team in Ant Group that can deal with the customer complaints given the corresponding texts. There are 241 classes in total, but only 12 of them are seen classes in practice. The number of samples is also very small in this dataset, since the services provided by this company are relatively stable and there are few customer complaints. As a result, this task is very challenging, due to the data scarcity and the large number of unseen classes.
## C Ablation Study
In this section, we elaborate on the experiment settings in the first ablation study:
- −**IUAS**: In this experiment, the proposed IUAS
approach is not used for training ZeroAE. In other words, the unlabeled data are only used in the text reconstruction flow, and no generated data are used to train the label-relevant encoder and the classifier.
- −IUAS+**Pseudo labeling**: After ablating IUAS,
we instead employ the pseudo labeling approach to tackle the unlabeled data. Specifically, the pseudo labels are retained to train the classifier when the probability that the unlabled text belongs to a class is larger than 0.8.
- −**Classification Loss**: In this experiment, we remove the classification loss (3) when training ZeroAE, and treat the labeled data as unlabeled data. Note that this setting is called label-fullyunseen TZSL in this work.
- −**Contrastive Loss**: The contrastive loss (4) is removed when training ZeroAE.
- −**Disentanglement Loss**: Both the discriminator loss (7) and disentanglement loss (8) are removed during the training process of ZeroAE. As a result, the two latent spaces are not guaranteed to be disentangled.
| Table 10: Summary of notations. | | |
|-----------------------------------|-----------------|-----------------------------------------------------------------------------------------------------------|
| Symbol | Type (Size) | Meaning |
| C | Constant | The overall number of classes (both seen and unseen) |
| K | Constant | The number of codewords in the codebook |
| D | Constant | The dimension of the encoder output |
| W | Constant | The size of the vocabulary for the GPT-2 |
| c | Constant | The index of the class with the highest probability score for one input text |
| τ | Constant | The threshold for data selection in IUAS |
| β | Constant | The weight of the second term in the VQ loss |
| S | Dataset | Seen dataset |
| D U | Dataset | Unseen dataset |
| D L | Dataset | Labeled dataset |
| D G | Dataset | Generated dataset |
| D EncR | Module | The label-relevant encoder (i.e., BERT with the [CLS] token as the output) |
| EncI | Module | The label-irrelevant encoder (i.e., BERT with the mean pooling of the last layer as the output) |
| Cls | Module | The linear classifier |
| Dec | Module | The decoder (i.e., prompt-based GTP-2) |
| Disc | Module | The linear discriminator |
| Θ | Parameter | The parameters of the discriminator |
| Ψ | Parameter | The parameters of all modules in ZeroAE except the discriminator |
| H | Function | Cross entropy loss function |
| sg | Function | Stop gradient operation |
| f | Function | The function that converts a label to a one-hot vector |
| g | Function | The prompting function |
| q | Function | The density function of the latent variables |
| xi | Text | The i-th input text of the dataset |
| i | Text | The augmented text by applying EDA to xi |
| x ′ xˆi | Text | The reconstructed or generated text |
| xij | Text | The j-th word in text xi |
| xˆij | Tensor (W) | The vector of the probabilities that the word j in text xˆi takes the values in a predefined vocabulary |
| yi | Text | The true label of the i-th input text |
| yˆi | Text | The predicted label for i-th input text |
| y1:C | Set | The label names for all C classes (both seen and unseen) |
| yj | Text | The label name for the j-th class |
| i | Tensor (C) | The classification probability vector for the input text xi |
| sR ij | Constant | The classification probability score of the label being yj for input text xi |
| sR e | Tensor (K × D) | The VQ codebook |
| ek | Tensor (D) | The k-th codeword in the codebook |
| ij | Tensor (D) | The label-relevant embedding after packing the input text xi and the label name yj under the TE framework |
| R z z R ′ | Tensor (D) | The label-relevant embedding corresponding to the label yc with the highest classification score |
| ic z i | Tensor (C × D) | The label-relevant embedding for all C classes |
| R z i | Tensor (D) | The label-irrelevant embedding for the input text xi |
| I + | Tensor (C + 2D) | The positive samples for the discriminator |
| z z − | Tensor (C + 2D) | The negative samples for the discriminator |
## Algorithm 1 Iuas Based Training Procedure For Zeroae
Require: Texts and labels for the seen classes D
S = {(x S
i, y S
i )} if available, texts for the unseen classes D
U = {(x U
i )}, label names for all C classes y1:C = {y1, *· · ·* , yC }, and the IUAS threshold τ ;
1: Initialize the generated data as an empty set D
G = {};
2: **repeat**
3: if D
G is not empty **then**
4: Randomly pick CK samples from D
G and denote the sample set as D
G′;
5: **else**
6: D
G′= {};
7: **end if**
8: The labeled dataset can be computed as D
L = D
S ∪ D
G′;
9: for (xi, yi) in D
Ldo ▷ *Text Reconstruction Flow for Labeled Data* 10: Augment xi via EDA to obtain x
′i 11: for yj in y1:C do 12: Pack xi and yj together following the TE framework "[CLS] xi [SEP] hypothesis of yj [SEP]";
13: Encode the above packed text using the label-relevant encoder to obtain z R
ij following Eq. (1);
14: **end for**
15: Calculate the classification loss following Eq. (2)-(3);
16: Pack x
′i and yˆi following the TE framework as the positive sample to obtain z R
ic
′;
17: Calculate the contrastive loss following Eq. (4);
18: Obtain the label-irrelevant latent variable z I
ifollowing Eq. (5) and calculate the VQ loss following Eq. (6);
19: Calculate the disentanglement loss following Eq. (8);
20: Reconstruct the text xˆi following Eq. (9) and calculate the reconstruction loss following Eq. (10);
21: **end for**
22: for (xi) in D
Udo ▷ *Text Reconstruction Flow for Unlabeled Data* 23: Augment xi via EDA to obtain x
′i; 24: for yj in y1:C do 25: Pack xi and yj together following the TE framework and obtain z R
ij following Eq. (1);
26: **end for**
27: Calculate the classification score following Eq. (2);
28: Pack x
′i and yˆi following the TE framework as the positive sample to obtain z R
ic
′;
29: Calculate the contrastive loss following Eq. (4);
30: Obtain the label-irrelevant latent variable z I
ifollowing Eq. (5) and calculate the VQ loss following Eq. (6);
31: Calculate discriminator loss following Eq. (8);
32: Reconstruct the text xˆi following Eq. (9) and calculate the reconstruction loss following Eq. (10);
33: **end for**
34: Fix the discriminator and update the remaining parts of ZeroAE by minimizing Eq. (12) using gradient descent; 35: for yj in y1:C do ▷ *Train Discriminator* 36: Randomly pick positive sample and negative sample as in §2.3, and calculate the discriminator loss following Eq. (7);
37: **end for**
38: Update the discriminator only by minimizing Eq. (7) using gradient descent; 39: for yj in y1:C do ▷ *Text Generation in Label Reconstruction Flow* 40: for ek in e do 41: Generate data using the decoder xˆ = Dec(h(yj ), ek);
42: Put (xˆ, yj ) in the generated dataset D
G;
43: **end for**
44: **end for**
45: for (xi) in D
Udo ▷ *Data Selection in Indirect Uncertainty-Aware Sampling (IUAS)*
46: for yj in y1:C do 47: Pack xi and yj together following the TE framework and obtain z R
ij following Eq. (1);
48: **end for**
49: Calculate the classification score following Eq. (2);
50: if max s R
i > τ **then**
51: Remove xi from D
U;
52: **end if**
53: **end for** 54: **until** the maximum number of epochs is reached or early stop criteria is met.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 5.
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 2, 3.
✓ B1. Did you cite the creators of artifacts you used?
Section 1, 2, 3.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 7.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
In table 1.
## C ✓ **Did You Run Computational Experiments?** Section 4.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 2 and Section 5.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4, Appendix C and Table 6 C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |